Sample records for single exponential distribution

  1. Multiserver Queueing Model subject to Single Exponential Vacation

    NASA Astrophysics Data System (ADS)

    Vijayashree, K. V.; Janani, B.

    2018-04-01

    A multi-server queueing model subject to single exponential vacation is considered. Arrivals join the queue according to a Poisson process and services take place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return after a fixed interval of time. The servers then start providing service if there are waiting customers; otherwise they wait for new arrivals before beginning a busy period. The vacation times are also assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the server are obtained explicitly. Numerical illustrations are also provided to visualize the effect of the various parameters.

  2. Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters

    PubMed Central

    Landowne, David; Yuan, Bin; Magleby, Karl L.

    2013-01-01

    Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
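
    The iterative scheme described above lends itself to a compact implementation. The following is a minimal sketch (not the authors' published code), assuming SciPy is available and using hypothetical helper names: it fixes a dense grid of log-spaced time constants, maximizes the likelihood over the component areas only, and then alternately removes negligible components and merges closely spaced adjacent ones, refitting after each step.

      import numpy as np
      from scipy.optimize import minimize

      def fit_exp_sum(dwells, n_init=50, area_tol=1e-3, merge_ratio=2.0):
          """Maximum-likelihood sum-of-exponentials fit without user-supplied starting parameters."""
          dwells = np.asarray(dwells, dtype=float)
          # Start with many fixed, logarithmically spaced time constants so none are missed.
          taus = np.logspace(np.log10(dwells.min()), np.log10(dwells.max()), n_init)

          def fit_areas(taus):
              # Maximize the likelihood over areas only; a softmax keeps them positive and summing to 1.
              def nll(z):
                  a = np.exp(z - z.max())
                  a /= a.sum()
                  dens = (a / taus * np.exp(-dwells[:, None] / taus)).sum(axis=1)
                  return -np.sum(np.log(dens + 1e-300))
              z = minimize(nll, np.zeros(len(taus)), method="L-BFGS-B").x
              a = np.exp(z - z.max())
              return a / a.sum()

          while True:
              areas = fit_areas(taus)
              if (areas <= area_tol).any():               # remove exponentials with negligible area
                  taus = taus[areas > area_tol]
                  continue
              ratios = taus[1:] / taus[:-1]
              if ratios.size and (ratios < merge_ratio).any():   # combine closely spaced adjacent components
                  i = int(np.argmin(ratios))
                  tau_new = (areas[i] * taus[i] + areas[i + 1] * taus[i + 1]) / (areas[i] + areas[i + 1])
                  taus = np.concatenate([taus[:i], [tau_new], taus[i + 2:]])
                  continue
              return taus, areas                          # only significant, well-separated components remain

      # Example with two hypothetical components (time constants 2 ms and 50 ms)
      rng = np.random.default_rng(0)
      data = np.concatenate([rng.exponential(2.0, 3000), rng.exponential(50.0, 1000)])
      print(fit_exp_sum(data))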

  3. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

    GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) ...accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a...to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non

  4. Universality in stochastic exponential growth.

    PubMed

    Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R

    2014-07-11

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.

  5. Universality in Stochastic Exponential Growth

    NASA Astrophysics Data System (ADS)

    Iyer-Biswas, Srividya; Crooks, Gavin E.; Scherer, Norbert F.; Dinner, Aaron R.

    2014-07-01

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.

  6. Characterization of x-ray framing cameras for the National Ignition Facility using single photon pulse height analysis.

    PubMed

    Holder, J P; Benedetti, L R; Bradley, D K

    2016-11-01

    Single hit pulse height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard-DC calibration methods follow the expected exponential distribution at the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.

  7. Linear prediction and single-channel recording.

    PubMed

    Carter, A A; Oswald, R E

    1995-08-01

    The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.

  8. Multi-step rhodopsin inactivation schemes can account for the size variability of single photon responses in Limulus ventral photoreceptors

    PubMed Central

    1994-01-01

    Limulus ventral photoreceptors generate highly variable responses to the absorption of single photons. We have obtained data on the size distribution of these responses, derived the distribution predicted from simple transduction cascade models and compared the theory and data. In the simplest of models, the active state of the visual pigment (defined by its ability to activate G protein) is turned off in a single reaction. The output of such a cascade is predicted to be highly variable, largely because of stochastic variation in the number of G proteins activated. The exact distribution predicted is exponential, but we find that an exponential does not adequately account for the data. The data agree much better with the predictions of a cascade model in which the active state of the visual pigment is turned off by a multi-step process. PMID:8057085

  9. Weighted Scaling in Non-growth Random Networks

    NASA Astrophysics Data System (ADS)

    Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li

    2012-09-01

    We propose a weighted model to explain the self-organizing formation of the scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weight of all single-edges within it and the strength of a vertex as the sum of weights of the multiple-edges attached to it. The network evolves according to a vertex-strength preferential selection mechanism. During the evolution process, the network keeps its total number of vertices and its total number of single-edges constant. We show analytically and numerically that a network will form steady scale-free distributions with our model. The results show that a weighted non-growth random network can evolve into a scale-free state. It is interesting that the network also obtains the character of an exponential edge-weight distribution. Namely, coexistence of a scale-free distribution and an exponential distribution emerges.

  10. Scaling in the distribution of intertrade durations of Chinese stocks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing

    2008-10-01

    The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. The distribution of intertrade durations is Weibull followed by a power-law tail with an asymptotic tail exponent close to 3.

  11. Anomalous yet Brownian.

    PubMed

    Wang, Bo; Anthony, Stephen M; Bae, Sung Chul; Granick, Steve

    2009-09-08

    We describe experiments using single-particle tracking in which mean-square displacement is simply proportional to time (Fickian), yet the distribution of displacement probability is not Gaussian as should be expected of a classical random walk but, instead, is decidedly exponential for large displacements, the decay length of the exponential being proportional to the square root of time. The first example is when colloidal beads diffuse along linear phospholipid bilayer tubes whose radius is the same as that of the beads. The second is when beads diffuse through entangled F-actin networks, bead radius being less than one-fifth of the actin network mesh size. We explore the relevance to dynamic heterogeneity in trajectory space, which has been extensively discussed regarding glassy systems. Data for the second system might suggest activated diffusion between pores in the entangled F-actin networks, in the same spirit as activated diffusion and exponential tails observed in glassy systems. But the first system shows exceptionally rapid diffusion, nearly as rapid as for identical colloids in free suspension, yet still displaying an exponential probability distribution as in the second system. Thus, although the exponential tail is reminiscent of glassy systems, in fact, these dynamics are exceptionally rapid. We also compare with particle trajectories that are at first subdiffusive but Fickian at the longest measurement times, finding that displacement probability distributions fall onto the same master curve in both regimes. The need is emphasized for experiments, theory, and computer simulation to allow definitive interpretation of this simple and clean exponential probability distribution.

  12. The size distribution of Pacific Seamounts

    NASA Astrophysics Data System (ADS)

    Smith, Deborah K.; Jordan, Thomas H.

    1987-11-01

    An analysis of wide-beam, Sea Beam and map-count data in the eastern and southern Pacific confirms the hypothesis that the average number of "ordinary" seamounts with summit heights h ≥ H can be approximated by the exponential frequency-size distribution ν(H) = ν₀ exp(−βH). The exponential model, characterized by the single scale parameter β⁻¹, is found to be superior to a power-law (self-similar) model. The exponential model provides a good first-order description of the summit-height distribution over a very broad spectrum of seamount sizes, from small cones (h < 300 m) to tall composite volcanoes (h > 3500 m). The distribution parameters obtained from 157,000 km of wide-beam profiles in the eastern and southern Pacific Ocean are ν₀ = (5.4 ± 0.65) × 10⁻⁹ m⁻² and β = (3.5 ± 0.21) × 10⁻³ m⁻¹, yielding an average of 5400 ± 650 seamounts per million square kilometers, of which 170 ± 17 are greater than one kilometer in height. The exponential distribution provides a reference for investigating the populations of not-so-ordinary seamounts, such as those on hotspot swells and near fracture zones, and seamounts in other ocean basins. If we assume that volcano height is determined by a hydraulic head proportional to the source depth of the magma column, then our observations imply an approximately exponential distribution of source depths. For reasonable values of magma and crustal densities, a volcano with the characteristic height β⁻¹ = 285 m has an apparent source depth on the order of the crustal thickness.
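
    As a quick plausibility check on the quoted parameters (a sketch using only numbers from the abstract, not the original analysis), the cumulative form ν(H) = ν₀ exp(−βH) can be evaluated per million square kilometers:

      import math

      nu0 = 5.4e-9   # m^-2, areal density of seamounts of any summit height
      beta = 3.5e-3  # m^-1, so the characteristic height 1/beta is about 286 m

      area = 1e12    # one million square kilometers expressed in m^2

      def expected_count(H_m):
          """Expected number of seamounts with summit height >= H_m per 10^6 km^2."""
          return nu0 * math.exp(-beta * H_m) * area

      print(expected_count(0))     # 5400, matching the quoted 5400 +/- 650
      print(expected_count(1000))  # ~163, consistent with the quoted 170 +/- 17 taller than 1 km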

  13. Cell responses to single pheromone molecules may reflect the activation kinetics of olfactory receptor molecules.

    PubMed

    Minor, A V; Kaissling, K-E

    2003-03-01

    Olfactory receptor cells of the silkmoth Bombyx mori respond to single pheromone molecules with "elementary" electrical events that appear as discrete "bumps" a few milliseconds in duration, or bursts of bumps. As revealed by simulation, one bump may result from a series of random openings of one or several ion channels, producing an average inward membrane current of 1.5 pA. The distributions of durations of bumps and of gaps between bumps in a burst can be fitted by single exponentials with time constants of 10.2 ms and 40.5 ms, respectively. The distribution of burst durations is a sum of two exponentials; the number of bumps per burst obeyed a geometric distribution (mean 3.2 bumps per burst). Accordingly the elementary events could reflect transitions among three states of the pheromone receptor molecule: the vacant receptor (state 1), the pheromone-receptor complex (state 2), and the activated complex (state 3). The calculated rate constants of the transitions between states are k₂₁ = 7.7 s⁻¹, k₂₃ = 16.8 s⁻¹, and k₃₂ = 98 s⁻¹.
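
    The quoted numbers are internally consistent with the three-state scheme: the mean bump duration is 1/k₃₂, the mean gap within a burst is 1/(k₂₁ + k₂₃), and the number of bumps per burst is geometric with mean (k₂₁ + k₂₃)/k₂₁. A short check of this arithmetic (a sketch, not the authors' analysis code):

      k21, k23, k32 = 7.7, 16.8, 98.0      # s^-1, transition rates quoted in the abstract

      bump_ms = 1e3 / k32                  # mean bump (activated-complex) duration
      gap_ms = 1e3 / (k21 + k23)           # mean gap (complex-state) duration within a burst
      bumps_per_burst = (k21 + k23) / k21  # mean of the geometric number of bumps per burst

      print(f"bump duration   ~ {bump_ms:.1f} ms (abstract: 10.2 ms)")
      print(f"gap duration    ~ {gap_ms:.1f} ms (abstract: 40.5 ms)")
      print(f"bumps per burst ~ {bumps_per_burst:.1f} (abstract: 3.2)")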

  14. How extreme are extremes?

    NASA Astrophysics Data System (ADS)

    Cucchi, Marco; Petitta, Marcello; Calmanti, Sandro

    2016-04-01

    High temperatures have an impact on the energy balance of any living organism and on the operational capabilities of critical infrastructures. Heat-wave indicators have been mainly developed with the aim of capturing the potential impacts on specific sectors (agriculture, health, wildfires, transport, power generation and distribution). However, the ability to capture the occurrence of extreme temperature events is an essential property of a multi-hazard extreme climate indicator. The aim of this study is to develop a standardized heat-wave indicator that can be combined with other indices in order to describe multiple hazards in a single indicator. The proposed approach can be used to obtain a quantified indicator of the strength of a given extreme. As a matter of fact, extremes are usually distributed according to exponential or exponential-exponential functions, and it is difficult to quickly assess how strong an extreme event was considering only its magnitude. The proposed approach simplifies the quantitative and qualitative communication of extreme magnitude.

  15. Diversity of individual mobility patterns and emergence of aggregated scaling laws

    PubMed Central

    Yan, Xiao-Yong; Han, Xiao-Pu; Wang, Bing-Hong; Zhou, Tao

    2013-01-01

    Uncovering human mobility patterns is of fundamental importance to the understanding of epidemic spreading, urban transportation and other socioeconomic dynamics embodying spatiality and human travel. According to the direct travel diaries of volunteers, we show the absence of scaling properties in the displacement distribution at the individual level, while the aggregated displacement distribution follows a power law with an exponential cutoff. Given the constraint on total travelling cost, this aggregated scaling law can be analytically predicted by the mixture nature of human travel under the principle of maximum entropy. A direct corollary of this theory is that the displacement distribution of a single mode of transportation should follow an exponential law, which is also supported by evidence in known data. We thus conclude that the travelling cost shapes the displacement distribution at the aggregated level. PMID:24045416

  16. Bayesian view of single-qubit clocks, and an energy versus accuracy tradeoff

    NASA Astrophysics Data System (ADS)

    Gopalkrishnan, Manoj; Kandula, Varshith; Sriram, Praveen; Deshpande, Abhishek; Muralidharan, Bhaskaran

    2017-09-01

    We bring a Bayesian approach to the analysis of clocks. Using exponential distributions as priors for clocks, we analyze how well one can keep time with a single qubit freely precessing under a magnetic field. We find that, at least with a single qubit, quantum mechanics does not allow exact timekeeping, in contrast to classical mechanics, which does. We find the design of the single-qubit clock that leads to maximum accuracy. Further, we find an energy versus accuracy tradeoff—the energy cost is at least k_B T times the improvement in accuracy as measured by the entropy reduction in going from the prior distribution to the posterior distribution. We propose a physical realization of the single-qubit clock using charge transport across a capacitively coupled quantum dot.

  17. Difference in Dwarf Galaxy Surface Brightness Profiles as a Function of Environment

    NASA Astrophysics Data System (ADS)

    Lee, Youngdae; Park, Hong Soo; Kim, Sang Chul; Moon, Dae-Sik; Lee, Jae-Joon; Kim, Dong-Jin; Cha, Sang-Mok

    2018-05-01

    We investigate surface brightness profiles (SBPs) of dwarf galaxies in field, group, and cluster environments. With deep BVI images from the Korea Microlensing Telescope Network Supernova Program, SBPs of 38 dwarfs in the NGC 2784 group are fitted by a single-exponential or double-exponential model. We find that 53% of the dwarfs are fitted with single-exponential profiles (“Type I”), while 47% of the dwarfs show double-exponential profiles; 37% of all dwarfs have smaller sizes for the outer part than the inner part (“Type II”), while 10% have a larger outer than inner part (“Type III”). We compare these results with those in the field and in the Virgo cluster, where the SBP types of 102 field dwarfs are compiled from a previous study and the SBP types of 375 cluster dwarfs are measured using SDSS r-band images. As a result, the distributions of SBP types are different in the three environments. Common SBP types for the field, the NGC 2784 group, and the Virgo cluster are Type II, Type I and II, and Type I and III profiles, respectively. After comparing the sizes of dwarfs in different environments, we suggest that since the sizes of some dwarfs are changed due to environmental effects, SBP types are capable of being transformed and the distributions of SBP types in the three environments are different. We discuss possible environmental mechanisms for the transformation of SBP types. Based on data collected at KMTNet Telescopes and SDSS.

  18. Single-channel activations and concentration jumps: comparison of recombinant NR1a/NR2A and NR1a/NR2D NMDA receptors

    PubMed Central

    Wyllie, David J A; Béhé, Philippe; Colquhoun, David

    1998-01-01

    We have expressed recombinant NR1a/NR2A and NR1a/NR2D N-methyl-D-aspartate (NMDA) receptor channels in Xenopus oocytes and made recordings of single-channel and macroscopic currents in outside-out membrane patches. For each receptor type we measured (a) the individual single-channel activations evoked by low glutamate concentrations in steady-state recordings, and (b) the macroscopic responses elicited by brief concentration jumps with high agonist concentrations, and we explore the relationship between these two sorts of observation. Low concentration (5–100 nM) steady-state recordings of NR1a/NR2A and NR1a/NR2D single-channel activity generated shut-time distributions that were best fitted with a mixture of five and six exponential components, respectively. Individual activations of either receptor type were resolved as bursts of openings, which we refer to as ‘super-clusters’. During a single activation, NR1a/NR2A receptors were open for 36 % of the time, but NR1a/NR2D receptors were open for only 4 % of the time. For both, distributions of super-cluster durations were best fitted with a mixture of six exponential components. Their overall mean durations were 35.8 and 1602 ms, respectively. Steady-state super-clusters were aligned on their first openings and averaged. The average was well fitted by a sum of exponentials with time constants taken from fits to super-cluster length distributions. It is shown that this is what would be expected for a channel that shows simple Markovian behaviour. The current through NR1a/NR2A channels following a concentration jump from zero to 1 mM glutamate for 1 ms was well fitted by three exponential components with time constants of 13 ms (rising phase), 70 ms and 350 ms (decaying phase). Similar concentration jumps on NR1a/NR2D channels were well fitted by two exponentials with means of 45 ms (rising phase) and 4408 ms (decaying phase) components. During prolonged exposure to glutamate, NR1a/NR2A channels desensitized with a time constant of 649 ms, while NR1a/NR2D channels exhibited no apparent desensitization. We show that under certain conditions, the time constants for the macroscopic jump response should be the same as those for the distribution of super-cluster lengths, though the resolution of the latter is so much greater that it cannot be expected that all the components will be resolvable in a macroscopic current. Good agreement was found for jumps on NR1a/NR2D receptors, and for some jump experiments on NR1a/NR2A. However, the latter were rather variable and some were slower than predicted. Slow decays were associated with patches that had large currents. PMID:9625862

  19. Modeling of magnitude distributions by the generalized truncated exponential distribution

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-01-01

    The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.

  20. Estimating Age Distributions of Base Flow in Watersheds Underlain by Single and Dual Porosity Formations Using Groundwater Transport Simulation and Weighted Weibull Functions

    NASA Astrophysics Data System (ADS)

    Sanford, W. E.

    2015-12-01

    Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (³H, ³He, CFCs and SF₆) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.
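
    A minimal sketch of the kind of one- versus two-parameter comparison described above, using SciPy's parameterizations and synthetic ages rather than the simulated watersheds; the exponential model is the special case of the Weibull with shape parameter 1, and the second Weibull parameter controls the slope of the age distribution.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      ages = rng.weibull(1.6, 5000) * 20.0     # hypothetical base-flow ages in years

      # One-parameter exponential fit (location fixed at zero; MLE scale is the sample mean)
      ll_exp = stats.expon(scale=ages.mean()).logpdf(ages).sum()

      # Two-parameter Weibull fit (location fixed at zero)
      shape, loc, scale = stats.weibull_min.fit(ages, floc=0)
      ll_wei = stats.weibull_min(shape, loc, scale).logpdf(ages).sum()

      print(f"exponential log-likelihood: {ll_exp:.1f}")
      print(f"Weibull     log-likelihood: {ll_wei:.1f} (shape ~ {shape:.2f})")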

  1. φ meson production in Au + Au and p + p collisions at √s_NN = 200 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, J.; Adler, C.; Aggarwal, M.M.

    2004-06-01

    We report the STAR measurement of φ meson production in Au + Au and p + p collisions at √s_NN = 200 GeV. Using the event mixing technique, the φ spectra and yields are obtained at midrapidity for five centrality bins in Au+Au collisions and for non-singly-diffractive p+p collisions. It is found that the φ transverse momentum distributions from Au+Au collisions are better fitted with a single-exponential while the p+p spectrum is better described by a double-exponential distribution. The measured nuclear modification factors indicate that φ production in central Au+Au collisions is suppressed relative to peripheral collisions when scaled by the number of binary collisions. The systematics versus centrality and the constant φ/K⁻ ratio versus beam species, centrality, and collision energy rule out kaon coalescence as the dominant mechanism for φ production.

  2. Magnetic pattern at supergranulation scale: the void size distribution

    NASA Astrophysics Data System (ADS)

    Berrilli, F.; Scardigli, S.; Del Moro, D.

    2014-08-01

    The large-scale magnetic pattern observed in the photosphere of the quiet Sun is dominated by the magnetic network. This network, created by photospheric magnetic fields swept into convective downflows, delineates the boundaries of large-scale cells of overturning plasma and exhibits "voids" in magnetic organization. These voids include internetwork fields, which are mixed-polarity sparse magnetic fields that populate the inner part of network cells. To single out voids and to quantify their intrinsic pattern we applied a fast circle-packing-based algorithm to 511 SOHO/MDI high-resolution magnetograms acquired during the unusually long solar activity minimum between cycles 23 and 24. The computed void distribution function shows a quasi-exponential decay behavior in the range 10-60 Mm. The lack of distinct flow scales in this range corroborates the hypothesis of multi-scale motion flows at the solar surface. In addition to the quasi-exponential decay, we have found that the voids depart from a simple exponential decay at about 35 Mm.

  3. Science and Facebook: The same popularity law!

    PubMed

    Néda, Zoltán; Varga, Levente; Biró, Tamás S

    2017-01-01

    The distribution of scientific citations for publications selected with different rules (author, topic, institution, country, journal, etc…) collapse on a single curve if one plots the citations relative to their mean value. We find that the distribution of "shares" for the Facebook posts rescale in the same manner to the very same curve with scientific citations. This finding suggests that citations are subjected to the same growth mechanism with Facebook popularity measures, being influenced by a statistically similar social environment and selection mechanism. In a simple master-equation approach the exponential growth of the number of publications and a preferential selection mechanism leads to a Tsallis-Pareto distribution offering an excellent description for the observed statistics. Based on our model and on the data derived from PubMed we predict that according to the present trend the average citations per scientific publications exponentially relaxes to about 4.

  4. Science and Facebook: The same popularity law!

    PubMed Central

    Varga, Levente; Biró, Tamás S.

    2017-01-01

    The distribution of scientific citations for publications selected with different rules (author, topic, institution, country, journal, etc…) collapse on a single curve if one plots the citations relative to their mean value. We find that the distribution of “shares” for the Facebook posts rescale in the same manner to the very same curve with scientific citations. This finding suggests that citations are subjected to the same growth mechanism with Facebook popularity measures, being influenced by a statistically similar social environment and selection mechanism. In a simple master-equation approach the exponential growth of the number of publications and a preferential selection mechanism leads to a Tsallis-Pareto distribution offering an excellent description for the observed statistics. Based on our model and on the data derived from PubMed we predict that according to the present trend the average citations per scientific publications exponentially relaxes to about 4. PMID:28678796

  5. Markov Analysis of Sleep Dynamics

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Lee, J.-S.; Robinson, P. A.; Jeong, D.-U.

    2009-05-01

    A new approach, based on a Markov transition matrix, is proposed to explain frequent sleep and wake transitions during sleep. The matrix is determined by analyzing hypnograms of 113 obstructive sleep apnea patients. Our approach shows that the statistics of sleep can be constructed via a single Markov process and that durations of all states have modified exponential distributions, in contrast to recent reports of a scale-free form for the wake stage and an exponential form for the sleep stage. Hypnograms of the same subjects, but treated with Continuous Positive Airway Pressure, are analyzed and compared quantitatively with the pretreatment ones, suggesting potential clinical applications.
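
    The core bookkeeping implied above is simple to reproduce. Below is a sketch with a hypothetical two-state (wake/sleep) epoch sequence rather than the clinical hypnograms: estimate the row-stochastic transition matrix from the sequence; for a first-order Markov chain the dwell time in each state is then geometric (the discrete analogue of an exponential), with mean 1/(1 - P[i, i]) epochs.

      import numpy as np

      def transition_matrix(hypnogram, n_states):
          """Estimate a row-stochastic Markov transition matrix from an epoch sequence."""
          counts = np.zeros((n_states, n_states))
          for a, b in zip(hypnogram[:-1], hypnogram[1:]):
              counts[a, b] += 1
          return counts / counts.sum(axis=1, keepdims=True)

      # Hypothetical 30-s epochs: 0 = wake, 1 = sleep
      rng = np.random.default_rng(3)
      true_P = np.array([[0.80, 0.20], [0.05, 0.95]])
      states = [0]
      for _ in range(5000):
          states.append(rng.choice(2, p=true_P[states[-1]]))

      P = transition_matrix(states, 2)
      print(P)                                        # close to true_P
      print("mean wake bout (epochs): ", 1 / (1 - P[0, 0]))
      print("mean sleep bout (epochs):", 1 / (1 - P[1, 1]))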

  6. Turbulent particle transport in streams: can exponential settling be reconciled with fluid mechanics?

    PubMed

    McNair, James N; Newbold, J Denis

    2012-05-07

    Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    Earthquake recurrence interval is one of the important ingredients towards probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are quite established probability models in this recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for some alternative sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull distribution. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To contemplate the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data events more closely compared to the conventional models and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.

  8. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    PubMed

    Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T

    2010-03-10

    Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of a non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.

  9. A Simulation of the ECSS Help Desk with the Erlang a Model

    DTIC Science & Technology

    2011-03-01

    a popular distribution is the exponential distribution as shown in Figure 3. Figure 3: Exponential Distribution (Bourke, 2001) Exponential...System Sciences, Vol 8, 235B. Bourke, P. (2001, January). Miscellaneous Functions. Retrieved January 22, 2011, from http://local.wasp.uwa.edu.au

  10. Anomalous NMR Relaxation in Cartilage Matrix Components and Native Cartilage: Fractional-Order Models

    PubMed Central

    Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.

    2011-01-01

    We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095

  11. Anomalous NMR relaxation in cartilage matrix components and native cartilage: Fractional-order models

    NASA Astrophysics Data System (ADS)

    Magin, Richard L.; Li, Weiguo; Pilar Velasco, M.; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.

    2011-06-01

    We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for micro-structural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues.

  12. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
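
    A sketch of the mimicry test described above, with hypothetical parameters rather than the study's fitted values: draw bout durations from a two-component exponential mixture, fit a power law (Pareto) to the tail by maximum likelihood, and measure how well the "incorrect" model survives a Kolmogorov-Smirnov check.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      n = 20000
      # Multi-exponential bout durations: mixture of two time constants (hypothetical)
      durations = np.where(rng.random(n) < 0.7,
                           rng.exponential(0.5, n),
                           rng.exponential(8.0, n))

      # Power-law fit above a lower cutoff (Hill / maximum-likelihood exponent)
      xmin = 1.0
      tail = durations[durations >= xmin]
      alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))

      # Goodness of fit of the "incorrect" power-law model on the tail
      D, p = stats.kstest(tail, stats.pareto(alpha - 1.0, scale=xmin).cdf)
      print(f"alpha ~ {alpha:.2f}, KS distance = {D:.3f}, p = {p:.3g}")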

  13. On the gap between an empirical distribution and an exponential distribution of waiting times for price changes in a financial market

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya

    2007-03-01

    We analyze waiting times for price changes in a foreign currency exchange rate. Recent empirical studies of high-frequency financial data support that trades in financial markets do not follow a Poisson process and the waiting times between trades are not exponentially distributed. Here we show that our data is well approximated by a Weibull distribution rather than an exponential distribution in the non-asymptotic regime. Moreover, we quantitatively evaluate how much an empirical data is far from an exponential distribution using a Weibull fit. Finally, we discuss a transition between a Weibull-law and a power-law in the long time asymptotic regime.

  14. Properties of single NMDA receptor channels in human dentate gyrus granule cells

    PubMed Central

    Lieberman, David N; Mody, Istvan

    1999-01-01

    Cell-attached single-channel recordings of NMDA channels were carried out in human dentate gyrus granule cells acutely dissociated from slices prepared from hippocampi surgically removed for the treatment of temporal lobe epilepsy (TLE). The channels were activated by L-aspartate (250–500 nM) in the presence of saturating glycine (8 μM). The main conductance was 51 ± 3 pS. In ten of thirty granule cells, clear subconductance states were observed with a mean conductance of 42 ± 3 pS, representing 8 ± 2% of the total openings. The mean open times varied from cell to cell, possibly owing to differences in the epileptogenicity of the tissue of origin. The mean open time was 2.70 ± 0.95 ms (range, 1.24–4.78 ms). In 87% of the cells, three exponential components were required to fit the apparent open time distributions. In the remaining neurons, as in control rat granule cells, two exponentials were sufficient. Shut time distributions were fitted by five exponential components. The average numbers of openings in bursts (1.74 ± 0.09) and clusters (3.06 ± 0.26) were similar to values obtained in rodents. The mean burst (6.66 ± 0.9 ms), cluster (20.1 ± 3.3 ms) and supercluster lengths (116.7 ± 17.5 ms) were longer than those in control rat granule cells, but approached the values previously reported for TLE (kindled) rats. As in rat NMDA channels, adjacent open and shut intervals appeared to be inversely related to each other, but it was only the relative areas of the three open time constants that changed with adjacent shut time intervals. The long openings of human TLE NMDA channels resembled those produced by calcineurin inhibitors in control rat granule cells. Yet the calcineurin inhibitor FK-506 (500 nM) did not prolong the openings of human channels, consistent with a decreased calcineurin activity in human TLE. Many properties of the human NMDA channels resemble those recorded in rat hippocampal neurons. Both have similar slope conductances, five exponential shut time distributions, complex groupings of openings, and a comparable number of openings per grouping. Other properties of human TLE NMDA channels correspond to those observed in kindling; the openings are considerably long, requiring an additional exponential component to fit their distributions, and inhibition of calcineurin is without effect in prolonging the openings. PMID:10373689

  15. The social architecture of capitalism

    NASA Astrophysics Data System (ADS)

    Wright, Ian

    2005-02-01

    A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.

  16. Adaptive kanban control mechanism for a single-stage hybrid system

    NASA Astrophysics Data System (ADS)

    Korugan, Aybek; Gupta, Surendra M.

    2002-02-01

    In this paper, we consider a hybrid manufacturing system with two discrete production lines. Here the output of either production line can satisfy the demand for the same type of product without any penalties. The interarrival times for demand occurrences and service completions are exponentially distributed i.i.d. variables. In order to control this type of manufacturing system we suggest a single stage pull type control mechanism with adaptive kanbans and state independent routing of the production information.

  17. The generalized truncated exponential distribution as a model for earthquake magnitudes

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-04-01

    The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite and the upper tail of the exponential distribution does not fit well observations. Hence the truncated exponential distribution (TED) is frequently applied for the modelling of the magnitude distributions in the seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of the statistical inference are also discussed and an example of empirical data is presented in the current contribution.

  18. Analysis of domestic refrigerator temperatures and home storage time distributions for shelf-life studies and food safety risk assessment.

    PubMed

    Roccato, Anna; Uyttendaele, Mieke; Membré, Jeanne-Marie

    2017-06-01

    In the framework of food safety, when mimicking the consumer phase, the storage time and temperature used are mainly considered as single point estimates instead of probability distributions. This single-point approach does not take into account the variability within a population and could lead to an overestimation of the parameters. Therefore, the aim of this study was to analyse data on domestic refrigerator temperatures and storage times of chilled food in European countries in order to draw general rules which could be used either in shelf-life testing or risk assessment. In relation to domestic refrigerator temperatures, 15 studies provided pertinent data. Twelve studies presented normal distributions, according to the authors or from the data fitted into distributions. Analysis of temperature distributions revealed that the countries were separated into two groups: northern European countries and southern European countries. The overall variability of European domestic refrigerators is described by a normal distribution: N (7.0, 2.7) °C for southern countries, and N (6.1, 2.8) °C for the northern countries. Concerning storage times, seven papers were pertinent. Analysis indicated that the storage time was likely to end in the first days or weeks (depending on the product use-by date) after purchase. Data fitting showed the exponential distribution was the most appropriate distribution to describe the time that food spent at the consumer's place. The storage time was described by an exponential distribution corresponding to the use-by date period divided by 4. In conclusion, knowing that collecting data is time- and money-consuming, in the absence of data, and at least for the European market and for refrigerated products, building a domestic refrigerator temperature distribution using a Normal law and a time-to-consumption distribution using an Exponential law would be appropriate. Copyright © 2017 Elsevier Ltd. All rights reserved.
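
    Following the rule of thumb proposed above, a consumer phase can be sampled as distributions rather than point estimates. A minimal sketch for a southern European market, assuming a hypothetical 10-day use-by period:

      import numpy as np

      rng = np.random.default_rng(7)
      n = 10_000
      use_by_days = 10.0                                     # hypothetical use-by period of the product

      temps_C = rng.normal(7.0, 2.7, n)                      # fridge temperatures, N(7.0, 2.7) °C (southern countries)
      storage_days = rng.exponential(use_by_days / 4.0, n)   # storage time, exponential with mean = use-by period / 4

      print(f"share of fridges above 10 °C: {np.mean(temps_C > 10):.1%}")
      print(f"median storage time: {np.median(storage_days):.1f} days")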

  19. Count distribution for mixture of two exponentials as renewal process duration with applications

    NASA Astrophysics Data System (ADS)

    Low, Yeh Ching; Ong, Seng Huat

    2016-06-01

    A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and the renewal function (expected number of renewals) are examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.

  20. Stability of Tsallis entropy and instabilities of Rényi and normalized Tsallis entropies: a basis for q-exponential distributions.

    PubMed

    Abe, Sumiyoshi

    2002-10-01

    The q-exponential distributions, which are generalizations of the Zipf-Mandelbrot power-law distribution, are frequently encountered in complex systems at their stationary states. From the viewpoint of the principle of maximum entropy, they can apparently be derived from three different generalized entropies: the Rényi entropy, the Tsallis entropy, and the normalized Tsallis entropy. Accordingly, mere fitting of observed data by the q-exponential distributions does not lead to identification of the correct physical entropy. Here, the stabilities of these entropies, i.e., their behaviors under arbitrarily small deformations of a distribution, are examined. It is shown that, among the three, the Tsallis entropy is stable and can provide an entropic basis for the q-exponential distributions, whereas the others are unstable and cannot represent any experimentally observable quantities.

  1. A Test of the Exponential Distribution for Stand Structure Definition in Uneven-aged Loblolly-Shortleaf Pine Stands

    Treesearch

    Paul A. Murphy; Robert M. Farrar

    1981-01-01

    In this study, 588 before-cut and 381 after-cut diameter distributions of uneven-aged loblolly-shortleaf pine stands were fitted to two different forms of the exponential probability density function. The left-truncated and doubly truncated forms of the exponential were used.

  2. Tl+-induced μs gating of current indicates instability of the MaxiK selectivity filter as caused by ion/pore interaction.

    PubMed

    Schroeder, Indra; Hansen, Ulf-Peter

    2008-04-01

    Patch clamp experiments on single MaxiK channels expressed in HEK293 cells were performed at high temporal resolution (50-kHz filter) in asymmetrical solutions containing 0, 25, 50, or 150 mM Tl+ on the luminal or cytosolic side with [K+] + [Tl+] = 150 mM and 150 mM K+ on the other side. Outward current in the presence of cytosolic Tl+ did not show fast gating behavior that was significantly different from that in the absence of Tl+. With luminal Tl+ and at membrane potentials more negative than -40 mV, the single-channel current showed a negative slope resistance concomitantly with a flickery block, resulting in an artificially reduced apparent single-channel current I(app). The analysis of the amplitude histograms by beta distributions enabled the estimation of the true single-channel current and the determination of the rate constants of a simple two-state O-C Markov model for the gating in the bursts. The voltage dependence of the gating ratio R = I(true)/I(app) = (k(CO) + k(OC))/k(CO) could be described by exponential functions with different characteristic voltages above or below 50 mM Tl+. The true single-channel current I(true) decreased with Tl+ concentrations up to 50 mM and stayed constant thereafter. Different models were considered. The most likely ones related the exponential increase of the gating ratio to ion depletion at the luminal side of the selectivity filter, whereas the influence of [Tl+] on the characteristic voltage of these exponential functions and on the value of I(true) was determined by [Tl+] at the inner side of the selectivity filter or in the cavity.

  3. A fuzzy adaptive network approach to parameter estimation in cases where independent variables come from an exponential distribution

    NASA Astrophysics Data System (ADS)

    Dalkilic, Turkan Erbay; Apaydin, Aysen

    2009-11-01

    In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions derived from different clusters. When faced with the issue of estimating a regression model for fuzzy inputs that have been derived from different distributions, this regression model has been termed the 'switching regression model'; here l_i indicates the class number of each independent variable and p is the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by gathering the obtained models. There are methods that suggest the class numbers of independent variables heuristically. Alternatively, a suggested validity criterion for fuzzy clustering is used here to define the optimal class number of independent variables. For the case in which the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values after obtaining an optimal membership function suitable for the exponential distribution.

  4. Multirate parallel distributed compensation of a cluster in wireless sensor and actor networks

    NASA Astrophysics Data System (ADS)

    Yang, Chun-xi; Huang, Ling-yun; Zhang, Hao; Hua, Wang

    2016-01-01

    The stabilisation problem for one of the clusters with bounded multiple random time delays and packet dropouts in wireless sensor and actor networks is investigated in this paper. A new multirate switching model is constructed to describe the features of this single-input multiple-output linear system. Because controller design under multiple constraints is difficult in the multirate switching model, this model is converted to a Takagi-Sugeno fuzzy model. By designing a multirate parallel distributed compensation, a sufficient condition is established to ensure that this closed-loop fuzzy control system is globally exponentially stable. The multirate parallel distributed compensation gains can be obtained by solving an auxiliary convex optimisation problem. Finally, two numerical examples are given to show that, compared with solving for a switching controller, the multirate parallel distributed compensation can be obtained easily. Furthermore, it has stronger robust stability than an arbitrary switching controller and a single-rate parallel distributed compensation under the same conditions.

  5. Kinetic market models with single commodity having price fluctuations

    NASA Astrophysics Data System (ADS)

    Chatterjee, A.; Chakrabarti, B. K.

    2006-12-01

    We study here numerically the behavior of an ideal-gas-like model of markets having only one non-consumable commodity. We investigate the behavior of the steady-state distributions of money, commodity and total wealth, as the dynamics of trading or exchange of money and commodity proceeds, with local (in time) fluctuations in the price of the commodity. These distributions are studied in markets with agents having uniform and random saving factors. The self-organizing features in the money distribution are similar to the cases without any commodity (or with consumable commodities), while the commodity distribution shows an exponential decay. The wealth distribution shows interesting behavior: a gamma-like distribution for uniform saving propensity, and the same power-law tail as the money distribution for a market with agents having random saving propensity.
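
    The money-exchange part of such kinetic market models is easy to reproduce. The sketch below is a generic two-agent exchange rule with a uniform saving factor (illustrative parameters, not the authors' exact simulation); with zero saving the stationary money distribution decays exponentially, as in the Gibbs case mentioned above.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N, steps = 1000, 200_000
    money = np.ones(N)          # everyone starts with one unit of money
    lam = 0.0                   # saving propensity; 0 gives an exponential (Gibbs) steady state

    for _ in range(steps):
        i, j = rng.integers(N, size=2)
        if i == j:
            continue
        pot = (1 - lam) * (money[i] + money[j])   # money put into the exchange
        eps = rng.random()
        money[i] = lam * money[i] + eps * pot
        money[j] = lam * money[j] + (1 - eps) * pot

    # For lam = 0 the histogram of `money` should decay exponentially with mean 1.
    hist, edges = np.histogram(money, bins=20, range=(0, 5), density=True)
    print(np.c_[edges[:-1], hist])
    ```

    Setting lam to a positive value such as 0.5 shifts the histogram toward the gamma-like shape reported for uniform saving propensity.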

  6. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing the lengths of life in many cases and has a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is a special case of the Weibull family of distributions. In this paper, our aim is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and to present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The model describes the likelihood function, followed by the posterior distribution and the point, interval, hazard-function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
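
    For independent exponential competing risks, the likelihood factorises over cause-specific rates, so closed-form estimates are available. The sketch below is a minimal illustration under assumed true rates and a flat prior; the prior choice and all parameter values are assumptions for the example, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Simulate independent exponential competing risks with assumed true rates
    true_rates = np.array([0.02, 0.05])            # cause 1, cause 2 (hypothetical)
    n = 500
    times_by_cause = rng.exponential(1.0 / true_rates, size=(n, 2))
    t_obs = times_by_cause.min(axis=1)             # observed failure time
    cause = times_by_cause.argmin(axis=1)          # index of the failing cause

    T = t_obs.sum()                                # total time at risk
    d = np.bincount(cause, minlength=2)            # failures per cause

    # Maximum-likelihood estimates and flat-prior posterior means
    mle = d / T
    post_mean = (d + 1) / T                        # mean of Gamma(d_j + 1, rate = T)
    print("MLE rates:", mle, " posterior means:", post_mean)

    # Net probability of failing from cause j by time t if only that risk were present
    t = 20.0
    print("net P(fail by t):", 1 - np.exp(-mle * t))
    ```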

  7. Does the Australian desert ant Melophorus bagoti approximate a Lévy search by an intrinsic bi-modal walk?

    PubMed

    Reynolds, Andy M; Schultheiss, Patrick; Cheng, Ken

    2014-01-07

    We suggest that the Australian desert ant Melophorus bagoti approximates a Lévy search pattern by using an intrinsic bi-exponential walk and does so when a Lévy search pattern is advantageous. When attempting to locate its nest, M. bagoti adopt a stereotypical search pattern. These searches begin at the location where the ant expects to find the nest, and comprise loops that start and end at this location, and are directed in different azimuthal directions. Loop lengths are exponentially distributed when searches are in visually familiar surroundings and are well described by a mixture of two exponentials when searches are in unfamiliar landscapes. The latter approximates a power-law distribution, the hallmark of a Lévy search. With the aid of a simple analytically tractable theory, we show that an exponential loop-length distribution is advantageous when the distance to the nest can be estimated with some certainty and that a bi-exponential distribution is advantageous when there is considerable uncertainty regarding the nest location. The best bi-exponential search patterns are shown to be those that come closest to approximating advantageous Lévy looping searches. The bi-exponential search patterns of M. bagoti are found to approximate advantageous Lévy search patterns. Copyright © 2013. Published by Elsevier Ltd.

  8. Preparation of an exponentially rising optical pulse for efficient excitation of single atoms in free space.

    PubMed

    Dao, Hoang Lan; Aljunid, Syed Abdullah; Maslennikov, Gleb; Kurtsiefer, Christian

    2012-08-01

    We report on a simple method to prepare optical pulses with exponentially rising envelope on the time scale of a few ns. The scheme is based on the exponential transfer function of a fast transistor, which generates an exponentially rising envelope that is transferred first on a radio frequency carrier, and then on a coherent cw laser beam with an electro-optical phase modulator. The temporally shaped sideband is then extracted with an optical resonator and can be used to efficiently excite a single (87)Rb atom.

  9. Traction forces during collective cell motion.

    PubMed

    Gov, N S

    2009-08-01

    Collective motion of cell cultures is a process of great interest, as it occurs during morphogenesis, wound healing, and tumor metastasis. During these processes cell cultures move due to the traction forces induced by the individual cells on the surrounding matrix. A recent study [Trepat, et al. (2009). Nat. Phys. 5, 426-430] measured for the first time the traction forces driving collective cell migration and found that they arise throughout the cell culture. The leading 5-10 rows of cells do play a major role in directing the motion of the rest of the culture by having a distinct outwards traction. Fluctuations in the traction forces are an order of magnitude larger than the resultant directional traction at the culture edge and, furthermore, have an exponential distribution. Such exponential distributions are observed for the sizes of adhesion domains within cells, the traction forces produced by single cells, and even in nonbiological nonequilibrium systems, such as sheared granular materials. We discuss these observations and their implications for our understanding of cellular flows within a continuous culture.

  10. A computer program for thermal radiation from gaseous rocket exhaust plumes (GASRAD)

    NASA Technical Reports Server (NTRS)

    Reardon, J. E.; Lee, Y. C.

    1979-01-01

    A computer code is presented for predicting incident thermal radiation from defined plume gas properties in either axisymmetric or cylindrical coordinate systems. The radiation model is a statistical band model for an exponential line strength distribution with Lorentz/Doppler line shapes for 5 gaseous species (H2O, CO2, CO, HCl and HF) and an approximate (non-scattering) treatment of carbon particles. The Curtis-Godson approximation is used for inhomogeneous gases, but a subroutine is available for using Young's intuitive derivative method for H2O with a Lorentz line shape and an exponentially-tailed-inverse line strength distribution. The geometry model provides integration over a hemisphere with up to 6 individually oriented identical axisymmetric plumes or a single 3-D plume. Shading surfaces may be used in any of 7 shapes, and a conical limit may be defined for the plume to set individual line-of-sight limits. Intermediate coordinate systems may be specified to simplify input of plumes and shading surfaces.

  11. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.

    PubMed

    Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C

    2017-06-01

    The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution are illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), Akaike information criterion (AIC), Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistics show that the EETE distribution provides a more reasonable fit than the other competing distributions.

  12. Essays on the statistical mechanics of the labor market and implications for the distribution of earned income

    NASA Astrophysics Data System (ADS)

    Schneider, Markus P. A.

    This dissertation contributes to two areas in economics: the understanding of the distribution of earned income and Bayesian analysis of distributional data. Recently, physicists claimed that the distribution of earned income is exponential (see Yakovenko, 2009). The first chapter explores the perspective that the economy is a statistical mechanical system, and the implication for labor market outcomes is considered critically. The robustness of the empirical results that led to the physicists' claims, the significance of the exponential distribution in statistical mechanics, and the case for a conservation law in economics are discussed. The conclusion reached is that the physicists' conception of the economy is too narrow even within their chosen framework, but that their overall approach is insightful. The dual labor market theory of segmented labor markets is invoked to understand why the observed distribution may be a mixture of distributional components, corresponding to the different generating mechanisms described in Reich et al. (1973). The application of informational entropy in chapter II connects this work to Bayesian analysis and maximum entropy econometrics. The analysis follows E. T. Jaynes's treatment of Wolf's dice data, but is applied to the distribution of earned income based on CPS data. The results are calibrated to account for rounded survey responses using a simple simulation, and answer the graphical analyses by the physicists. The results indicate that neither the income distribution of all respondents nor that of the subpopulation used by the physicists appears to be exponential. The empirics do support the claim that a mixture with exponential and log-normal distributional components fits the data. In the final chapter, a log-linear model is used to fit the exponential to the earned income distribution. Separating the CPS data by gender and marital status reveals that the exponential is only an appropriate model for a limited number of subpopulations, namely the never married and women. The estimated parameter for never-married men's incomes is significantly different from the parameter estimated for never-married women, implying that either the combined distribution is not exponential or that the individual distributions are not exponential. However, it substantiates the existence of a persistent gender income gap among the never-married. References: Reich, M., D. M. Gordon, and R. C. Edwards (1973). A Theory of Labor Market Segmentation. Quarterly Journal of Economics 63, 359-365. Yakovenko, V. M. (2009). Econophysics, Statistical Mechanics Approach to. In R. A. Meyers (Ed.), Encyclopedia of Complexity and System Science. Springer.

  13. Single-qubit unitary gates by graph scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumer, Benjamin A.; Underwood, Michael S.; Feder, David L.

    2011-12-15

    We consider the effects of plane-wave states scattering off finite graphs as an approach to implementing single-qubit unitary operations within the continuous-time quantum walk framework of universal quantum computation. Four semi-infinite tails are attached at arbitrary points of a given graph, representing the input and output registers of a single qubit. For a range of momentum eigenstates, we enumerate all of the graphs with up to n=9 vertices for which the scattering implements a single-qubit gate. As n increases, the number of new unitary operations increases exponentially, and for n>6 the majority correspond to rotations about axes distributed roughly uniformly across the Bloch sphere. Rotations by both rational and irrational multiples of π are found.

  14. Geometry of the q-exponential distribution with dependent competing risks and accelerated life testing

    NASA Astrophysics Data System (ADS)

    Zhang, Fode; Shi, Yimin; Wang, Ruibing

    2017-02-01

    In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Noting that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered a generalized Gumbel copula, is discussed to illustrate the structure of the dependent random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimations and the levels of association under different hybrid progressive censoring schemes (HPCSs).

  15. A Nonequilibrium Rate Formula for Collective Motions of Complex Molecular Systems

    NASA Astrophysics Data System (ADS)

    Yanao, Tomohiro; Koon, Wang Sang; Marsden, Jerrold E.

    2010-09-01

    We propose a compact reaction rate formula that accounts for a non-equilibrium distribution of residence times of complex molecules, based on a detailed study of the coarse-grained phase space of a reaction coordinate. We take the structural transition dynamics of a six-atom Morse cluster between two isomers as a prototype of multi-dimensional molecular reactions. Residence time distribution of one of the isomers shows an exponential decay, while that of the other isomer deviates largely from the exponential form and has multiple peaks. Our rate formula explains such equilibrium and non-equilibrium distributions of residence times in terms of the rates of diffusions of energy and the phase of the oscillations of the reaction coordinate. Rapid diffusions of energy and the phase generally give rise to the exponential decay of residence time distribution, while slow diffusions give rise to a non-exponential decay with multiple peaks. We finally make a conjecture about a general relationship between the rates of the diffusions and the symmetry of molecular mass distributions.

  16. Distribution of fixed beneficial mutations and the rate of adaptation in asexual populations

    PubMed Central

    Good, Benjamin H.; Rouzine, Igor M.; Balick, Daniel J.; Hallatschek, Oskar; Desai, Michael M.

    2012-01-01

    When large asexual populations adapt, competition between simultaneously segregating mutations slows the rate of adaptation and restricts the set of mutations that eventually fix. This phenomenon of interference arises from competition between mutations of different strengths as well as competition between mutations that arise on different fitness backgrounds. Previous work has explored each of these effects in isolation, but the way they combine to influence the dynamics of adaptation remains largely unknown. Here, we describe a theoretical model to treat both aspects of interference in large populations. We calculate the rate of adaptation and the distribution of fixed mutational effects accumulated by the population. We focus particular attention on the case when the effects of beneficial mutations are exponentially distributed, as well as on a more general class of exponential-like distributions. In both cases, we show that the rate of adaptation and the influence of genetic background on the fixation of new mutants is equivalent to an effective model with a single selection coefficient and rescaled mutation rate, and we explicitly calculate these effective parameters. We find that the effective selection coefficient exactly coincides with the most common fixed mutational effect. This equivalence leads to an intuitive picture of the relative importance of different types of interference effects, which can shift dramatically as a function of the population size, mutation rate, and the underlying distribution of fitness effects. PMID:22371564

  17. Use of Continuous Exponential Families to Link Forms via Anchor Tests. Research Report. ETS RR-11-11

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Yan, Duanli

    2011-01-01

    Continuous exponential families are applied to linking test forms via an internal anchor. This application combines work on continuous exponential families for single-group designs and work on continuous exponential families for equivalent-group designs. Results are compared to those for kernel and equipercentile equating in the case of chained…

  18. The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Brissette, Francois; Chen, Jie

    2013-04-01

    Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain due to its simplicity and good performance. However, various probability distributions have been reported to simulate precipitation amounts, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amounts. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify the performance in simulating the precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is then not nearly as obvious, the mixed exponential distribution appears nonetheless to be the best candidate for hydrological modeling. The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
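
    The mixed exponential distribution highlighted above is a two-component exponential mixture, and its parameters are typically obtained by an EM-type iteration. The following sketch fits such a mixture to synthetic wet-day amounts; the data and starting values are invented for illustration and are not from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic wet-day precipitation amounts (mm) from a two-component mixed exponential
    x = np.where(rng.random(5000) < 0.7,
                 rng.exponential(2.0, 5000), rng.exponential(12.0, 5000))

    # EM for f(x) = w/m1 * exp(-x/m1) + (1 - w)/m2 * exp(-x/m2)
    w, m1, m2 = 0.5, 1.0, 10.0                 # starting values (assumed)
    for _ in range(200):
        d1 = w / m1 * np.exp(-x / m1)
        d2 = (1 - w) / m2 * np.exp(-x / m2)
        r = d1 / (d1 + d2)                     # E-step: responsibility of component 1
        w = r.mean()                           # M-step: weight and component means
        m1 = np.sum(r * x) / np.sum(r)
        m2 = np.sum((1 - r) * x) / np.sum(1 - r)

    print(f"w = {w:.2f}, mean1 = {m1:.1f} mm, mean2 = {m2:.1f} mm")
    ```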

  19. Persistence of exponential bed thickness distributions in the stratigraphic record: Experiments and theory

    NASA Astrophysics Data System (ADS)

    Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.

    2010-12-01

    Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by quantifying the probability distributions of processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale-parameter of the resulting exponential distribution of bed thicknesses.

  20. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    NASA Astrophysics Data System (ADS)

    Baidillah, Marlin R.; Takei, Masahiro

    2017-06-01

    A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions has been developed. The exponential model normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived using an exponential fitting curve based on simulation, and a scaling function is added to adjust for the experimental system conditions. The exponential model normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e., the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.

  1. Stochastic modelling of a single ion channel: an alternating renewal approach with application to limited time resolution.

    PubMed

    Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W

    1988-04-22

    Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.

  2. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.

  3. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513

  4. Stretched exponential distributions in nature and economy: ``fat tails'' with characteristic scales

    NASA Astrophysics Data System (ADS)

    Laherrère, J.; Sornette, D.

    1998-04-01

    To account quantitatively for many reported "natural" fat tail distributions in Nature and Economy, we propose the stretched exponential family as a complement to the often used power law distributions. It has many advantages, among which to be economical with only two adjustable parameters with clear physical interpretation. Furthermore, it derives from a simple and generic mechanism in terms of multiplicative processes. We show that stretched exponentials describe very well the distributions of radio and light emissions from galaxies, of US GOM OCS oilfield reserve sizes, of World, US and French agglomeration sizes, of country population sizes, of daily Forex US-Mark and Franc-Mark price variations, of Vostok (near the south pole) temperature variations over the last 400 000 years, of the Raup-Sepkoski's kill curve and of citations of the most cited physicists in the world. We also discuss its potential for the distribution of earthquake sizes and fault displacements. We suggest physical interpretations of the parameters and provide a short toolkit of the statistical properties of the stretched exponentials. We also provide a comparison with other distributions, such as the shifted linear fractal, the log-normal and the recently introduced parabolic fractal distributions.
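
    A quick way to check whether data follow the stretched exponential law discussed above, P(X > x) = exp(-(x/x0)^c), is to linearize the empirical survival function: ln(-ln P) is then linear in ln x with slope c. The sketch below applies this to a synthetic sample; the parameter values are illustrative, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic sample from a stretched exponential (Weibull) law with exponent c < 1
    c_true, x0_true = 0.6, 3.0
    x = np.sort(x0_true * rng.weibull(c_true, 20_000))

    # Empirical survival function, then linearize: ln(-ln P(X > x)) = c ln x - c ln x0
    surv = 1.0 - (np.arange(len(x)) + 0.5) / len(x)
    keep = (x > 0) & (surv > 1e-4)
    slope, intercept = np.polyfit(np.log(x[keep]), np.log(-np.log(surv[keep])), 1)
    c_hat, x0_hat = slope, np.exp(-intercept / slope)
    print(f"estimated c = {c_hat:.2f}, x0 = {x0_hat:.2f}")
    ```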

  5. Stochastic modelling of intermittent fluctuations in the scrape-off layer: Correlations, distributions, level crossings, and moment estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, O. E., E-mail: odd.erik.garcia@uit.no; Kube, R.; Theodorsen, A.

    A stochastic model is presented for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas. The fluctuations in the plasma density are modeled by a super-position of uncorrelated pulses with fixed shape and duration, describing radial motion of blob-like structures. In the case of an exponential pulse shape and exponentially distributed pulse amplitudes, predictions are given for the lowest order moments, probability density function, auto-correlation function, level crossings, and average times for periods spent above and below a given threshold level. Also, the mean squared errors on estimators of sample mean and variance for realizations of the process by finite time series are obtained. These results are discussed in the context of single-point measurements of fluctuations in the scrape-off layer, broad density profiles, and implications for plasma–wall interactions due to the transient transport events in fusion grade plasmas. The results may also have wide applications for modelling fluctuations in other magnetized plasmas such as basic laboratory experiments and ionospheric irregularities.

  6. Obstructive sleep apnea alters sleep stage transition dynamics.

    PubMed

    Bianchi, Matt T; Cash, Sydney S; Mietus, Joseph; Peng, Chung-Kang; Thomas, Robert

    2010-06-28

    Enhanced characterization of sleep architecture, compared with routine polysomnographic metrics such as stage percentages and sleep efficiency, may improve the predictive phenotyping of fragmented sleep. One approach involves using stage transition analysis to characterize sleep continuity. We analyzed hypnograms from Sleep Heart Health Study (SHHS) participants using the following stage designations: wake after sleep onset (WASO), non-rapid eye movement (NREM) sleep, and REM sleep. We show that individual patient hypnograms contain an insufficient number of bouts to adequately describe the transition kinetics, necessitating pooling of data. We compared a control group of individuals free of medications, obstructive sleep apnea (OSA), medical co-morbidities, or sleepiness (n = 374) with mild (n = 496) or severe OSA (n = 338) groups. WASO, REM sleep, and NREM sleep bout durations exhibited multi-exponential temporal dynamics. The presence of OSA accelerated the "decay" rate of NREM and REM sleep bouts, resulting in instability manifesting as shorter bouts and an increased number of stage transitions. For WASO bouts, previously attributed to a power-law process, a multi-exponential decay described the data well. Simulations demonstrated that a multi-exponential process can mimic a power-law distribution. OSA alters sleep architecture dynamics by decreasing the temporal stability of NREM and REM sleep bouts. Multi-exponential fitting is superior to routine mono-exponential fitting, and may thus provide improved predictive metrics of sleep continuity. However, because a single night of sleep contains insufficient transitions to characterize these dynamics, extended monitoring of sleep, probably at home, would be necessary for individualized clinical application.

  7. A demographic study of the exponential distribution applied to uneven-aged forests

    Treesearch

    Jeffrey H. Gove

    2016-01-01

    A demographic approach based on a size-structured version of the McKendrick-Von Foerster equation is used to demonstrate a theoretical link between the population size distribution and the underlying vital rates (recruitment, mortality and diameter growth) for the population of individuals whose diameter distribution is negative exponential. This model supports the...

  8. Exponentiated power Lindley distribution.

    PubMed

    Ashour, Samir K; Eltehiwy, Mahmoud A

    2015-11-01

    A new generalization of the Lindley distribution was recently proposed by Ghitany et al. [1], called the power Lindley distribution. Another generalization of the Lindley distribution was introduced by Nadarajah et al. [2], named the generalized Lindley distribution. This paper proposes a further generalization of the Lindley distribution that encompasses both. We refer to this new generalization as the exponentiated power Lindley distribution. The new distribution is important since it contains as special sub-models some widely known distributions in addition to the above two models, such as the Lindley distribution, among many others. It also provides more flexibility to analyze complex real data sets. We study some statistical properties of the new distribution. We discuss maximum likelihood estimation of the distribution parameters. Least squares estimation is also used to evaluate the parameters. Three algorithms are proposed for generating random data from the proposed distribution. An application of the model to a real data set is analyzed using the new distribution, which shows that the exponentiated power Lindley distribution can be used quite effectively in analyzing real lifetime data.

  9. Mean Excess Function as a method of identifying sub-exponential tails: Application to extreme daily rainfall

    NASA Astrophysics Data System (ADS)

    Nerantzaki, Sofia; Papalexiou, Simon Michael

    2017-04-01

    Precisely identifying the distribution tail of a geophysical variable is difficult, or even impossible. First, the tail is the part of the distribution for which the least empirical information is available; second, a universally accepted definition of the tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can be generally categorized into two major families, i.e., sub-exponential (heavy-tailed) and hyper-exponential (light-tailed) distributions. This study aims to update the Mean Excess Function (MEF), providing a useful tool to assess which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and results in a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. The validation of the method using Monte Carlo techniques on four theoretical distributions covering the major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records from all over the world with sample sizes over 100 years were examined, revealing that heavy-tailed distributions can describe rainfall extremes more accurately.
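
    The empirical MEF at a threshold u is simply the mean exceedance of the sample values above u. The rough sketch below uses synthetic data and an arbitrary threshold grid (not the study's settings): for an exponential sample the MEF is roughly flat, while for a sub-exponential (here Pareto-type) sample it grows with the threshold.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def mean_excess(sample, thresholds):
        """Empirical mean excess e(u) = E[X - u | X > u]."""
        return np.array([sample[sample > u].mean() - u if np.any(sample > u) else np.nan
                         for u in thresholds])

    n = 50_000
    exp_sample = rng.exponential(10.0, n)                  # light (exponential) tail
    par_sample = 10.0 * (rng.pareto(3.0, n) + 1.0)         # heavy (Pareto-type) tail

    u = np.linspace(0, 60, 7)
    print("thresholds u   :", u)
    print("exponential MEF:", np.round(mean_excess(exp_sample, u), 1))   # ~ flat near 10
    print("Pareto MEF     :", np.round(mean_excess(par_sample, u), 1))   # increases with u
    ```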

  10. The Homotopic Probability Distribution and the Partition Function for the Entangled System Around a Ribbon Segment Chain

    NASA Astrophysics Data System (ADS)

    Qian, Shang-Wu; Gu, Zhi-Yu

    2001-12-01

    Using Feynman's path integral with topological constraints arising from the presence of one singular line, we find the homotopic probability distribution P_L^n for the winding number n and the partition function P_L of the entangled system around a ribbon segment chain. We find that when the width 2a of the ribbon segment chain increases, the partition function decreases exponentially, whereas the free energy increases by an amount proportional to the square of the width. When the width tends to zero we obtain the same results as those for a single chain with one singular point.

  11. New results on global exponential dissipativity analysis of memristive inertial neural networks with distributed time-varying delays.

    PubMed

    Zhang, Guodong; Zeng, Zhigang; Hu, Junhao

    2018-01-01

    This paper is concerned with the global exponential dissipativity of memristive inertial neural networks with discrete and distributed time-varying delays. By constructing appropriate Lyapunov-Krasovskii functionals, some new sufficient conditions ensuring global exponential dissipativity of memristive inertial neural networks are derived. Moreover, the globally exponential attractive sets and positive invariant sets are also presented here. In addition, the new proposed results here complement and extend the earlier publications on conventional or memristive neural network dynamical systems. Finally, numerical simulations are given to illustrate the effectiveness of obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Dynamic heterogeneity and conditional statistics of non-Gaussian temperature fluctuations in turbulent thermal convection

    NASA Astrophysics Data System (ADS)

    He, Xiaozhou; Wang, Yin; Tong, Penger

    2018-05-01

    Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and it is not understood why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that, because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ε. The conditional PDF G(δT|ε) of δT under a constant ε is found to be of Gaussian form, and its variance σ_T² for different values of ε follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism for the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
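
    The mechanism described above is easy to demonstrate numerically: drawing Gaussian fluctuations whose variance is itself exponentially distributed produces a marginal PDF with straight exponential (Laplace-like) tails on a log scale. The following sketch illustrates that statistical statement only; it is not a reproduction of the authors' RBC data analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1_000_000

    # Conditional statistics: delta_T | eps is Gaussian with variance sigma_T^2(eps),
    # and sigma_T^2 itself follows an exponential distribution across the modes.
    sigma2 = rng.exponential(1.0, n)                 # exponentially distributed variances
    delta_T = rng.normal(0.0, np.sqrt(sigma2))       # Gaussian draw for each variance

    # The mixture has exponential tails: log-PDF is ~ linear in |delta_T| far from zero
    hist, edges = np.histogram(delta_T, bins=60, range=(-6, 6), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    tail = (centers > 1.0) & (centers < 5.0)
    slope = np.polyfit(centers[tail], np.log(hist[tail]), 1)[0]
    print(f"log-PDF slope in the positive tail: {slope:.2f} (straight line => exponential tail)")
    ```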

  13. A model of canopy photosynthesis incorporating protein distribution through the canopy and its acclimation to light, temperature and CO2

    PubMed Central

    Johnson, Ian R.; Thornley, John H. M.; Frantz, Jonathan M.; Bugbee, Bruce

    2010-01-01

    Background and Aims: The distribution of photosynthetic enzymes, or nitrogen, through the canopy affects canopy photosynthesis, as well as plant quality and nitrogen demand. Most canopy photosynthesis models assume an exponential distribution of nitrogen, or protein, through the canopy, although this is rarely consistent with experimental observation. Previous optimization schemes to derive the nitrogen distribution through the canopy generally focus on the distribution of a fixed amount of total nitrogen, which fails to account for the variation in both the actual quantity of nitrogen in response to environmental conditions and the interaction of photosynthesis and respiration at similar levels of complexity. Model: A model of canopy photosynthesis is presented for C3 and C4 canopies that considers a balanced approach between photosynthesis and respiration as well as plant carbon partitioning. Protein distribution is related to irradiance in the canopy by a flexible equation for which the exponential distribution is a special case. The model is designed to be simple to parameterize for crop, pasture and ecosystem studies. The amount and distribution of protein that maximizes canopy net photosynthesis is calculated. Key Results: The optimum protein distribution is not exponential, but is quite linear near the top of the canopy, which is consistent with experimental observations. The overall concentration within the canopy is dependent on environmental conditions, including the distribution of direct and diffuse components of irradiance. Conclusions: The widely used exponential distribution of nitrogen or protein through the canopy is generally inappropriate. The model derives the optimum distribution with characteristics that are consistent with observation, so overcoming limitations of using the exponential distribution. Although canopies may not always operate at an optimum, optimization analysis provides valuable insight into plant acclimation to environmental conditions. Protein distribution has implications for the prediction of carbon assimilation, plant quality and nitrogen demand. PMID:20861273

  14. Relationship between Item Responses of Negative Affect Items and the Distribution of the Sum of the Item Scores in the General Population

    PubMed Central

    Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    Background: Several studies have shown that total depressive symptom scores in the general population approximate an exponential pattern, except for the lower end of the distribution. The Center for Epidemiologic Studies Depression Scale (CES-D) consists of 20 items, each of which may take on four scores: "rarely," "some," "occasionally," and "most of the time." Recently, we reported that the item responses for 16 negative affect items commonly exhibit exponential patterns, except for the level of "rarely," leading us to hypothesize that the item responses at the level of "rarely" may be related to the non-exponential pattern typical of the lower end of the distribution. To verify this hypothesis, we investigated how the item responses contribute to the distribution of the sum of the item scores. Methods: Data collected from 21,040 subjects who had completed the CES-D questionnaire as part of a Japanese national survey were analyzed. To assess the item responses of negative affect items, we used a parameter r, which denotes the ratio of "rarely" to "some" in each item response. The distributions of the sum of negative affect items in various combinations were analyzed using log-normal scales and curve fitting. Results: The sum of the item scores approximated an exponential pattern regardless of the combination of items, whereas, at the lower end of the distributions, there was a clear divergence between the actual data and the predicted exponential pattern. At the lower end of the distributions, the sum of the item scores with high values of r exhibited higher scores than predicted from the exponential pattern, whereas the sum of the item scores with low values of r exhibited lower scores than predicted. Conclusions: The distributional pattern of the sum of the item scores could be predicted from the item responses of such items. PMID:27806132

  15. A Library of Selenourea Precursors to PbSe Nanocrystals with Size Distributions near the Homogeneous Limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campos, Michael P.; Hendricks, Mark P.; Beecher, Alexander N.

    Here, we report a tunable library of N,N,N'-trisubstituted selenourea precursors and their reaction with lead oleate at 60–150 °C to form carboxylate-terminated PbSe nanocrystals in quantitative yields. Single exponential conversion kinetics can be tailored over 4 orders of magnitude by adjusting the selenourea structure. The wide range of conversion reactivity allows the extent of nucleation ([nanocrystal] = 4.6–56.7 μM) and the size following complete precursor conversion (d = 1.7–6.6 nm) to be controlled. Narrow size distributions (σ = 0.5–2%) are obtained whose spectral line widths are dominated (73–83%) by the intrinsic single particle spectral broadening, as observed using spectral hole burning measurements. Here, the intrinsic broadening decreases with increasing size (fwhm = 320–65 meV, d = 1.6–4.4 nm) and derives from exciton fine structure and exciton–phonon coupling rather than broadening caused by the size distribution.

  16. A Library of Selenourea Precursors to PbSe Nanocrystals with Size Distributions near the Homogeneous Limit

    DOE PAGES

    Campos, Michael P.; Hendricks, Mark P.; Beecher, Alexander N.; ...

    2017-01-19

    Here, we report a tunable library of N,N,N'-trisubstituted selenourea precursors and their reaction with lead oleate at 60–150 °C to form carboxylate-terminated PbSe nanocrystals in quantitative yields. Single exponential conversion kinetics can be tailored over 4 orders of magnitude by adjusting the selenourea structure. The wide range of conversion reactivity allows the extent of nucleation ([nanocrystal] = 4.6–56.7 μM) and the size following complete precursor conversion (d = 1.7–6.6 nm) to be controlled. Narrow size distributions (σ = 0.5–2%) are obtained whose spectral line widths are dominated (73–83%) by the intrinsic single particle spectral broadening, as observed using spectral hole burning measurements. Here, the intrinsic broadening decreases with increasing size (fwhm = 320–65 meV, d = 1.6–4.4 nm) and derives from exciton fine structure and exciton–phonon coupling rather than broadening caused by the size distribution.

  17. The discrete Laplace exponential family and estimation of Y-STR haplotype frequencies.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2013-07-21

    Estimating haplotype frequencies is important in e.g. forensic genetics, where the frequencies are needed to calculate the likelihood ratio for the evidential weight of a DNA profile found at a crime scene. Estimation is naturally based on a population model, motivating the investigation of the Fisher-Wright model of evolution for haploid lineage DNA markers. An exponential family (a class of probability distributions that is well understood in probability theory such that inference is easily made by using existing software) called the 'discrete Laplace distribution' is described. We illustrate how well the discrete Laplace distribution approximates a more complicated distribution that arises by investigating the well-known population genetic Fisher-Wright model of evolution by a single-step mutation process. It was shown how the discrete Laplace distribution can be used to estimate haplotype frequencies for haploid lineage DNA markers (such as Y-chromosomal short tandem repeats), which in turn can be used to assess the evidential weight of a DNA profile found at a crime scene. This was done by making inference in a mixture of multivariate, marginally independent, discrete Laplace distributions using the EM algorithm to estimate the probabilities of membership of a set of unobserved subpopulations. The discrete Laplace distribution can be used to estimate haplotype frequencies with lower prediction error than other existing estimators. Furthermore, the calculations could be performed on a normal computer. This method was implemented in the freely available open source software R that is supported on Linux, MacOS and MS Windows. Copyright © 2013 Elsevier Ltd. All rights reserved.
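
    For reference, the symmetric discrete Laplace (double geometric) family has the probability mass function P(X = k) = ((1 - p)/(1 + p)) p^|k| for integer k and 0 < p < 1. The snippet below simply evaluates this pmf as a generic illustration of the distribution family; it is not the haplotype-frequency estimator itself, and the parameterisation used in the paper may include a location term.

    ```python
    import numpy as np

    def discrete_laplace_pmf(k, p):
        """P(X = k) = (1 - p) / (1 + p) * p**abs(k) for integer k, with 0 < p < 1."""
        return (1.0 - p) / (1.0 + p) * p ** np.abs(np.asarray(k))

    p = 0.3
    k = np.arange(-5, 6)
    print(np.c_[k, np.round(discrete_laplace_pmf(k, p), 4)])
    print("probability mass summed over a wide range:",
          discrete_laplace_pmf(np.arange(-200, 201), p).sum())   # ~ 1.0
    ```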

  18. Design and Analysis of Scheduling Policies for Real-Time Computer Systems

    DTIC Science & Technology

    1992-01-01

    Only fragments of this record survive: citations to C. M. Krishna, "The Impact of Workload on the Reliability of Real-Time Processor Triads" (to appear in Micro. Rel.) and J. F. Kurose, "Performance Analysis of Minimum Laxity Scheduling in Discrete Time Queueing Systems", together with a truncated excerpt referring to exponentially distributed service times and deadlines and to a similar model developed for the ED policy for a single-processor system.

  19. Modeling of single event transients with dual double-exponential current sources: Implications for logic cell characterization

    DOE PAGES

    Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...

    2015-08-07

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Likewise, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
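
    A conventional double-exponential SET source has the form I(t) = A [exp(-t/τ_fall) - exp(-t/τ_rise)]; the dual-source idea places two such pulses in parallel. The sketch below only evaluates such a composite waveform with invented amplitudes, time constants, and delay, as a rough illustration rather than the paper's extracted parameters:

    ```python
    import numpy as np

    def double_exp(t, amp, tau_rise, tau_fall):
        """Conventional double-exponential current pulse, zero for t < 0."""
        pulse = amp * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))
        return np.where(t >= 0, pulse, 0.0)

    # Dual source: a fast "prompt" component plus a delayed, slower component
    # (all amplitudes, time constants, and the delay are hypothetical values).
    t = np.linspace(0.0, 500e-12, 11)                        # 0 to 0.5 ns
    i_total = (double_exp(t, 1.2e-3, 5e-12, 50e-12)
               + double_exp(t - 50e-12, 0.3e-3, 20e-12, 400e-12))

    print(np.c_[t * 1e9, i_total * 1e3])                     # time in ns, current in mA
    ```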

  20. Application of the Calculating Formula for Mean Neutron Exposure to Barium Stars

    NASA Astrophysics Data System (ADS)

    Zhang, F. H.; Zhang, L.; Cui, W. Y.; Zhang, B.

    2017-11-01

    Recent studies have shown that, in the s-process nucleosynthesis model for low-mass asymptotic giant branch (AGB) stars with the 13C pocket burning radiatively during the interpulse period, the distribution of neutron exposures in the nucleosynthesis region can be regarded as an exponential function, and the relation between the mean neutron exposure τ0 and the model parameters is τ0 = -Δτ / ln[q/(1 - r + q)], in which Δτ is the exposure of each neutron irradiation, r is the overlap factor, and q is the mass ratio of the 13C shell to the He intershell. In this paper the formula is applied to 26 samples of barium stars to test its reliability, and the nature of the neutron exposure in the AGB companion stars of the 26 barium stars is analyzed. The results show that the formula is reliable; among the AGB companions of the 26 barium stars, at least 8 stars definitely have, and 12 stars are highly likely to have, an exponential distribution of neutron exposures, while 4 stars appear to have experienced a single neutron exposure; most of the AGB companion stars may have experienced only a few neutron irradiations before the s-process element abundance distribution reached its asymptotic form.
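
    The quoted relation is straightforward to evaluate. The snippet below just implements τ0 = -Δτ / ln[q/(1 - r + q)] with hypothetical values of Δτ, r and q; the paper's actual parameters are not reproduced here.

    ```python
    import numpy as np

    def mean_neutron_exposure(delta_tau, r, q):
        """tau_0 = -delta_tau / ln[q / (1 - r + q)], from the exponential-exposure relation."""
        return -delta_tau / np.log(q / (1.0 - r + q))

    # Hypothetical model parameters: exposure per irradiation (in mbarn^-1),
    # overlap factor r, and 13C-shell to He-intershell mass ratio q.
    print(mean_neutron_exposure(delta_tau=0.15, r=0.6, q=0.1))
    ```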

  1. Correlation of conformational heterogeneity of the tryptophyl side chain and time-resolved fluorescence intensity decay kinetics

    NASA Astrophysics Data System (ADS)

    Laws, William R.; Ross, J. B. Alexander

    1992-04-01

    The time-resolved fluorescence properties of a tryptophan residue should be useful for probing protein structure, function, and dynamics. To date, however, the non-single-exponential fluorescence intensity decay kinetics for numerous peptides and proteins having a single tryptophan residue have not been adequately explained. Many possibilities have been considered, including: (1) contributions from the ¹La and ¹Lb states of indole; (2) excited-state hydrogen exchange; and (3) environmental heterogeneity from χ1 and χ2 rotamers. In addition, it has been suggested that generally many factors contribute to the decay and a distribution of probabilities may be more appropriate. Two recent results support multiple species due to conformational heterogeneity as the major contributor to complex kinetics. First, a rotationally constrained tryptophan analogue has fluorescence intensity decay kinetics that can be described by the sum of two exponentials with amplitudes comparable to the relative populations of the two rotational isomers. Second, the multiple exponentials observed for tyrosine-containing model compounds and peptides correlate with the χ1 rotamer populations independently determined by ¹H NMR. We now report similar correlations between rotamer populations and fluorescence intensity decay kinetics for a tryptophan analogue of oxytocin. It appears for this compound that either χ2 rotations do not appreciably alter the indole environment, χ2 rotations are rapid enough to average out the observed dependence, or only one of the two possible χ2 populations is associated with each χ1 rotamer.

  2. Avalanche Analysis from Multielectrode Ensemble Recordings in Cat, Monkey, and Human Cerebral Cortex during Wakefulness and Sleep

    PubMed Central

    Dehghani, Nima; Hatsopoulos, Nicholas G.; Haga, Zach D.; Parker, Rebecca A.; Greger, Bradley; Halgren, Eric; Cash, Sydney S.; Destexhe, Alain

    2012-01-01

    Self-organized critical states are found in many natural systems, from earthquakes to forest fires; they have also been observed in neural systems, particularly in neuronal cultures. However, the presence of critical states in the awake brain remains controversial. Here, we compared avalanche analyses performed on different in vivo preparations during wakefulness, slow-wave sleep, and REM sleep, using high-density electrode arrays in cat motor cortex (96 electrodes), monkey motor cortex and premotor cortex, and human temporal cortex (96 electrodes) in epileptic patients. In neuronal avalanches defined from units (up to 160 single units), the size of avalanches never clearly scaled as a power law, but rather scaled exponentially or displayed intermediate scaling. We also analyzed the dynamics of local field potentials (LFPs), and in particular LFP negative peaks (nLFPs), among the different electrodes (up to 96 sites in temporal cortex or up to 128 sites in adjacent motor and premotor cortices). In this case, the avalanches defined from nLFPs displayed power-law scaling in double-logarithmic representations, as reported previously in monkey. However, avalanches defined from positive LFP (pLFP) peaks, which are less directly related to neuronal firing, also displayed apparent power-law scaling. Closer examination of this scaling using the more reliable cumulative distribution function (CDF) and other rigorous statistical measures did not confirm power-law scaling. The same pattern was seen for cats, monkeys, and humans, as well as for the different brain states of wakefulness and sleep. We also tested other alternative distributions. Multiple exponential fitting yielded optimal fits of the avalanche dynamics with bi-exponential distributions. Collectively, these results show no clear evidence for power-law scaling or self-organized critical states in the awake and sleeping brain of mammals, from cat to man. PMID:22934053

  3. Self Organized Criticality as a new paradigm of sleep regulation

    NASA Astrophysics Data System (ADS)

    Ivanov, Plamen Ch.; Bartsch, Ronny P.

    2012-02-01

    Humans and animals often exhibit brief awakenings from sleep (arousals), which are traditionally viewed as random disruptions of sleep caused by external stimuli or pathologic perturbations. However, our recent findings show that arousals exhibit complex temporal organization and scale-invariant behavior, characterized by a power-law probability distribution for their durations, while sleep stage durations exhibit exponential behavior. The co-existence of both scale-invariant and exponential processes generated by a single regulatory mechanism has not been observed in physiological systems until now. Such co-existence resembles the dynamical features of non-equilibrium systems exhibiting self-organized criticality (SOC). Our empirical analysis and modeling approaches based on modern concepts from statistical physics indicate that arousals are an integral part of sleep regulation and may be necessary to maintain and regulate healthy sleep by releasing accumulated excitations in the regulatory neuronal networks, following a SOC-type temporal organization.

  4. Hyperchaotic Dynamics for Light Polarization in a Laser Diode

    NASA Astrophysics Data System (ADS)

    Bonatto, Cristian

    2018-04-01

    It is shown that a highly randomlike behavior of light polarization states in the output of a free-running laser diode, covering the whole Poincaré sphere, arises from a fully deterministic nonlinear process, characterized by hyperchaotic dynamics of two polarization modes nonlinearly coupled with a semiconductor medium inside the optical cavity. A number of statistical distributions were found to describe the deterministic data of the low-dimensional nonlinear flow, such as a lognormal distribution for the light intensity, Gaussian distributions for the electric field components and electron densities, and Rice and Rayleigh distributions, and Weibull and negative exponential distributions, for the modulus and intensity of the orthogonal linear components of the electric field, respectively. The presented results could be relevant for the generation of single units of compact light-source devices to be used in low-dimensional optical hyperchaos-based applications.

  5. Pattern analysis of total item score and item response of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative sample of US adults

    PubMed Central

    Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Yutaka, Ono; Furukawa, Toshiaki A.

    2017-01-01

    Background Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except for the lower end of the distribution. Furthermore, we confirmed that the exponential pattern is present for the individual item responses on the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of such findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study. Methods Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored using a 5-point scale: “none of the time,” “a little of the time,” “some of the time,” “most of the time,” and “all of the time.” The patterns of the total score distribution and item responses were analyzed using graphical analysis and an exponential regression model. Results The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from “a little of the time” to “all of the time” on log-normal scales, while the “none of the time” response was not related to this exponential pattern. Discussion The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales. PMID:28289560

  6. Investigation of non-Gaussian effects in the Brazilian option market

    NASA Astrophysics Data System (ADS)

    Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.

    2018-04-01

    An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power-law distribution, the so-called q-Gaussian or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. Among these cases, however, the exponential model performs better than the q-Gaussian model 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above ninety percent.
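
    For orientation, a short sketch of the Black-Scholes call price that serves as the baseline model in comparisons of this kind; the exponential and q-Gaussian pricing formulas studied in the paper are not reproduced here, and the inputs below are placeholders.

        import numpy as np
        from scipy.stats import norm

        def bs_call(s, k, t, r, sigma):
            """Black-Scholes price of a European call on a non-dividend-paying asset."""
            d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
            d2 = d1 - sigma * np.sqrt(t)
            return s * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

        print(round(bs_call(s=100.0, k=105.0, t=0.25, r=0.1, sigma=0.3), 2))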

  7. Calculating Formulae of Proportion Factor and Mean Neutron Exposure in the Exponential Expression of Neutron Exposure Distribution

    NASA Astrophysics Data System (ADS)

    Feng-Hua, Zhang; Gui-De, Zhou; Kun, Ma; Wen-Juan, Ma; Wen-Yuan, Cui; Bo, Zhang

    2016-07-01

    Previous studies have shown that, for the three main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the neutron exposure distribution (DNE) in the nucleosynthesis region can always be considered as an exponential function, i.e., ρAGB(τ) = (C/τ0) exp(-τ/τ0), within an effective range of neutron exposure values. However, the specific expressions of the proportion factor C and the mean neutron exposure τ0 in the exponential distribution function for different models have not been fully determined in the related literature. By dissecting the basic method used to obtain the exponential DNE, and systematically analyzing the solution procedures for the neutron exposure distribution functions in different stellar models, the general formulae, as well as their auxiliary equations, for calculating C and τ0 are derived. Given the discrete neutron exposure distribution Pk, the relationships of C and τ0 with the model parameters can be determined. The result of this study effectively solves the problem of analytically calculating the DNE in the current low-mass AGB star s-process nucleosynthesis model with 13C-pocket radiative burning.

  8. Echo Statistics of Aggregations of Scatterers in a Random Waveguide: Application to Biologic Sonar Clutter

    DTIC Science & Technology

    2012-09-01

    Only fragments of this record survive: the paper uses two tools to compare probability density functions, the Lilliefors test and the Kullback-Leibler distance; the distributions of interest in the study are the Rayleigh distribution and the exponential distribution; and the Lilliefors test is used to test goodness of fit with an exponential distribution. These results suggest that ...
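
    A hedged sketch of a Lilliefors-type goodness-of-fit test for an exponential distribution: because the scale is estimated from the data, the ordinary Kolmogorov-Smirnov critical values do not apply, so the null distribution of the statistic is built by Monte Carlo simulation. This illustrates the technique named in the record, not the authors' code.

        import numpy as np
        from scipy import stats

        def lilliefors_exponential(data, n_sim=2000, seed=0):
            """KS distance to an exponential with scale estimated from the data,
            with a p-value obtained by simulating the null distribution."""
            rng = np.random.default_rng(seed)
            data = np.asarray(data, float)
            d_obs = stats.kstest(data, "expon", args=(0.0, data.mean())).statistic
            d_null = np.empty(n_sim)
            for i in range(n_sim):
                sim = rng.exponential(data.mean(), size=data.size)
                d_null[i] = stats.kstest(sim, "expon", args=(0.0, sim.mean())).statistic
            return d_obs, float(np.mean(d_null >= d_obs))

        # toy echo data: squared Rayleigh amplitudes are exponentially distributed
        amplitudes = np.random.default_rng(1).rayleigh(2.0, size=500)
        print(lilliefors_exponential(amplitudes ** 2))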

  9. A mechanism producing power law etc. distributions

    NASA Astrophysics Data System (ADS)

    Li, Heling; Shen, Hongjun; Yang, Bin

    2017-07-01

    The power-law distribution plays an increasingly important role in the study of complex systems. Motivated by the intractability of complex systems, the idea of incomplete statistics is adopted and extended: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy, and probability distribution functions of exponential form, power-law form, and the product of a power law and an exponential are derived from the Shannon entropy and the maximum entropy principle. It is thus shown that the maximum entropy principle can entirely replace the equal-probability hypothesis. Because the power-law distribution and the distribution in the product form of a power law and an exponential, which cannot be derived via the equal-probability hypothesis, can be derived with the aid of the maximum entropy principle, it can also be concluded that the maximum entropy principle is a basic principle that embodies broader concepts and reveals the basic laws governing the motion of objects more fundamentally. At the same time, this principle also reveals an intrinsic link between Nature and the various objects of human society, and the principles obeyed by all.

  10. Extended q -Gaussian and q -exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q -Gaussian and q -exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q -Gaussian and q -exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q -Gaussian and modified q -exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.

  11. Not all nonnormal distributions are created equal: Improved theoretical and measurement precision.

    PubMed

    Joo, Harry; Aguinis, Herman; Bradley, Kyle J

    2017-07-01

    We offer a four-category taxonomy of individual output distributions (i.e., distributions of cumulative results): (1) pure power law; (2) lognormal; (3) exponential tail (including exponential and power law with an exponential cutoff); and (4) symmetric or potentially symmetric (including normal, Poisson, and Weibull). The four categories are uniquely associated with mutually exclusive generative mechanisms: self-organized criticality, proportionate differentiation, incremental differentiation, and homogenization. We then introduce distribution pitting, a falsification-based method for comparing distributions to assess how well each one fits a given data set. In doing so, we also introduce decision rules to determine the likely dominant shape and generative mechanism among many that may operate concurrently. Next, we implement distribution pitting using 229 samples of individual output for several occupations (e.g., movie directors, writers, musicians, athletes, bank tellers, call center employees, grocery checkers, electrical fixture assemblers, and wirers). Results suggest that for 75% of our samples, exponential tail distributions and their generative mechanism (i.e., incremental differentiation) likely constitute the dominant distribution shape and explanation of nonnormally distributed individual output. This finding challenges past conclusions indicating the pervasiveness of other types of distributions and their generative mechanisms. Our results further contribute to theory by offering premises about the link between past and future individual output. For future research, our taxonomy and methodology can be used to pit distributions of other variables (e.g., organizational citizenship behaviors). Finally, we offer practical insights on how to increase overall individual output and produce more top performers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  13. Conditional optimal spacing in exponential distribution.

    PubMed

    Park, Sangun

    2006-12-01

    In this paper, we propose the conditional optimal spacing defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take an example of exponential distribution, and provide a simple method of finding the conditional optimal spacing.

  14. Reliability and sensitivity analysis of a system with multiple unreliable service stations and standby switching failures

    NASA Astrophysics Data System (ADS)

    Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung

    2007-07-01

    This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability RY(t) and the mean time to system failure (MTTF) are derived. Sensitivity and relative sensitivity analyses of the system reliability and the mean time to failure with respect to the system parameters are also investigated.

  15. Correlating the stretched-exponential and super-Arrhenius behaviors in the structural relaxation of glass-forming liquids.

    PubMed

    Wang, Lianwen; Li, Jiangong; Fecht, Hans-Jörg

    2011-04-20

    Following the report of a single-exponential activation behavior behind the super-Arrhenius structural relaxation of glass-forming liquids in our preceding paper, we find that the non-exponentiality in the structural relaxation of glass-forming liquids is straightforwardly determined by the relaxation time, and could be calculated from the measured relaxation data. Comparisons between the calculated and measured non-exponentialities for typical glass-forming liquids, from fragile to intermediate, convincingly support the present analysis. Hence the origin of the non-exponentiality and its correlation with liquid fragility become clearer.

  16. Characteristics of single Ca(2+) channel kinetics in feline hypertrophied ventricular myocytes.

    PubMed

    Yang, Xiangjun; Hui, Jie; Jiang, Tingbo; Song, Jianping; Liu, Zhihua; Jiang, Wenping

    2002-04-01

    The aim was to explore the mechanism underlying the prolongation of the action potential and the delayed inactivation of the L-type Ca(2+) current (I(Ca,L)) in a feline model of left ventricular systolic hypertension and concomitant hypertrophy. Single Ca(2+) channel properties in myocytes isolated from normal and pressure-overloaded cat left ventricles were studied using patch-clamp techniques. Left ventricular pressure overload was induced by partial ligation of the ascending aorta for 4 - 6 weeks. The amplitude of the single Ca(2+) channel current evoked by depolarizing pulses from -40 mV to 0 mV was 1.02 +/- 0.03 pA in normal cells and 1.05 +/- 0.03 pA in hypertrophied cells, and there was no difference in the single-channel current-voltage relationships between the groups, with a slope conductance of 26.2 +/- 1.0 pS in both normal and hypertrophied cells. Peak amplitudes of the ensemble-averaged single Ca(2+) channel currents were not different between the two groups of cells. However, the amplitude of this averaged current at the end of the clamp pulse was significantly larger in hypertrophied cells than in normal cells. Open-time histograms revealed that the open-time distribution was fitted by a single exponential function in channels of normal cells and by the sum of two exponential functions in channels of hypertrophied cells. The number of long-lasting openings was increased in channels of hypertrophied cells, and the calculated mean open time of the channel was therefore significantly longer than in normal controls. Kinetic changes in the Ca(2+) channel may underlie both the hypertrophy-associated delayed inactivation of the Ca(2+) current and, in part, the pressure-overload-induced action potential lengthening in this cat model of left ventricular systolic hypertension and hypertrophy.
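
    A minimal sketch of the kind of comparison behind such open-time analyses: maximum-likelihood fits of a single exponential versus a two-component exponential mixture to dwell times, compared by AIC. The data below are synthetic placeholders, not recorded channel data.

        import numpy as np
        from scipy.optimize import minimize

        def nll_one_exp(params, t):
            tau = np.exp(params[0])
            return -np.sum(np.log(np.exp(-t / tau) / tau))

        def nll_two_exp(params, t):
            """params = (logit of mixing fraction, log tau1, log tau2)."""
            a = 1.0 / (1.0 + np.exp(-params[0]))
            tau1, tau2 = np.exp(params[1]), np.exp(params[2])
            pdf = a * np.exp(-t / tau1) / tau1 + (1.0 - a) * np.exp(-t / tau2) / tau2
            return -np.sum(np.log(pdf))

        rng = np.random.default_rng(0)      # synthetic open times (ms), two populations
        t = np.concatenate([rng.exponential(0.5, 800), rng.exponential(3.0, 200)])

        fit1 = minimize(nll_one_exp, x0=[np.log(t.mean())], args=(t,))
        fit2 = minimize(nll_two_exp, x0=[0.0, np.log(0.3), np.log(3.0)], args=(t,))
        print(f"AIC one-exponential: {2 * 1 + 2 * fit1.fun:.1f}")
        print(f"AIC two-exponential: {2 * 3 + 2 * fit2.fun:.1f}")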

  17. Smooth centile curves for skew and kurtotic data modelled using the Box-Cox power exponential distribution.

    PubMed

    Rigby, Robert A; Stasinopoulos, D Mikis

    2004-10-15

    The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^ν having a shifted and scaled (truncated) standard power exponential distribution with parameter τ. The distribution has four parameters and is denoted BCPE(μ, σ, ν, τ). The parameters μ, σ, ν, and τ may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry), and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood with respect to μ, σ, ν, and τ, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation provides a generalization of the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution, and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. Copyright © 2004 John Wiley & Sons, Ltd.

  18. Self-organized Segregation on the Grid

    NASA Astrophysics Data System (ADS)

    Omidvar, Hamed; Franceschetti, Massimo

    2018-02-01

    We consider an agent-based model with exponentially distributed waiting times in which two types of agents interact locally over a graph and, based on this interaction and on the value of a common intolerance threshold τ, decide whether to change their types. This is equivalent to a zero-temperature Ising model with Glauber dynamics, an asynchronous cellular automaton with extended Moore neighborhoods, or a Schelling model of self-organized segregation in an open system, and has applications in the analysis of social and biological networks and spin glass systems. Some rigorous results were recently obtained in the theoretical computer science literature, and this work provides several extensions. We enlarge the intolerance interval leading to the expected formation of large segregated regions of agents of a single type from the known size ε > 0 to size ≈ 0.134. Namely, we show that for 0.433 < τ < 1/2 (and by symmetry 1/2 < τ < 0.567), the expected size of the largest segregated region containing an arbitrary agent is exponential in the size of the neighborhood. We further extend the interval leading to expected large segregated regions to size ≈ 0.312 by considering "almost segregated" regions, namely regions where the ratio of the number of agents of one type to the number of agents of the other type vanishes quickly as the size of the neighborhood grows. In this case, we show that for 0.344 < τ ≤ 0.433 (and by symmetry for 0.567 ≤ τ < 0.656) the expected size of the largest almost segregated region containing an arbitrary agent is exponential in the size of the neighborhood. This behavior is reminiscent of supercritical percolation, where small clusters of empty sites can be observed within any sufficiently large region of the occupied percolation cluster. The exponential bounds that we provide also imply that complete segregation, where agents of a single type cover the whole grid, does not occur with high probability for p = 1/2 and the range of intolerance considered.

  19. Analysis of crackling noise using the maximum-likelihood method: Power-law mixing and exponential damping.

    PubMed

    Salje, Ekhard K H; Planes, Antoni; Vives, Eduard

    2017-10-01

    Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
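
    A sketch of the standard continuous maximum-likelihood exponent estimate above a lower cutoff, α̂ = 1 + n / Σ ln(x_i / x_min); scanning x_min on synthetic data mixing a power law with exponentially damped events illustrates the cutoff-dependent, nonuniversal exponents referred to above. All parameter values are illustrative.

        import numpy as np

        def powerlaw_alpha_mle(x, x_min):
            """Continuous ML estimate of a power-law exponent above a lower cutoff."""
            tail = np.asarray(x, float)
            tail = tail[tail >= x_min]
            return 1.0 + tail.size / np.sum(np.log(tail / x_min)), tail.size

        rng = np.random.default_rng(2)
        power_law = (1.0 - rng.random(5000)) ** (-1.0 / 0.8)   # Pareto tail, alpha = 1.8
        damped = rng.exponential(5.0, 5000)                    # exponentially damped events
        x = np.concatenate([power_law, damped])

        for x_min in (1.0, 3.0, 10.0, 30.0):
            alpha, n = powerlaw_alpha_mle(x, x_min)
            print(f"x_min = {x_min:5.1f}   alpha_hat = {alpha:.2f}   (n = {n})")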

  20. The shock waves in decaying supersonic turbulence

    NASA Astrophysics Data System (ADS)

    Smith, M. D.; Mac Low, M.-M.; Zuev, J. M.

    2000-04-01

    We here analyse numerical simulations of supersonic, hypersonic and magnetohydrodynamic turbulence that is free to decay. Our goals are to understand the dynamics of the decay and the characteristic properties of the shock waves produced. This will be useful for interpretation of observations of both motions in molecular clouds and sources of non-thermal radiation. We find that decaying hypersonic turbulence possesses an exponential tail of fast shocks and an exponential decay in time, i.e. the number of shocks is proportional to t exp (-ktv) for shock velocity jump v and mean initial wavenumber k. In contrast to the velocity gradients, the velocity Probability Distribution Function remains Gaussian with a more complex decay law. The energy is dissipated not by fast shocks but by a large number of low Mach number shocks. The power loss peaks near a low-speed turn-over in an exponential distribution. An analytical extension of the mapping closure technique is able to predict the basic decay features. Our analytic description of the distribution of shock strengths should prove useful for direct modeling of observable emission. We note that an exponential distribution of shocks such as we find will, in general, generate very low excitation shock signatures.

  1. Eruption probabilities for the Lassen Volcanic Center and regional volcanism, northern California, and probabilities for large explosive eruptions in the Cascade Range

    USGS Publications Warehouse

    Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick

    2012-01-01

    Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4×10⁻⁴ for the exponential distribution and 2.3×10⁻⁴ for the mixed-exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5×10⁻⁴, but the mixed-exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (≳5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes ≳5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate an annual probability of a large eruption of 4.6×10⁻⁴. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate a probability of a large eruption in the next year of 1.4×10⁻⁵.
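
    A sketch of how such next-year probabilities follow from an interval model: with survival function S(t) for the repose time, the probability of an eruption within the next year, given a repose that has already lasted T years, is [S(T) - S(T + 1)]/S(T). The mixed-exponential weights and rates below are placeholders, not the fitted values from the report; the example only illustrates that, unlike the pure exponential, the mixture's conditional probability decreases as the elapsed repose grows.

        import numpy as np

        def survival_mixed_exp(t, weights, rates):
            """Survival function of a mixture of exponential repose-time distributions."""
            w, lam = np.asarray(weights, float), np.asarray(rates, float)
            return float(np.sum(w * np.exp(-lam * t)))

        def prob_eruption_next_year(elapsed, weights, rates):
            """P(eruption within 1 yr | repose has already lasted `elapsed` years)."""
            s_now = survival_mixed_exp(elapsed, weights, rates)
            s_next = survival_mixed_exp(elapsed + 1.0, weights, rates)
            return (s_now - s_next) / s_now

        weights, rates = [0.7, 0.3], [1.0 / 300.0, 1.0 / 30000.0]   # placeholder model
        for elapsed in (0.0, 1000.0, 12000.0):
            print(elapsed, prob_eruption_next_year(elapsed, weights, rates))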

  2. Universal patterns of inequality

    NASA Astrophysics Data System (ADS)

    Banerjee, Anand; Yakovenko, Victor M.

    2010-07-01

    Probability distributions of money, income and energy consumption per capita are studied for ensembles of economic agents. The principle of entropy maximization for partitioning of a limited resource gives exponential distributions for the investigated variables. A non-equilibrium difference of money temperatures between different systems generates net fluxes of money and population. To describe income distribution, a stochastic process with additive and multiplicative components is introduced. The resultant distribution interpolates between exponential at the low end and power law at the high end, in agreement with the empirical data for the USA. We show that the increase in income inequality in the USA originates primarily from the increase in the income fraction going to the upper tail, which now exceeds 20% of the total income. Analyzing the data from the World Resources Institute, we find that the distribution of energy consumption per capita around the world can be approximately described by the exponential function. Comparing the data for 1990, 2000 and 2005, we discuss the effect of globalization on the inequality of energy consumption.

  3. Study on probability distributions for evolution in modified extremal optimization

    NASA Astrophysics Data System (ADS)

    Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian

    2010-05-01

    It is widely believed that the power law is a proper probability distribution to be effectively applied for evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to some NP-hard problems, e.g., graph partitioning, graph coloring, spin glass, etc. In this study, we discover that the exponential distributions or hybrid ones (e.g., power laws with exponential cutoff) popularly used in the research of network sciences may replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods, such as simulated annealing, τ-EO, and SOA, etc., based on experimental results on random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results appear to demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions may be other choices.

  4. Probing the stochastic property of endoreduplication in cell size determination of Arabidopsis thaliana leaf epidermal tissue

    PubMed Central

    2017-01-01

    Cell size distribution is highly reproducible, whereas the size of individual cells often varies greatly within a tissue. This is obvious in a population of Arabidopsis thaliana leaf epidermal cells, which ranged from 1,000 to 10,000 μm2 in size. Endoreduplication is a specialized cell cycle in which nuclear genome size (ploidy) is doubled in the absence of cell division. Although epidermal cells require endoreduplication to enhance cellular expansion, the issue of whether this mechanism is sufficient for explaining cell size distribution remains unclear due to a lack of quantitative understanding linking the occurrence of endoreduplication with cell size diversity. Here, we addressed this question by quantitatively summarizing ploidy profile and cell size distribution using a simple theoretical framework. We first found that endoreduplication dynamics is a Poisson process through cellular maturation. This finding allowed us to construct a mathematical model to predict the time evolution of a ploidy profile with a single rate constant for endoreduplication occurrence in a given time. We reproduced experimentally measured ploidy profile in both wild-type leaf tissue and endoreduplication-related mutants with this analytical solution, further demonstrating the probabilistic property of endoreduplication. We next extended the mathematical model by incorporating the element that cell size is determined according to ploidy level to examine cell size distribution. This analysis revealed that cell size is exponentially enlarged 1.5 times every endoreduplication round. Because this theoretical simulation successfully recapitulated experimentally observed cell size distributions, we concluded that Poissonian endoreduplication dynamics and exponential size-boosting are the sources of the broad cell size distribution in epidermal tissue. More generally, this study contributes to a quantitative understanding whereby stochastic dynamics generate steady-state biological heterogeneity. PMID:28926847
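
    A minimal simulation of the two model ingredients named above, under stated assumptions: the number of endoreduplication rounds per cell is Poisson-distributed, and cell size grows by a factor of 1.5 with each round. The mean number of rounds and the baseline size are placeholders.

        import numpy as np

        rng = np.random.default_rng(3)
        n_cells = 10_000
        mean_rounds = 2.0          # Poisson mean of endoreduplication rounds (placeholder)
        base_size = 1_000.0        # baseline cell size in um^2 (placeholder)

        rounds = rng.poisson(mean_rounds, size=n_cells)
        sizes = base_size * 1.5 ** rounds          # exponential size boost per round

        print("cells per ploidy class (2C, 4C, 8C, ...):", np.bincount(rounds))
        print(f"resulting size range: {sizes.min():.0f} - {sizes.max():.0f} um^2")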

  5. A fractal process of hydrogen diffusion in a-Si:H with exponential energy distribution

    NASA Astrophysics Data System (ADS)

    Hikita, Harumi; Ishikawa, Hirohisa; Morigaki, Kazuo

    2017-04-01

    Hydrogen diffusion in a-Si:H with an exponential distribution of the states in energy exhibits a fractal structure. It is shown that the probability P(t) of the pausing time t has the form t^α (α: fractal dimension). It is shown that the fractal dimension α = Tr/T0 (Tr: hydrogen temperature, T0: a temperature corresponding to the width of the exponential distribution of the states in energy) is in agreement with the Hausdorff dimension. The fractal graph for the case α ≤ 1 is like the Cantor set. The fractal graph for the case α > 1 is like the Koch curve. At α = ∞, hydrogen migration exhibits Brownian motion. Hydrogen diffusion in a-Si:H should be a fractal process.

  6. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

    Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.

  7. Single- and multiple-pulse noncoherent detection statistics associated with partially developed speckle.

    PubMed

    Osche, G R

    2000-08-20

    Single- and multiple-pulse detection statistics are presented for aperture-averaged direct detection optical receivers operating against partially developed speckle fields. A partially developed speckle field arises when the probability density function of the received intensity does not follow negative exponential statistics. The case of interest here is the target surface that exhibits diffuse as well as specular components in the scattered radiation. An approximate expression is derived for the integrated intensity at the aperture, which leads to single- and multiple-pulse discrete probability density functions for the case of a Poisson signal in Poisson noise with an additive coherent component. In the absence of noise, the single-pulse discrete density function is shown to reduce to a generalized negative binomial distribution. The radar concept of integration loss is discussed in the context of direct detection optical systems where it is shown that, given an appropriate set of system parameters, multiple-pulse processing can be more efficient than single-pulse processing over a finite range of the integration parameter n.

  8. Weblog patterns and human dynamics with decreasing interest

    NASA Astrophysics Data System (ADS)

    Guo, J.-L.; Fan, C.; Guo, Z.-H.

    2011-06-01

    To describe the phenomenon that people's interest in an activity is typically high at the beginning and gradually decreases until it reaches a balance, a model describing the attenuation of interest is proposed to reflect the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model via non-homogeneous Poisson processes. Our analysis indicates that the interval distribution of arrival times is a mixed distribution with exponential and power-law features, i.e., a power law with an exponential cutoff. We then collect blogs from ScienceNet.cn and carry out an empirical study of the interarrival-time distribution. The empirical results agree well with the theoretical analysis, obeying a special power law with exponential cutoff, that is, a special kind of Gamma distribution. These empirical results verify the model by providing evidence for a new class of phenomena in human dynamics. It can be concluded that, besides power-law distributions, there are other distributions in human dynamics. These findings demonstrate the variety of human behavior dynamics.
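
    Because a power law with an exponential cutoff in this context is a Gamma density, a quick empirical check can be made by fitting a Gamma distribution to the interarrival times. The sketch below uses synthetic data in place of the blog data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        intervals = rng.gamma(shape=0.6, scale=20.0, size=2000)   # stand-in interarrival times

        shape, loc, scale = stats.gamma.fit(intervals, floc=0)    # location fixed at zero
        print(f"fitted Gamma: shape = {shape:.2f}, scale = {scale:.1f}")
        # density ~ t**(shape - 1) * exp(-t / scale): a power law with exponential cutoff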

  9. An Oil-Stream Photomicrographic Aeroscope for Obtaining Cloud Liquid-Water Content and Droplet Size Distributions in Flight

    NASA Technical Reports Server (NTRS)

    Hacker, Paul T.

    1956-01-01

    An airborne cloud aeroscope by which droplet size, size distribution, and liquid-water content of clouds can be determined has been developed and tested in flight and in wind tunnels with water sprays. In this aeroscope the cloud droplets are continuously captured in a stream of oil, which is then photographed by a photomicrographic camera. The droplet size and size distribution can be determined directly from the photographs. With the droplet size distribution known, the liquid-water content of the cloud can be computed from the geometry of the aeroscope, the airspeed, and the oil-flow rate. The aeroscope has the following features: Data are obtained semi-automatically, and permanent data are taken in the form of photographs. A single picture usually contains a sufficient number of droplets to establish the droplet size distribution. Cloud droplets are continuously captured in the stream of oil, but pictures are taken at intervals. The aeroscope can be operated in icing and non-icing conditions. Because of mixing of oil in the instrument, the droplet-distribution patterns and liquid-water content values from a single picture are exponentially weighted average values over a path length of about 3/4 mile at 150 miles per hour. The liquid-water contents, volume-median diameters, and distribution patterns obtained on test flights and in the Lewis icing tunnel are similar to previously published data.

  10. Three-Dimensional Flow of Nanofluid Induced by an Exponentially Stretching Sheet: An Application to Solar Energy

    PubMed Central

    Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.

    2015-01-01

    This work deals with the three-dimensional flow of nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as Keller-box method. The results are compared with the existing studies in some limiting cases and found in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills for temperature distribution corresponding to some range of parametric values. PMID:25785857

  11. Velocity distributions of granular gases with drag and with long-range interactions.

    PubMed

    Kohlstedt, K; Snezhko, A; Sapozhnikov, M V; Aranson, I S; Olafsen, J S; Ben-Naim, E

    2005-08-05

    We study velocity statistics of electrostatically driven granular gases. For two different experiments, (i) nonmagnetic particles in a viscous fluid and (ii) magnetic particles in air, the velocity distribution is non-Maxwellian, and its high-energy tail is exponential, P(υ) ≈ exp(-|υ|). This behavior is consistent with the kinetic theory of driven dissipative particles. For particles immersed in a fluid, viscous damping is responsible for the exponential tail, while for magnetic particles, long-range interactions cause the exponential tail. We conclude that velocity statistics of dissipative gases are sensitive to the fluid environment and to the form of the particle interaction.

  12. Exponential order statistic models of software reliability growth

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1985-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, non-identically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples as well. Various characterizations, properties, and examples of this class of models are developed and presented.

  13. Single-arm phase II trial design under parametric cure models.

    PubMed

    Wu, Jianrong

    2015-01-01

    The current practice of designing single-arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single-arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.

  14. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    PubMed

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of lognormal distributions having different variances, may generate a DPLN distribution.
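
    A hedged sketch of the model-comparison step: scipy does not ship a double Pareto-lognormal distribution, so the example below only pits the simpler candidate distributions (lognormal, exponential, Pareto) by AIC on synthetic positive-valued data; it illustrates the procedure, not the paper's result.

        import numpy as np
        from scipy import stats

        def aic_for(dist, data, **fixed):
            """Maximum-likelihood fit of a scipy.stats distribution and its AIC."""
            params = dist.fit(data, **fixed)
            k = len(params) - len(fixed)              # fixed parameters are not free
            loglik = np.sum(dist.logpdf(data, *params))
            return 2 * k - 2 * loglik

        rng = np.random.default_rng(5)
        data = rng.lognormal(mean=0.0, sigma=0.8, size=500)   # toy phenotypic variation

        for name, dist in [("lognormal", stats.lognorm),
                           ("exponential", stats.expon),
                           ("Pareto", stats.pareto)]:
            print(f"{name:12s} AIC = {aic_for(dist, data, floc=0):.1f}")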

  15. A spatial scan statistic for survival data based on Weibull distribution.

    PubMed

    Bhatt, Vijaya; Tiwari, Neeraj

    2014-05-20

    The spatial scan statistic has been developed as a geographical cluster detection analysis tool for different types of data sets such as Bernoulli, Poisson, ordinal, normal and exponential. We propose a scan statistic for survival data based on Weibull distribution. It may also be used for other survival distributions, such as exponential, gamma, and log normal. The proposed method is applied on the survival data of tuberculosis patients for the years 2004-2005 in Nainital district of Uttarakhand, India. Simulation studies reveal that the proposed method performs well for different survival distribution functions. Copyright © 2013 John Wiley & Sons, Ltd.

  16. Intermittent fluctuations in the Alcator C-Mod scrape-off layer for ohmic and high confinement mode plasmas

    NASA Astrophysics Data System (ADS)

    Garcia, O. E.; Kube, R.; Theodorsen, A.; LaBombard, B.; Terry, J. L.

    2018-05-01

    Plasma fluctuations in the scrape-off layer of the Alcator C-Mod tokamak in ohmic and high confinement modes have been analyzed using gas puff imaging data. In all cases investigated, the time series of emission from a single spatially resolved view into the gas puff are dominated by large-amplitude bursts, attributed to blob-like filament structures moving radially outwards and poloidally. There is a remarkable similarity of the fluctuation statistics in ohmic plasmas and in edge localized mode-free and enhanced D-alpha high confinement mode plasmas. Conditionally averaged waveforms have a two-sided exponential shape with comparable temporal scales and asymmetry, while the burst amplitudes and the waiting times between them are exponentially distributed. The probability density functions and the frequency power spectral densities are similar for all these confinement modes. These results provide strong evidence in support of a stochastic model describing the plasma fluctuations in the scrape-off layer as a super-position of uncorrelated exponential pulses. Predictions of this model are in excellent agreement with experimental measurements in both ohmic and high confinement mode plasmas. The stochastic model thus provides a valuable tool for predicting fluctuation-induced plasma-wall interactions in magnetically confined fusion plasmas.

  17. Stellar Surface Brightness Profiles of Dwarf Galaxies

    NASA Astrophysics Data System (ADS)

    Herrmann, K. A.

    2014-03-01

    Radial stellar surface brightness profiles of spiral galaxies can be classified into three types: (I) single exponential, or the light falls off with one exponential out to a break radius and then falls off (II) more steeply (“truncated”), or (III) less steeply (“anti-truncated”). Why there are three different radial profile types is still a mystery, including why light falls off as an exponential at all. Profile breaks are also found in dwarf disks, but some dwarf Type IIs are flat or increasing (FI) out to a break before falling off. I have been re-examining the multi-wavelength stellar disk profiles of 141 dwarf galaxies, primarily from Hunter & Elmegreen (2004, 2006). Each dwarf has data in up to 11 wavelength bands: FUV and NUV from GALEX, UBVJHK and Hα from ground-based observations, and 3.6 and 4.5μm from Spitzer. Here I highlight some results from a semi-automatic fitting of this data set including: (1) statistics of break locations and other properties as a function of wavelength and profile type, (2) color trends and radial mass distribution as a function of profile type, and (3) the relationship of the break radius to the kinematics and density profiles of atomic hydrogen gas in the 40 dwarfs of the LITTLE THINGS subsample.

  18. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868
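
    A sketch of the naive continuous-time rescaling whose breakdown at coarse bin widths motivates the corrections above: per-bin spike probabilities are converted to an integrated intensity, interspike intervals are rescaled, mapped to (0, 1), and compared with a uniform distribution by a KS test. The toy Bernoulli spike train is generated from the same probabilities it is tested against, so at this fine discretization the test should pass.

        import numpy as np
        from scipy import stats

        def time_rescaling_ks(spikes, p):
            """Naive continuous-time rescaling for a discrete-time spiking model.
            spikes: 0/1 array per time bin; p: model spike probability per bin."""
            q = -np.log1p(-np.asarray(p, float))      # per-bin integrated intensity
            cum = np.concatenate([[0.0], np.cumsum(q)])
            spike_bins = np.flatnonzero(spikes)
            taus = np.diff(cum[spike_bins + 1])       # rescaled interspike intervals
            u = 1.0 - np.exp(-taus)                   # ~ Uniform(0, 1) if the model fits
            return stats.kstest(u, "uniform")

        rng = np.random.default_rng(6)
        p = np.full(200_000, 0.005)                   # 0.5% spike probability per bin
        spikes = (rng.random(p.size) < p).astype(int)
        print(time_rescaling_ks(spikes, p))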

  19. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.

  20. Theory of Thermal Relaxation of Electrons in Semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadasivam, Sridhar; Chan, Maria K. Y.; Darancet, Pierre

    2017-09-01

    We compute the transient dynamics of phonons in contact with high-energy "hot" charge carriers in 12 polar and non-polar semiconductors, using a first-principles Boltzmann transport framework. For most materials, we find that the decay in electronic temperature departs significantly from a single-exponential model at times ranging from 1 ps to 15 ps after electronic excitation, a phenomenon concomitant with the appearance of non-thermal vibrational modes. We demonstrate that these effects result from the slow thermalization within the phonon subsystem, caused by the large heterogeneity in the timescales of electron-phonon and phonon-phonon interactions in these materials. We propose a generalized 2-temperature model accounting for the phonon thermalization as a limiting step of electron-phonon thermalization, which captures the full thermal relaxation of hot electrons and holes in semiconductors. A direct consequence of our findings is that, for semiconductors, information about the spectral distribution of electron-phonon and phonon-phonon coupling can be extracted from the multi-exponential behavior of the electronic temperature.
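
    For orientation, a sketch of the conventional two-temperature model that the generalized model extends, with constant heat capacities and a single electron-phonon coupling constant (all values are placeholders). In this simplest form the electron-lattice temperature difference decays as a single exponential, which is precisely the behavior reported above to break down.

        import numpy as np
        from scipy.integrate import solve_ivp

        C_E, C_L, G = 0.03, 1.0, 0.5   # electron/lattice heat capacities and coupling (arb. units)

        def two_temperature_model(t, y):
            te, tl = y
            return [-G / C_E * (te - tl), G / C_L * (te - tl)]

        sol = solve_ivp(two_temperature_model, (0.0, 20.0), y0=[2000.0, 300.0], dense_output=True)
        times = np.linspace(0.0, 20.0, 5)
        print(np.round(sol.sol(times).T, 1))   # Te and Tl relaxing toward a common temperature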

  1. Discrete Deterministic and Stochastic Petri Nets

    NASA Technical Reports Server (NTRS)

    Zijal, Robert; Ciardo, Gianfranco

    1996-01-01

    Petri nets augmented with timing specifications gained a wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
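
    As a small illustration of the geometric approximation mentioned above, the snippet below (Python, with assumed rate and step values) discretizes an exponential firing time with rate lambda into steps of width dt; the per-step firing probability 1 - exp(-lambda*dt) gives a geometric step count whose mean time converges to 1/lambda as dt shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, dt = 2.0, 0.01                   # firing rate and discretization step, assumed values
p = 1.0 - np.exp(-lam * dt)           # per-step firing probability
steps = rng.geometric(p, size=100_000)
t = steps * dt                        # discretized firing times
print(t.mean(), 1.0 / lam)            # the geometric mean time approaches 1/lambda as dt -> 0
```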

  2. Exponential blocking-temperature distribution in ferritin extracted from magnetization measurements

    NASA Astrophysics Data System (ADS)

    Lee, T. H.; Choi, K.-Y.; Kim, G.-H.; Suh, B. J.; Jang, Z. H.

    2014-11-01

    We developed a direct method to extract the zero-field zero-temperature anisotropy energy barrier distribution of magnetic particles in the form of a blocking-temperature distribution. The key idea is to modify measurement procedures slightly to make nonequilibrium magnetization calculations (including the time evolution of magnetization) easier. We applied this method to the biomagnetic molecule ferritin and successfully reproduced field-cool magnetization by using the extracted distribution. We find that the resulting distribution is more like an exponential type and that the distribution cannot be correlated simply to the widely known log-normal particle-size distribution. The method also allows us to determine the values of the zero-temperature coercivity and Bloch coefficient, which are in good agreement with those determined from other techniques.

  3. AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Sanjib; Bland-Hawthorn, Joss

    2013-08-20

    An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.

  4. Exponential Boundary Observers for Pressurized Water Pipe

    NASA Astrophysics Data System (ADS)

    Hermine Som, Idellette Judith; Cocquempot, Vincent; Aitouche, Abdel

    2015-11-01

    This paper deals with state estimation on a pressurized water pipe modeled by nonlinear coupled distributed hyperbolic equations for non-conservative laws with three known boundary measures. Our objective is to estimate the fourth boundary variable, which will be useful for leakage detection. Two approaches are studied. Firstly, the distributed hyperbolic equations are discretized through a finite-difference scheme. By using the Lipschitz property of the nonlinear term and a Lyapunov function, the exponential stability of the estimation error is proven by solving Linear Matrix Inequalities (LMIs). Secondly, the distributed hyperbolic system is preserved for state estimation. After state transformations, a Luenberger-like PDE boundary observer based on backstepping mathematical tools is proposed. An exponential Lyapunov function is used to prove the stability of the resulted estimation error. The performance of the two observers are shown on a water pipe prototype simulated example.

  5. Ring-Shaped Microlanes and Chemical Barriers as a Platform for Probing Single-Cell Migration.

    PubMed

    Schreiber, Christoph; Segerer, Felix J; Wagner, Ernst; Roidl, Andreas; Rädler, Joachim O

    2016-05-31

    Quantification and discrimination of pharmaceutical and disease-related effects on cell migration require detailed characterization of single-cell motility. In this context, micropatterned substrates that constrain cells within defined geometries facilitate quantitative readout of locomotion. Here, we study quasi-one-dimensional cell migration in ring-shaped microlanes. We observe bimodal behavior in the form of alternating states of directional migration (run state) and reorientation (rest state). Both states show exponential lifetime distributions with characteristic persistence times, which, together with the cell velocity in the run state, provide a set of parameters that succinctly describe cell motion. By introducing PEGylated barriers of different widths into the lane, we extend this description by quantifying the effects of abrupt changes in substrate chemistry on migrating cells. The transit probability decreases exponentially as a function of barrier width, thus specifying a characteristic penetration depth of the leading lamellipodia. Applying this fingerprint-like characterization of cell motion, we compare different cell lines, and demonstrate that the cancer drug candidate salinomycin affects transit probability and resting time, but not run time or run velocity. Hence, the presented assay allows the assessment of multiple migration-related parameters, permits detailed characterization of cell motility, and has potential applications in cell biology and advanced drug screening.

  6. S-process studies using single and pulsed neutron exposures

    NASA Astrophysics Data System (ADS)

    Beer, H.

    The formation of heavy elements by slow neutron capture (s-process) is investigated. A pulsed neutron irradiation leading to an exponential exposure distribution is dominant for nuclei from A = 90 to 200. For the isotopes from iron to zirconium an additional 'weak' s-process component must be superimposed. Calculations using a single or another pulsed neutron exposure for this component have been carried out in order to reproduce the abundance pattern of the s-only and s-process dominant isotopes. For the adjustment of these calculations to the empirical values, the inclusion of new capture cross section data on Se76 and Y89 and the consideration of the branchings at Ni63, Se79, and Kr85 was important. The combination of an s-process with a single and a pulsed neutron exposure yielded a better representation of empirical abundances than a two component pulsed s-process.

  7. Parameter estimation for the exponential-normal convolution model for background correction of affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
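
    For reference, a minimal sketch of the conditional-expectation step described above is given below (Python). It implements the closed form commonly quoted for E[S | O] under the exponential-plus-normal convolution; the values of mu, sigma, and alpha are placeholders, and how best to estimate them is exactly what the paper examines.

```python
import numpy as np
from scipy.stats import norm

def rma_background_correct(o, mu, sigma, alpha):
    """Posterior mean E[S | O = o] for O = S + B, with S ~ Exp(alpha) and B ~ N(mu, sigma^2)."""
    a = o - mu - sigma**2 * alpha
    num = norm.pdf(a / sigma) - norm.pdf((o - a) / sigma)
    den = norm.cdf(a / sigma) + norm.cdf((o - a) / sigma) - 1.0
    return a + sigma * num / den

pm = np.array([80.0, 150.0, 400.0, 2000.0])                           # example PM intensities
print(rma_background_correct(pm, mu=100.0, sigma=30.0, alpha=0.01))   # assumed parameter values
```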

  8. Kinetics of force recovery following length changes in active skinned single fibres from rabbit psoas muscle

    PubMed Central

    Burton, Kevin; Simmons, Robert M; Sleep, John; Smith, David A

    2006-01-01

    Redevelopment of isometric force following shortening of skeletal muscle is thought to result from a redistribution of cross-bridge states. We varied the initial force and cross-bridge distribution by applying various length-change protocols to active skinned single fibres from rabbit psoas muscle, and observed the effect on the slowest phase of recovery (‘late recovery’) that follows transient changes. In response to step releases that reduced force to near zero (∼8 nm (half sarcomere)−1) or prolonged shortening at high velocity, late recovery was well described by two exponentials of approximately equal amplitude and rate constants of ∼2 s−1 and ∼9 s−1 at 5°C. When a large restretch was applied at the end of rapid shortening, recovery was accelerated by (1) the introduction of a slow falling component that truncated the rise in force, and (2) a relative increase in the contribution of the fast exponential component. The rate of the slow fall was similar to that observed after a small isometric step stretch, with a rate of 0.4–0.8 s−1, and its effects could be reversed by reducing force to near zero immediately after the stretch. Force at the start of late recovery was varied in a series of shortening steps or ramps in order to probe the effect of cross-bridge strain on force redevelopment. The rate constants of the two components fell by 40–50% as initial force was raised to 75–80% of steady isometric force. As initial force increased, the relative contribution of the fast component decreased, and this was associated with a length constant of about 2 nm. The results are consistent with a two-state strain-dependent cross-bridge model. In the model there is a continuous distribution of recovery rate constants, but two-exponential fits show that the fast component results from cross-bridges initially at moderate positive strain and the slow component from cross-bridges at high positive strain. PMID:16497718
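
    A two-exponential description of force recovery like the one above can be fitted with standard least squares. The sketch below (Python) uses synthetic data with rate constants of roughly 2 and 9 s^-1 purely for illustration, not the actual fibre records.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp_recovery(t, a1, k1, a2, k2, f_inf):
    """Force redevelopment as the sum of two exponential components (rates k1, k2 in s^-1)."""
    return f_inf - a1 * np.exp(-k1 * t) - a2 * np.exp(-k2 * t)

rng = np.random.default_rng(10)
t = np.linspace(0.0, 3.0, 300)                                     # seconds
f = two_exp_recovery(t, 0.4, 9.0, 0.4, 2.0, 1.0) + rng.normal(0.0, 0.01, t.size)
popt, _ = curve_fit(two_exp_recovery, t, f, p0=[0.5, 5.0, 0.5, 1.0, 1.0])
print(popt)   # recovered amplitudes, rate constants, and plateau force
```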

  9. Phenomenology of stochastic exponential growth

    NASA Astrophysics Data System (ADS)

    Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya

    2017-06-01

    Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
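
    A quick way to see the distinction drawn above is to integrate the Langevin equation dx = k x dt + sigma x^beta dW for beta = 1 (GBM) and for a fractional beta, and compare the spread of the mean-rescaled distributions at long times. The Euler-Maruyama sketch below (Python) uses illustrative parameter values only.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(x0, k, sigma, beta, dt, n_steps, n_traj):
    """Euler-Maruyama for dx = k*x*dt + sigma*x**beta*dW; beta = 1 is geometric Brownian motion."""
    x = np.full(n_traj, x0, dtype=float)
    for _ in range(n_steps):
        x += k * x * dt + sigma * np.abs(x) ** beta * rng.normal(0.0, np.sqrt(dt), n_traj)
    return x

gbm = simulate(1.0, 1.0, 0.3, 1.0, 1e-3, 5000, 2000)   # linear multiplicative noise
frac = simulate(1.0, 1.0, 0.3, 0.5, 1e-3, 5000, 2000)  # power-law noise with fractional exponent
# Width of the mean-rescaled distributions: GBM keeps broadening, the fractional case saturates.
print(np.std(gbm / gbm.mean()), np.std(frac / frac.mean()))
```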

  10. Cole-Davidson dynamics of simple chain models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dotson, Taylor C.; McCoy, John Dwane; Adolf, Douglas Brian

    2008-10-01

    Rotational relaxation functions of the end-to-end vector of short, freely jointed and freely rotating chains were determined from molecular dynamics simulations. The associated response functions were obtained from the one-sided Fourier transform of the relaxation functions. The Cole-Davidson function was used to fit the response functions with extensive use being made of Cole-Cole plots in the fitting procedure. For the systems studied, the Cole-Davidson function provided remarkably accurate fits [as compared to the transform of the Kohlrausch-Williams-Watts (KWW) function]. The only appreciable deviations from the simulation results were in the high frequency limit and were due to ballistic or free rotation effects. The accuracy of the Cole-Davidson function appears to be the result of the transition in the time domain from stretched exponential behavior at intermediate time to single exponential behavior at long time. Such a transition can be explained in terms of a distribution of relaxation times with a well-defined longest relaxation time. Since the Cole-Davidson distribution has a sharp cutoff in relaxation time (while the KWW function does not), it makes sense that the Cole-Davidson would provide a better frequency-domain description of the associated response function than the KWW function does.

  11. Lithium ion dynamics in Li2S+GeS2+GeO2 glasses studied using (7)Li NMR field-cycling relaxometry and line-shape analysis.

    PubMed

    Gabriel, Jan; Petrov, Oleg V; Kim, Youngsik; Martin, Steve W; Vogel, Michael

    2015-09-01

    We use (7)Li NMR to study the ionic jump motion in ternary 0.5Li2S+0.5[(1-x)GeS2+xGeO2] glassy lithium ion conductors. Exploring the "mixed glass former effect" in this system led to the assumption of a homogeneous and random variation of diffusion barriers. We exploit the fact that, by combining traditional line-shape analysis with novel field-cycling relaxometry, it is possible to measure the spectral density of the ionic jump motion in broad frequency and temperature ranges and, thus, to determine the distribution of activation energies. Two models are employed to parameterize the (7)Li NMR data, namely, the multi-exponential autocorrelation function model and the power-law waiting times model. Careful evaluation of both of these models indicates a broadly inhomogeneous energy landscape for both the single (x=0.0) and the mixed (x=0.1) network former glasses. The multi-exponential autocorrelation function model can be well described by a Gaussian distribution of activation barriers. Applicability of the methods used and their sensitivity to microscopic details of ionic motion are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. 1/f oscillations in a model of moth populations oriented by diffusive pheromones

    NASA Astrophysics Data System (ADS)

    Barbosa, L. A.; Martins, M. L.; Lima, E. R.

    2005-01-01

    An individual-based model for the population dynamics of Spodoptera frugiperda in a homogeneous environment is proposed. The model involves moths feeding plants, mating through an anemotaxis search (i.e., oriented by odor dispersed in a current of air), and dying due to resource competition or at a maximum age. As observed in the laboratory, the females release pheromones at exponentially distributed time intervals, and it is assumed that the ranges of the male flights follow a power-law distribution. Computer simulations of the model reveal the central role of anemotaxis search for the persistence of moth population. Such stationary populations are exponentially distributed in age, exhibit random temporal fluctuations with 1/f spectrum, and self-organize in disordered spatial patterns with long-range correlations. In addition, the model results demonstrate that pest control through pheromone mass trapping is effective only if the amounts of pheromone released by the traps decay much slower than the exponential distribution for calling female.

  13. Stochastic scheduling on a repairable manufacturing system

    NASA Astrophysics Data System (ADS)

    Li, Wei; Cao, Jinhua

    1995-08-01

    In this paper, we consider some stochastic scheduling problems with a set of stochastic jobs on a manufacturing system with a single machine that is subject to multiple breakdowns and repairs. When the machine processing a job fails, the job processing must restart some time later when the machine is repaired. For this typical manufacturing system, we find the optimal policies that minimize the following objective functions: (1) the weighted sum of the completion times; (2) the weighted number of late jobs having constant due dates; (3) the weighted number of late jobs having exponentially distributed random due dates. These results generalize some previous ones.

  14. New pharmacokinetic methods. III. Two simple tests for deep pool effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Browne, T.R.; Greenblatt, D.J.; Schumacher, G.E.

    1990-08-01

    If a portion of administered drug is distributed into a deep peripheral compartment, the drug's actual elimination half-life during the terminal exponential phase of elimination may be longer than determined by a single dose study or a tracer dose study (deep pool effect). Two simple methods of testing for deep pool effect applicable to drugs with either linear or nonlinear pharmacokinetic properties are described. The methods are illustrated with stable isotope labeled (13C15N2) tracer dose studies of phenytoin. No significant (P less than .05) deep pool effect was detected.

  15. A study on some urban bus transport networks

    NASA Astrophysics Data System (ADS)

    Chen, Yong-Zhou; Li, Nan; He, Da-Ren

    2007-03-01

    In this paper, we present empirical investigation results on the urban bus transport networks (BTNs) of four major cities in China. In a BTN, nodes are bus stops. Two nodes are connected by an edge when the stops are serviced by a common bus route. The empirical results show that the degree distributions of BTNs take exponential function forms. Two other statistical properties of BTNs are also considered: the distributions of the so-called “number of stops in a bus route” (represented by S) and “number of bus routes a stop joins” (by R). The distributions of R also show exponential function forms, while the distributions of S follow asymmetric, unimodal functions. To explain these empirical results and attempt to simulate a possible evolution process of BTNs, we introduce a model whose analytical and numerical results agree well with the empirical facts. Finally, we also discuss some other possible evolution cases, where the degree distribution shows a power law or an interpolation between the power law and the exponential decay.

  16. Creep substructure formation in sodium chloride single crystals in the power law and exponential creep regimes

    NASA Technical Reports Server (NTRS)

    Raj, S. V.; Pharr, G. M.

    1989-01-01

    Creep tests conducted on NaCl single crystals in the temperature range from 373 to 1023 K show that true steady state creep is obtained only above 873 K when the ratio of the applied stress to the shear modulus is less than or equal to 0.0001. Under other stress and temperature conditions, corresponding to both power law and exponential creep, the creep rate decreases monotonically with increasing strain. The transition from power law to exponential creep is shown to be associated with increases in the dislocation density, the cell boundary width, and the aspect ratio of the subgrains along the primary slip planes. The relation between dislocation structure and creep behavior is also assessed.

  17. A model of non-Gaussian diffusion in heterogeneous media

    NASA Astrophysics Data System (ADS)

    Lanoiselée, Yann; Grebenkov, Denis S.

    2018-04-01

    Recent progress in single-particle tracking has shown evidence of the non-Gaussian distribution of displacements in living cells, both near the cellular membrane and inside the cytoskeleton. Similar behavior has also been observed in granular materials, turbulent flows, gels and colloidal suspensions, suggesting that this is a general feature of diffusion in complex media. A possible interpretation of this phenomenon is that a tracer explores a medium with spatio-temporal fluctuations which result in local changes of diffusivity. We propose and investigate an ergodic, easily interpretable model, which implements the concept of diffusing diffusivity. Depending on the parameters, the distribution of displacements can be either flat or peaked at small displacements with an exponential tail at large displacements. We show that the distribution converges slowly to a Gaussian one. We calculate statistical properties, derive the asymptotic behavior and discuss some implications and extensions.

  18. Resistance distribution in the hopping percolation model.

    PubMed

    Strelniker, Yakov M; Havlin, Shlomo; Berkovits, Richard; Frydman, Aviad

    2005-07-01

    We study the distribution function P(rho) of the effective resistance rho in two- and three-dimensional random resistor networks of linear size L in the hopping percolation model. In this model each bond has a conductivity taken from an exponential form sigma ∝ exp(-kappa r), where kappa is a measure of disorder and r is a random number, 0 ≤ r ≤ 1. We find that in both the usual strong-disorder regime L/kappa^nu > 1 (not sensitive to removal of any single bond) and the extreme-disorder regime L/kappa^nu < 1 (very sensitive to such a removal) the distribution depends only on L/kappa^nu and can be well approximated by a log-normal function with dispersion b kappa^nu/L, where b is a coefficient which depends on the type of lattice, and nu is the correlation critical exponent.

  19. Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes.

    PubMed

    Uhl, Jonathan T; Pathak, Shivesh; Schorlemmer, Danijel; Liu, Xin; Swindeman, Ryan; Brinkman, Braden A W; LeBlanc, Michael; Tsekenis, Georgios; Friedman, Nir; Behringer, Robert; Denisov, Dmitry; Schall, Peter; Gu, Xiaojun; Wright, Wendelin J; Hufnagel, Todd; Jennings, Andrew; Greer, Julia R; Liaw, P K; Becker, Thorsten; Dresen, Georg; Dahmen, Karin A

    2015-11-17

    Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or "quakes". We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects "tuned critical" behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.

  20. Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes

    PubMed Central

    Uhl, Jonathan T.; Pathak, Shivesh; Schorlemmer, Danijel; Liu, Xin; Swindeman, Ryan; Brinkman, Braden A. W.; LeBlanc, Michael; Tsekenis, Georgios; Friedman, Nir; Behringer, Robert; Denisov, Dmitry; Schall, Peter; Gu, Xiaojun; Wright, Wendelin J.; Hufnagel, Todd; Jennings, Andrew; Greer, Julia R.; Liaw, P. K.; Becker, Thorsten; Dresen, Georg; Dahmen, Karin A.

    2015-01-01

    Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or “quakes”. We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects “tuned critical” behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes. PMID:26572103

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhl, Jonathan T.; Pathak, Shivesh; Schorlemmer, Danijel

    Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or “quakes”. We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects “tuned critical” behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. In conclusion, the results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.
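
    The slip-size statistics described in the three records above can be summarised by fitting a power law with an exponential cutoff, p(s) ∝ s^(-tau) exp(-s/s*), above some minimum size. The sketch below (Python) does this by maximum likelihood on synthetic avalanche sizes; the exponent, cutoff, and minimum size are illustrative values, not the experimental ones.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def neg_log_like(params, s, s_min):
    """Negative log-likelihood of p(s) proportional to s**(-tau) * exp(-s/s_star) for s >= s_min."""
    tau, s_star = params
    if tau <= 0.0 or s_star <= 0.0:
        return np.inf
    norm, _ = quad(lambda x: x ** (-tau) * np.exp(-x / s_star), s_min, np.inf)
    return -(np.sum(-tau * np.log(s) - s / s_star) - s.size * np.log(norm))

# Synthetic "slip sizes": power-law proposal thinned with an exponential acceptance (rejection sampling).
prop = rng.pareto(0.5, 200_000) + 1.0                    # pure power law with exponent 1.5, s_min = 1
s = prop[rng.random(prop.size) < np.exp(-prop / 20.0)]   # keep with probability exp(-s / s_star)

fit = minimize(neg_log_like, x0=[1.2, 10.0], args=(s, 1.0), method="Nelder-Mead")
print(fit.x)   # estimated (tau, s_star), close to (1.5, 20) up to sampling noise
```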

  2. Proceedings of the Third International Workshop on Multistrategy Learning, May 23-25 Harpers Ferry, WV.

    DTIC Science & Technology

    1996-09-16

    The approaches discussed are: adaptive filtering; single exponential smoothing (Brown, 1963); the Box-Jenkins methodology (ARIMA modeling) (Box and Jenkins, 1976); linear exponential smoothing, i.e., Holt's two-parameter approach (Holt et al., 1960); and Winters' three-parameter method (Winters, 1960). However, there are two very crucial disadvantages: the most important point in ARIMA modeling is model identification. As shown in

  3. Efficient full decay inversion of MRS data with a stretched-exponential approximation of the ? distribution

    NASA Astrophysics Data System (ADS)

    Behroozmand, Ahmad A.; Auken, Esben; Fiandaca, Gianluca; Christiansen, Anders Vest; Christensen, Niels B.

    2012-08-01

    We present a new, efficient and accurate forward modelling and inversion scheme for magnetic resonance sounding (MRS) data. MRS, also called surface-nuclear magnetic resonance (surface-NMR), is the only non-invasive geophysical technique that directly detects free water in the subsurface. Based on the physical principle of NMR, protons of the water molecules in the subsurface are excited at a specific frequency, and the superposition of signals from all protons within the excited earth volume is measured to estimate the subsurface water content and other hydrological parameters. In this paper, a new inversion scheme is presented in which the entire data set is used, and multi-exponential behaviour of the NMR signal is approximated by the simple stretched-exponential approach. Compared to the mono-exponential interpretation of the decaying NMR signal, we introduce a single extra parameter, the stretching exponent, which helps describe the porosity in terms of a single relaxation time parameter, and helps to determine the correct initial amplitude and relaxation time of the signal. Moreover, compared to a multi-exponential interpretation of the MRS data, the decay behaviour is approximated with considerably fewer parameters. The forward response is calculated in an efficient numerical manner in terms of magnetic field calculation, discretization and integration schemes, which allows fast computation while maintaining accuracy. A piecewise linear transmitter loop is considered for electromagnetic modelling of conductivities in the layered half-space, providing electromagnetic modelling of arbitrary loop shapes. The decaying signal is integrated over time windows, called gates, which increases the signal-to-noise ratio, particularly at late times, and the data vector is described with a minimum number of samples, that is, gates. The accuracy of the forward response is investigated by comparing an MRS forward response with responses from three other approaches, outlining significant differences between the approaches. Altogether, a full MRS forward response is calculated in about 20 s and scales so that on 10 processors the calculation time is reduced to about 3-4 s. The proposed approach is examined through synthetic data and through a field example, which demonstrate the capability of the scheme. The results of the field example agree well with the information from an on-site borehole.
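
    The stretched-exponential parameterisation used above is easy to prototype. The sketch below (Python) fits V(t) = V0*exp(-(t/T*)^C) to a synthetic gated decay with curve_fit; all amplitudes, times, and the stretching exponent are assumed values for illustration, not field data.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, v0, t_star, c):
    """V(t) = V0 * exp(-(t / T*)**C); C = 1 recovers a mono-exponential decay."""
    return v0 * np.exp(-(t / t_star) ** c)

rng = np.random.default_rng(4)
t = np.linspace(0.005, 0.5, 40)                                   # gate centres in seconds, assumed
v = stretched_exp(t, 100.0, 0.15, 0.7) + rng.normal(0.0, 1.0, t.size)
(v0, t_star, c), _ = curve_fit(stretched_exp, t, v, p0=[v.max(), 0.1, 1.0])
print(v0, t_star, c)   # initial amplitude, relaxation time, stretching exponent
```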

  4. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    NASA Astrophysics Data System (ADS)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both the theoretical derivation and the data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for this distribution form and discussed how the constraints affect the distribution function. It is speculated that, for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0 the distribution cannot be a pure power law but must have an exponential cutoff, which may have been ignored in previous studies.

  5. A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.

    DTIC Science & Technology

    1981-06-01

    Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on...these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that

  6. Giant current fluctuations in an overheated single-electron transistor

    NASA Astrophysics Data System (ADS)

    Laakso, M. A.; Heikkilä, T. T.; Nazarov, Yuli V.

    2010-11-01

    Interplay of cotunneling and single-electron tunneling in a thermally isolated single-electron transistor leads to peculiar overheating effects. In particular, there is an interesting crossover interval where the competition between cotunneling and single-electron tunneling changes to the dominance of the latter. In this interval, the current exhibits anomalous sensitivity to the effective electron temperature of the transistor island and its fluctuations. We present a detailed study of the current and temperature fluctuations at this interesting point. The methods implemented allow for a complete characterization of the distribution of the fluctuating quantities, well beyond the Gaussian approximation. We reveal and explore the parameter range where, for sufficiently small transistor islands, the current fluctuations become gigantic. In this regime, the optimal value of the current, its expectation value, and its standard deviation differ from each other by parametrically large factors. This situation is unique for transport in nanostructures and for electron transport in general. The origin of this spectacular effect is the exponential sensitivity of the current to the fluctuating effective temperature.

  7. How Many Conformations Need To Be Sampled To Obtain Converged QM/MM Energies? The Curse of Exponential Averaging.

    PubMed

    Ryde, Ulf

    2017-11-14

    Combined quantum mechanical and molecular mechanical (QM/MM) calculations are a popular approach to study enzymatic reactions. They are often based on a set of minimized structures obtained from snapshots of a molecular dynamics simulation to include some dynamics of the enzyme. It has been much discussed how the individual energies should be combined to obtain a final estimate of the energy, but the current consensus seems to be to use an exponential average. Then, the question is how many snapshots are needed to reach a reliable estimate of the energy. In this paper, I show that the question can easily be answered if it is assumed that the energies follow a Gaussian distribution. Then, the outcome can be simulated based on a single parameter, σ, the standard deviation of the QM/MM energies from the various snapshots, and the number of required snapshots can be estimated once the desired accuracy and confidence of the result have been specified. Results for various parameters are presented, and it is shown that many more snapshots are required than is normally assumed. The number can be reduced by employing a cumulant approximation to second order. It is shown that most convergence criteria work poorly, owing to the very bad conditioning of the exponential average when σ is large (more than ∼7 kJ/mol), because the energies that contribute most to the exponential average have a very low probability. On the other hand, σ serves as an excellent convergence criterion.
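
    The convergence argument above is easy to reproduce numerically: draw Gaussian energies with standard deviation sigma, form the exponential (Boltzmann-weighted) average, and watch the bias and scatter shrink with the number of snapshots. The sketch below (Python) uses an assumed sigma of 10 kJ/mol and kT at roughly 300 K; for a Gaussian the exact limit is -sigma^2/(2 kT), i.e. the second-order cumulant.

```python
import numpy as np

rng = np.random.default_rng(5)
kT = 2.494          # kJ/mol at ~300 K
sigma = 10.0        # assumed spread of snapshot QM/MM energies, kJ/mol

def exp_average(n):
    """Exponential average -kT*ln<exp(-E/kT)> over n Gaussian energies (mean 0, std sigma)."""
    e = rng.normal(0.0, sigma, n)
    return -kT * np.log(np.mean(np.exp(-e / kT)))

exact = -sigma**2 / (2.0 * kT)                       # second-order cumulant, exact for a Gaussian
for n in (10, 100, 1000, 10_000):
    est = np.array([exp_average(n) for _ in range(200)])
    print(n, round(est.mean(), 1), round(est.std(), 1), "exact:", round(exact, 1))
```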

  8. Analysis of the Chinese air route network as a complex network

    NASA Astrophysics Data System (ADS)

    Cai, Kai-Quan; Zhang, Jun; Du, Wen-Bo; Cao, Xian-Bin

    2012-02-01

    The air route network, which supports all the flight activities of the civil aviation, is the most fundamental infrastructure of air traffic management system. In this paper, we study the Chinese air route network (CARN) within the framework of complex networks. We find that CARN is a geographical network possessing exponential degree distribution, low clustering coefficient, large shortest path length and exponential spatial distance distribution that is obviously different from that of the Chinese airport network (CAN). Besides, via investigating the flight data from 2002 to 2010, we demonstrate that the topology structure of CARN is homogeneous, howbeit the distribution of flight flow on CARN is rather heterogeneous. In addition, the traffic on CARN keeps growing in an exponential form and the increasing speed of west China is remarkably larger than that of east China. Our work will be helpful to better understand Chinese air traffic systems.

  9. Colloquium: Statistical mechanics of money, wealth, and income

    NASA Astrophysics Data System (ADS)

    Yakovenko, Victor M.; Rosser, J. Barkley, Jr.

    2009-10-01

    This Colloquium reviews statistical models for money, wealth, and income distributions developed in the econophysics literature since the late 1990s. By analogy with the Boltzmann-Gibbs distribution of energy in physics, it is shown that the probability distribution of money is exponential for certain classes of models with interacting economic agents. Alternative scenarios are also reviewed. Data analysis of the empirical distributions of wealth and income reveals a two-class distribution. The majority of the population belongs to the lower class, characterized by the exponential (“thermal”) distribution, whereas a small fraction of the population in the upper class is characterized by the power-law (“superthermal”) distribution. The lower part is very stable, stationary in time, whereas the upper part is highly dynamical and out of equilibrium.

  10. Evidence for a scale-limited low-frequency earthquake source process

    NASA Astrophysics Data System (ADS)

    Chestler, S. R.; Creager, K. C.

    2017-04-01

    We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10^10 to 1.9 × 10^12 N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit increase in Mw), we find that for large LFEs the b value is 6, while for small LFEs it is <1. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10^11 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.

  11. Single-exponential activation behavior behind the super-Arrhenius relaxations in glass-forming liquids.

    PubMed

    Wang, Lianwen; Li, Jiangong; Fecht, Hans-Jörg

    2010-11-17

    The reported relaxation time for several typical glass-forming liquids was analyzed by using a kinetic model for liquids which invoked a new kind of atomic cooperativity--thermodynamic cooperativity. The broadly studied 'cooperative length' was recognized as the kinetic cooperativity. Both cooperativities were conveniently quantified from the measured relaxation data. A single-exponential activation behavior was uncovered behind the super-Arrhenius relaxations for the liquids investigated. Hence the mesostructure of these liquids and the atomic mechanism of the glass transition became clearer.

  12. A mechanical model of bacteriophage DNA ejection

    NASA Astrophysics Data System (ADS)

    Arun, Rahul; Ghosal, Sandip

    2017-08-01

    Single molecule experiments on bacteriophages show an exponential scaling for the dependence of mobility on the length of DNA within the capsid. It has been suggested that this could be due to the "capstan mechanism" - the exponential amplification of friction forces that result when a rope is wound around a cylinder as in a ship's capstan. Here we describe a desktop experiment that illustrates the effect. Though our model phage is a million times larger, it exhibits the same scaling observed in single molecule experiments.
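
    The capstan amplification invoked above has a simple closed form: the tension ratio across a rope wrapped around a cylinder grows as exp(mu*theta), with friction coefficient mu and total wrap angle theta. The snippet below (Python) just evaluates this ratio for a few turns; mu = 0.3 is an assumed value.

```python
import numpy as np

def capstan_tension_ratio(mu, n_turns):
    """Capstan equation: T_load / T_hold = exp(mu * theta), theta being the total wrap angle."""
    theta = 2.0 * np.pi * n_turns
    return np.exp(mu * theta)

for n in (1, 2, 3):
    print(n, capstan_tension_ratio(0.3, n))   # friction amplification grows exponentially with wraps
```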

  13. Temporal complexity in emission from Anderson localized lasers

    NASA Astrophysics Data System (ADS)

    Kumar, Randhir; Balasubrahmaniyam, M.; Alee, K. Shadak; Mujumdar, Sushil

    2017-12-01

    Anderson localization lasers exploit resonant cavities formed due to structural disorder. The inherent randomness in the structure of these cavities realizes a probability distribution in all cavity parameters such as quality factors, mode volumes, mode structures, and so on, implying resultant statistical fluctuations in the temporal behavior. Here we provide direct experimental measurements of temporal width distributions of Anderson localization lasing pulses in intrinsically and extrinsically disordered coupled-microresonator arrays. We first illustrate signature exponential decays in the spatial intensity distributions of the lasing modes that quantify their localized character, and then measure the temporal width distributions of the pulsed emission over several configurations. We observe a dependence of temporal widths on the disorder strength, wherein the widths show a single-peaked, left-skewed distribution in extrinsic disorder and a dual-peaked distribution in intrinsic disorder. We propose a model based on coupled rate equations for an emitter and an Anderson cavity with a random mode structure, which gives excellent quantitative and qualitative agreement with the experimental observations. The experimental and theoretical analyses bring to the fore the temporal complexity in Anderson-localization-based lasing systems.

  14. Mortality table construction

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2015-12-01

    Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some well-known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan Mortality Table. For actuarial applications, tables are constructed under different settings such as single decrement, double decrement, and multiple decrement. There exist two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article discusses the statistical approach to mortality table construction. The distributional assumptions are the uniform death distribution (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, they do not use the complete mortality data. Maximum likelihood exploits all available information in mortality estimation. Some MLE equations are complicated and must be solved using numerical methods. The article focuses on single-decrement estimation using moment and maximum likelihood estimation. An extension to double decrement is also introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
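
    Under the constant-force (exponential) assumption mentioned above, the maximum-likelihood estimate of the force of mortality over a unit age interval is simply observed deaths divided by person-years of exposure, and the one-year death probability follows as 1 - exp(-mu). A minimal sketch (Python, with made-up counts) is given below.

```python
import numpy as np

# Single-decrement example under the constant-force (exponential) assumption.
deaths = 18          # deaths observed in the age interval [x, x+1), assumed value
exposure = 950.0     # central exposure: total person-years at risk in the interval, assumed value

mu_hat = deaths / exposure          # MLE of the (constant) force of mortality
q_hat = 1.0 - np.exp(-mu_hat)       # implied one-year mortality rate q_x
print(mu_hat, q_hat)
```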

  15. Characterization of continuously distributed cortical water diffusion rates with a stretched-exponential model.

    PubMed

    Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S

    2003-10-01

    Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm(2) in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
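
    The signal model used above, S(b) = S0*exp(-(b*DDC)^alpha), can be prototyped in a few lines. The sketch below (Python) fits it to a synthetic attenuation curve over the quoted b-value range; all parameter values are assumed for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_dwi(b, s0, ddc, alpha):
    """S(b) = S0 * exp(-(b * DDC)**alpha); alpha = 1 is mono-exponential, alpha < 1 is heterogeneous."""
    return s0 * np.exp(-((b * ddc) ** alpha))

rng = np.random.default_rng(6)
b = np.linspace(500.0, 6500.0, 13)                                # b-values in s/mm^2
s = stretched_dwi(b, 1.0, 0.8e-3, 0.75) + rng.normal(0.0, 0.005, b.size)
popt, _ = curve_fit(stretched_dwi, b, s, p0=[1.0, 1e-3, 1.0])
print(popt)   # recovered (S0, DDC, alpha)
```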

  16. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.

  17. The dynamics of photoinduced defect creation in amorphous chalcogenides: The origin of the stretched exponential function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freitas, R. J.; Shimakawa, K.; Department of Electrical and Electronic Engineering, Gifu University, Gifu 501-1193

    The article discusses the dynamics of photoinduced defect creation (PDC) in amorphous chalcogenides, which is described by the stretched exponential function (SEF), while the well-known photodarkening (PD) and photoinduced volume expansion (PVE) are governed only by the exponential function. It is shown that the exponential distribution of the thermal activation barrier produces the SEF in PDC, suggesting that thermal energy, as well as photon energy, is incorporated in the PDC mechanisms. The differences in dynamics among the three major photoinduced effects (PD, PVE, and PDC) in amorphous chalcogenides are now well understood.

  18. Global exponential stability of bidirectional associative memory neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Song, Qiankun; Cao, Jinde

    2007-05-01

    A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional, employing the homeomorphism theory, M-matrix theory and the inequality (a ≥ 0, b_k ≥ 0, q_k > 0 with …, and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential converging velocity index is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve the earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.

  19. Income inequality in Romania: The exponential-Pareto distribution

    NASA Astrophysics Data System (ADS)

    Oancea, Bogdan; Andrei, Tudorel; Pirjol, Dan

    2017-03-01

    We present a study of the distribution of the gross personal income and income inequality in Romania, using individual tax income data, and both non-parametric and parametric methods. Comparing with official results based on household budget surveys (the Family Budgets Survey and the EU-SILC data), we find that the latter underestimate the income share of the high income region, and the overall income inequality. A parametric study shows that the income distribution is well described by an exponential distribution in the low and middle incomes region, and by a Pareto distribution in the high income region with Pareto coefficient α = 2.53. We note an anomaly in the distribution in the low incomes region (∼9,250 RON), and present a model which explains it in terms of partial income reporting.
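
    A rough version of the two-regime fit described above is sketched below (Python): incomes below an assumed crossover are summarised by an exponential "temperature", and the tail exponent is obtained from the standard maximum-likelihood (Hill) estimator. The synthetic data, crossover, and exponent are illustrative only, not the Romanian tax records.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic incomes: exponential body plus a Pareto tail (proportions and scales are illustrative).
body = rng.exponential(scale=2000.0, size=95_000)
tail = (rng.pareto(2.5, 5_000) + 1.0) * 8000.0
income = np.concatenate([body, tail])

threshold = 8000.0                                          # assumed crossover between the two regimes
low, high = income[income <= threshold], income[income > threshold]

temperature = low.mean()                                    # crude exponential-scale ("temperature") estimate
alpha_hat = high.size / np.sum(np.log(high / threshold))    # Hill / MLE estimate of the Pareto exponent
print(temperature, alpha_hat)
```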

  20. On the q-type distributions

    NASA Astrophysics Data System (ADS)

    Nadarajah, Saralees; Kotz, Samuel

    2007-04-01

    Various q-type distributions have appeared in the physics literature in recent years, see e.g. L.C. Malacarne, R.S. Mendes, E.K. Lenzi, q-exponential distribution in urban agglomeration, Phys. Rev. E 65 (2002) 017106; S.M.D. Queiros, On a possible dynamical scenario leading to a generalised Gamma distribution, xxx.lanl.gov-physics/0411111; U.M.S. Costa, V.N. Freire, L.C. Malacarne, R.S. Mendes, S. Picoli Jr., E.A. de Vasconcelos, E.F. da Silva Jr., An improved description of the dielectric breakdown in oxides based on a generalized Weibull distribution, Physica A 361 (2006) 215; S. Picoli Jr., R.S. Mendes, L.C. Malacarne, q-exponential, Weibull, and q-Weibull distributions: an empirical analysis, Physica A 324 (2003) 678-688; A.M.C. de Souza, C. Tsallis, Student's t- and r-distributions: unified derivation from an entropic variational principle, Physica A 236 (1997) 52-57. It is pointed out in the paper that many of these are the same as, or particular cases of, distributions already known in the statistics literature. Several of these statistical distributions are discussed and references provided. We feel that this paper could be of assistance for modeling problems of the type considered in the works cited above and others.

  1. Role of the locus coeruleus in the emergence of power law wake bouts in a model of the brainstem sleep-wake system through early infancy.

    PubMed

    Patel, Mainak; Rangan, Aaditya

    2017-08-07

    Infant rats randomly cycle between the sleeping and waking states, which are tightly correlated with the activity of mutually inhibitory brainstem sleep and wake populations. Bouts of sleep and wakefulness are random; from P2-P10, sleep and wake bout lengths are exponentially distributed with increasing means, while during P10-P21, the sleep bout distribution remains exponential while the distribution of wake bouts gradually transforms to power law. The locus coeruleus (LC), via an undeciphered interaction with sleep and wake populations, has been shown experimentally to be responsible for the exponential to power law transition. Concurrently during P10-P21, the LC undergoes striking physiological changes - the LC exhibits strong global 0.3 Hz oscillations up to P10, but the oscillation frequency gradually rises and synchrony diminishes from P10-P21, with oscillations and synchrony vanishing at P21 and beyond. In this work, we construct a biologically plausible Wilson Cowan-style model consisting of the LC along with sleep and wake populations. We show that external noise and strong reciprocal inhibition can lead to switching between sleep and wake populations and exponentially distributed sleep and wake bout durations as during P2-P10, with the parameters of inhibition between the sleep and wake populations controlling mean bout lengths. Furthermore, we show that the changing physiology of the LC from P10-P21, coupled with reciprocal excitation between the LC and wake population, can explain the shift from exponential to power law of the wake bout distribution. To our knowledge, this is the first study that proposes a plausible biological mechanism, which incorporates the known changing physiology of the LC, for tying the developing sleep-wake circuit and its interaction with the LC to the transformation of sleep and wake bout dynamics from P2-P21. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Stellar Surface Brightness Profiles of Dwarf Galaxies

    NASA Astrophysics Data System (ADS)

    Herrmann, Kimberly A.; LITTLE THINGS Team

    2012-01-01

    Radial stellar surface brightness profiles of spiral galaxies can be classified into three types: (I) single exponential, (II) truncated: the light falls off with one exponential out to a break radius and then falls off more steeply, and (III) anti-truncated: the light falls off with one exponential out to a break radius and then falls off less steeply. Stellar surface brightness profile breaks are also found in dwarf disk galaxies, but with an additional category: (FI) flat-inside: the light is roughly constant or increasing and then falls off beyond a break. We have been re-examining the multi-wavelength stellar disk profiles of 141 dwarf galaxies, primarily from Hunter & Elmegreen (2006, 2004). Each dwarf has data in up to 11 wavelength bands: FUV and NUV from GALEX, UBVJHK and H-alpha from ground-based observations, and 3.6 and 4.5 microns from Spitzer. In this talk, I will highlight results from a semi-automatic fitting of this data set, including: (1) statistics of break locations and other properties as a function of wavelength and profile type, (2) color trends and radial mass distribution as a function of profile type, and (3) the relationship of the break radius to the kinematics and density profiles of atomic hydrogen gas in the 41 dwarfs of the LITTLE THINGS subsample. We gratefully acknowledge funding for this research from the National Science Foundation (AST-0707563).
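
    Break finding of the kind described above usually amounts to fitting a piecewise (broken) exponential to the surface brightness profile. The sketch below (Python) fits a Type II-style profile in magnitude units (1.0857 = 2.5/ln 10 converts an exponential scale length into mag/arcsec^2 per unit radius); the radii, scale lengths, and break location are invented for illustration, not LITTLE THINGS data.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_exponential(r, mu0, h_in, h_out, r_break):
    """Surface brightness in mag/arcsec^2: scale length h_in inside r_break, h_out outside."""
    mu = mu0 + 1.0857 * r / h_in
    mu_break = mu0 + 1.0857 * r_break / h_in
    outer = r > r_break
    mu[outer] = mu_break + 1.0857 * (r[outer] - r_break) / h_out
    return mu

rng = np.random.default_rng(8)
r = np.linspace(0.0, 5.0, 60)                                     # radius in kpc, illustrative
mu_obs = broken_exponential(r, 21.0, 1.2, 0.6, 2.5) + rng.normal(0.0, 0.05, r.size)
popt, _ = curve_fit(broken_exponential, r, mu_obs, p0=[21.0, 1.0, 1.0, 2.0])
print(popt)   # recovered (mu0, h_in, h_out, r_break)
```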

  3. Vibronic relaxation dynamics of o-dichlorobenzene in its lowest excited singlet state

    NASA Astrophysics Data System (ADS)

    Liu, Benkang; Zhao, Haiyan; Lin, Xiang; Li, Xinxin; Gao, Mengmeng; Wang, Li; Wang, Wei

    2018-01-01

    Vibronic dynamics of o-dichlorobenzene in its lowest excited singlet state, S1, is investigated in real time by using the femtosecond pump-probe method, combined with time-of-flight mass spectroscopy and the photoelectron velocity mapping technique. Relaxation processes for excitation in the range of 276-252 nm can be fitted by a single-exponential decay model, while for wavelengths shorter than 252 nm a two-exponential decay model must be adopted to simulate the transient profiles. Lifetime constants of the vibrationally excited S1 states change from 651 ± 10 ps for 276 nm excitation to 61 ± 1 ps for 242 nm excitation. Both the internal conversion from the S1 to the highly vibrationally excited ground state S0 and the intersystem crossing from the S1 to the triplet state are supposed to play important roles in the de-excitation processes. An exponential fit of the de-excitation rates versus excitation energy implies that the de-excitation process starts from the highly vibrationally excited S0 state, which is validated by probing the relaxation following photoexcitation at 281 nm, below the S1 origin. Time-dependent photoelectron kinetic energy distributions have been obtained experimentally. As the excitation wavelength changes from 276 nm to 242 nm, different cationic vibronic vibrations can be populated, determined by the Franck-Condon factors between the strongly geometry-distorted excited singlet states and the final cationic states.

  4. Wildfires in Siberian Mountain Forest

    NASA Astrophysics Data System (ADS)

    Kharuk, V.; Ponomarev, E. I.; Antamoshkina, O.

    2017-12-01

    The annual burned area in Russia has been estimated at 0.55 to 20 Mha, with more than 70% occurring in Siberia. We analyzed the distribution of Siberian wildfires with respect to elevation, slope steepness, and slope exposure; the temporal dynamics and latitudinal range of wildfires were also analyzed. We used daily thermal anomalies derived from the NOAA/AVHRR and Terra/MODIS satellites (1990-2016). Fire return intervals (FRI) were calculated from dendrochronological analysis of samples taken from trees with burn marks. The spatial distribution of wildfires depends on topographic features: relative burned area increases with elevation up to about 1100 m and decreases above that. Wildfire frequency decreases exponentially along the lowland-to-highland transition. Burned area increases with slope steepness up to 5-10°. Fire return intervals on south-facing slopes are about 30% longer than on north-facing slopes. Wildfire re-occurrence decreases exponentially: 90% of burns were caused by single fires, 8.5% by double fires, 1% of the territory burned three times, and about 0.05% burned four times (observation period: 75 yr). Wildfire area and number, as well as FRI, also depend on latitude: relative burned area increases exponentially in the northward direction, whereas relative fire number decreases exponentially. FRI increases northward, from 80 years at 62°N to 200 years at the Arctic Circle and 300 years at the northern limit of closed forests (71°N and beyond). Fire frequency, fire danger period, and FRI are strongly correlated with incoming solar radiation (r = 0.81-0.95). In the 21st century, a positive trend in wildfire number and area has been observed in mountain areas throughout Siberia. Burned area and number of fires in Siberia have increased significantly since the 1990s (R² = 0.47 and R² = 0.69, respectively), and this increase is correlated with rising air temperatures and increasing climate aridity. However, wildfires are essential for supporting the reforestation of fire-resistant species (e.g., Larix sibirica, L. dahurica, and Pinus silvestris) and their competition with non-fire-resistant species. This work was supported by the Russian Foundation for Basic Research, the Government of the Krasnoyarsk krai, and the Krasnoyarsk Fund for Support of Scientific and Technological Activities (N 17-41-240475).

  5. Statistical mechanics of money and income

    NASA Astrophysics Data System (ADS)

    Dragulescu, Adrian; Yakovenko, Victor

    2001-03-01

    Money: In a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money will assume the exponential Boltzmann-Gibbs form characterized by an effective temperature. We demonstrate how the Boltzmann-Gibbs distribution emerges in computer simulations of economic models. We discuss thermal machines, the role of debt, and models with broken time-reversal symmetry for which the Boltzmann-Gibbs law does not hold. Reference: A. Dragulescu and V. M. Yakovenko, "Statistical mechanics of money", Eur. Phys. J. B 17, 723-729 (2000), [cond-mat/0001432]. Income: Using tax and census data, we demonstrate that the distribution of individual income in the United States is exponential. Our calculated Lorenz curve without fitting parameters and Gini coefficient 1/2 agree well with the data. We derive the distribution function of income for families with two earners and show that it also agrees well with the data. The family data for the period 1947-1994 fit the Lorenz curve and Gini coefficient 3/8=0.375 calculated for two-earner families. Reference: A. Dragulescu and V. M. Yakovenko, "Evidence for the exponential distribution of income in the USA", cond-mat/0008305.
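
    The following sketch is a toy version of the kind of exchange simulation referred to above, under simple assumptions (random pairwise transfers of a fixed amount, no debt). After many exchanges the empirical money distribution approaches the Boltzmann-Gibbs form P(m) ∝ exp(-m/T), with T equal to the mean money per agent. The agent count, transfer size, and number of steps are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
n_agents, m0, dm = 2000, 100.0, 1.0
money = np.full(n_agents, m0)

# Random pairwise exchanges of a fixed amount dm; transfers that would drive
# the payer below zero are rejected (no debt allowed).
for _ in range(1_000_000):
    i, j = rng.integers(0, n_agents, size=2)
    if i != j and money[i] >= dm:
        money[i] -= dm
        money[j] += dm

# Compare the empirical distribution with the Boltzmann-Gibbs form
# P(m) ~ exp(-m/T), where the "temperature" T is the average money per agent.
T = money.mean()
hist, edges = np.histogram(money, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("T =", T)
for c, h in zip(centers[:5], hist[:5]):
    print(f"m={c:6.1f}  empirical={h:.4f}  exp model={np.exp(-c / T) / T:.4f}")
```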

  6. Heavy tailed bacterial motor switching statistics define macroscopic transport properties during upstream contamination by E. coli

    NASA Astrophysics Data System (ADS)

    Figueroa-Morales, N.; Rivera, A.; Altshuler, E.; Darnige, T.; Douarche, C.; Soto, R.; Lindner, A.; Clément, E.

    The motility of E. coli bacteria is described as a run-and-tumble process. Changes of direction correspond to a switch in the flagellar motor rotation. The run time distribution is described as an exponential decay with a characteristic time close to 1 s. Remarkably, it has been demonstrated that the generic distribution of run times is not exponential but a heavy-tailed power-law decay, which is at odds with the motility findings. We investigate the consequences of the motor statistics for macroscopic bacterial transport. During upstream contamination processes in very confined channels, we have identified very long contamination tongues. Using a stochastic model in which bacterial dwelling times on the surfaces are related to the run times, we are able to reproduce qualitatively and quantitatively the evolution of the contamination profiles when the power-law run time distribution is considered. However, the model fails to reproduce the qualitative dynamics when the classical exponential run-and-tumble distribution is considered. Moreover, we have corroborated the existence of a power-law run time distribution by means of 3D Lagrangian tracking. We then argue that the macroscopic transport of bacteria is essentially determined by the motor rotation statistics.

  7. Redshift data and statistical inference

    NASA Technical Reports Server (NTRS)

    Newman, William I.; Haynes, Martha P.; Terzian, Yervant

    1994-01-01

    Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals or bins than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimate of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.

  8. Human mobility in space from three modes of public transportation

    NASA Astrophysics Data System (ADS)

    Jiang, Shixiong; Guan, Wei; Zhang, Wenyi; Chen, Xu; Yang, Liu

    2017-10-01

    Human mobility patterns have drawn much attention from researchers for decades, given their importance for urban planning and traffic management. In this study, taxi GPS trajectories and smart card transaction data for the subway and bus systems in Beijing are used to model human mobility in space. The original datasets are cleaned and processed to obtain the displacement of each trip from its origin and destination locations. Then, the Akaike information criterion is adopted to select the best-fitting distribution for each mode from a set of candidates. The results indicate that displacements of taxi trips follow the exponential distribution. The exponential distribution also fits displacements of bus trips well, although the exponents of the two modes are significantly different. Displacements of subway trips show distinct characteristics and are well fitted by the gamma distribution. Human mobility thus clearly differs across modes. To explore overall human mobility, the three datasets are combined into a fusion dataset according to the annual ridership proportions. Finally, the fusion displacements follow a power-law distribution with an exponential cutoff. It is innovative to combine different transportation modes to model human mobility in the city.
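
    A hedged sketch of AIC-based screening of candidate distributions for trip displacements, using synthetic data and a hypothetical candidate set (exponential, gamma, lognormal). The maximum-likelihood fits and AIC bookkeeping follow standard practice, not necessarily the exact procedure of the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
displacements = rng.gamma(shape=1.8, scale=3.0, size=5000)  # km, synthetic

candidates = {
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
}

for name, dist in candidates.items():
    # Fit by maximum likelihood; fix loc=0 so only shape/scale parameters vary.
    params = dist.fit(displacements, floc=0)
    loglik = np.sum(dist.logpdf(displacements, *params))
    k = len(params) - 1            # loc is fixed, not a free parameter
    aic = 2 * k - 2 * loglik
    print(f"{name:12s} AIC = {aic:.1f}")
```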

  9. Voter model with non-Poissonian interevent intervals

    NASA Astrophysics Data System (ADS)

    Takaguchi, Taro; Masuda, Naoki

    2011-09-01

    Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.

  10. Non-Poissonian Distribution of Tsunami Waiting Times

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2007-12-01

    Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from the exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog, γ=0.63-0.67 [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo in comparison to the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution that is the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicates that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source. Triggered seismogenic tsunamis do not necessarily originate from the same fault zone, however. For example, subduction-thrust and outer-rise earthquake pairs are evident, such as the November 2006 and January 2007 Kuril Islands tsunamigenic pair. Because of variations in tsunami source parameters, such as water depth above the source, triggered tsunami events with short waiting times are not systematically smaller than the primary tsunami.

  11. Periodic bidirectional associative memory neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Chen, Anping; Huang, Lihong; Liu, Zhigang; Cao, Jinde

    2006-05-01

    Some sufficient conditions are obtained for the existence and global exponential stability of a periodic solution to the general bidirectional associative memory (BAM) neural networks with distributed delays by using the continuation theorem of Mawhin's coincidence degree theory, the Lyapunov functional method, and Young's inequality. These results are helpful for designing a globally exponentially stable and periodic oscillatory BAM neural network, and the conditions can be easily verified and applied in practice. An example is also given to illustrate our results.

  12. Global exponential stability of positive periodic solution of the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays.

    PubMed

    Zhao, Kaihong

    2018-12-01

    In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of a positive periodic solution is proved by employing the fixed point theorem on cones. By constructing an appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.

  13. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.

  14. A mathematical model for evolution and SETI.

    PubMed

    Maccone, Claudio

    2011-12-01

    Darwinian evolution theory may be regarded as a part of SETI theory in that the factor f(l) in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we firstly provide a statistical generalization of the Drake equation where the factor f(l) is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of Statistics, stating that the product of a number of independent random variables whose probability densities are unknown and independent of each other approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions (b-lognormals) constrained between the time axis and the exponential growth curve. Finally, since each b-lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian Evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.

  15. Modelling Evolution and SETI Mathematically

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2012-05-01

    Darwinian evolution theory may be regarded as a part of SETI theory in that the factor fl in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we firstly provide a statistical generalization of the Drake equation where the factor fl is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of Statistics, stating that the product of a number of independent random variables whose probability densities are unknown and independent of each other approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions constrained between the time axis and the exponential growth curve. Finally, since each lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian Evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.

  16. A Mathematical Model for Evolution and SETI

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-12-01

    Darwinian evolution theory may be regarded as a part of SETI theory in that the factor fl in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we firstly provide a statistical generalization of the Drake equation where the factor fl is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of Statistics, stating that the product of a number of independent random variables whose probability densities are unknown and independent of each other approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions (b-lognormals) constrained between the time axis and the exponential growth curve. Finally, since each b-lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian Evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.

  17. Accounting for inherent variability of growth in microbial risk assessment.

    PubMed

    Marks, H M; Coleman, M E

    2005-04-15

    Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks that occur when there are small numbers of cells, stochastic models of growth are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial small number of cells assumed to be transitioning from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where it is assumed that cells pass through a lag phase before entering the exponential phase of growth; and parallel, where it is assumed that the lag and exponential phases develop in parallel. The model is based on first determining the distribution of the time when growth commences, and then modelling the conditional distribution of the number of cells. For the latter distribution, it is found that a Weibull distribution provides a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.

  18. Fisher's method of combining dependent statistics using generalizations of the gamma distribution with applications to genetic pleiotropic associations.

    PubMed

    Li, Qizhai; Hu, Jiyuan; Ding, Juan; Zheng, Gang

    2014-04-01

    A classical approach to combining independent test statistics is Fisher's combination of p-values, which follows the χ² distribution. When the test statistics are dependent, the gamma distribution (GD) is commonly used for the Fisher's combination test (FCT). We propose to use two generalizations of the GD: the generalized and the exponentiated GDs. We study some properties of mis-using the GD for the FCT to combine dependent statistics when one of the two proposed distributions is true. Our results show that both generalizations have better control of type I error rates than the GD, which tends to have inflated type I error rates at more extreme tails. In practice, common model selection criteria (e.g. Akaike information criterion/Bayesian information criterion) can be used to help select a better distribution to use for the FCT. A simple strategy for applying the two generalizations of the GD in genome-wide association studies is discussed. Applications of the results to genetic pleiotropic associations are described, where multiple traits are tested for association with a single marker.
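
    The sketch below shows Fisher's combination statistic with a gamma reference distribution. Under independence, T = -2 Σ ln p_i follows a χ² distribution with 2k degrees of freedom, i.e. a gamma with shape k and scale 2; to mimic dependence, the example moment-matches a gamma after an assumed variance inflation factor c. The p-values and c are hypothetical, and this is not the authors' generalized or exponentiated GD procedure.

```python
import numpy as np
from scipy import stats

p_values = np.array([0.021, 0.048, 0.160, 0.008])   # hypothetical per-test p-values
T = -2.0 * np.sum(np.log(p_values))                  # Fisher's combination statistic
k = p_values.size

# Independent case: T ~ chi^2 with 2k df, i.e. Gamma(shape=k, scale=2).
p_indep = stats.gamma.sf(T, a=k, scale=2.0)

# Dependent case (illustrative): inflate the variance of T by an assumed
# factor c > 1 and moment-match a gamma reference distribution.
c = 1.5                                  # assumed variance inflation from correlation
mean_T, var_T = 2.0 * k, 4.0 * k * c
shape, scale = mean_T**2 / var_T, var_T / mean_T
p_dep = stats.gamma.sf(T, a=shape, scale=scale)

print(f"T = {T:.2f}, p (independent) = {p_indep:.4f}, p (gamma, dependent) = {p_dep:.4f}")
```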

  19. Graphical analysis for gel morphology II. New mathematical approach for stretched exponential function with β>1

    NASA Astrophysics Data System (ADS)

    Hashimoto, Chihiro; Panizza, Pascal; Rouch, Jacques; Ushiki, Hideharu

    2005-10-01

    A new analytical concept is applied to the kinetics of the shrinking process of poly(N-isopropylacrylamide) (PNIPA) gels. When PNIPA gels are put into hot water above the critical temperature, two-step shrinking is observed, and the secondary shrinking of the gels is fitted well by a stretched exponential function. The exponent β characterizing the stretched exponential is always higher than one, although there are few analytical concepts for the stretched exponential function with β>1. As a new interpretation of this function, we propose a superposition of step (Heaviside) functions, and a new distribution function of characteristic times is deduced.
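
    A minimal sketch of fitting a stretched (here compressed, β > 1) exponential to a synthetic secondary-shrinking curve; the functional form, data, and starting values are illustrative only and do not reproduce the gel measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, A, tau, beta):
    """Kohlrausch-type relaxation; beta > 1 gives a 'compressed', sigmoidal decay."""
    return A * np.exp(-(t / tau) ** beta)

# Synthetic secondary-shrinking data (normalized gel volume vs time).
t = np.linspace(0.1, 50.0, 80)
rng = np.random.default_rng(5)
volume = stretched_exp(t, 1.0, 20.0, 1.8) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(stretched_exp, t, volume, p0=[1.0, 10.0, 1.0],
                    bounds=([0.0, 0.1, 0.2], [2.0, 200.0, 5.0]))
A_fit, tau_fit, beta_fit = popt
print(f"A = {A_fit:.3f}, tau = {tau_fit:.2f}, beta = {beta_fit:.2f}")
```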

  20. Chronology of Postglacial Eruptive Activity and Calculation of Eruption Probabilities for Medicine Lake Volcano, Northern California

    USGS Publications Warehouse

    Nathenson, Manuel; Donnelly-Nolan, Julie M.; Champion, Duane E.; Lowenstern, Jacob B.

    2007-01-01

    Medicine Lake volcano has had 4 eruptive episodes in its postglacial history (since 13,000 years ago) comprising 16 eruptions. Time intervals between events within the episodes are relatively short, whereas time intervals between the episodes are much longer. An updated radiocarbon chronology for these eruptions is presented that uses paleomagnetic data to constrain the choice of calibrated ages. This chronology is used with exponential, Weibull, and mixed-exponential probability distributions to model the data for time intervals between eruptions. The mixed exponential distribution is the best match to the data and provides estimates for the conditional probability of a future eruption given the time since the last eruption. The probability of an eruption at Medicine Lake volcano in the next year from today is 0.00028.
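
    The sketch below illustrates how a conditional eruption probability can be computed from an interevent-time model: for a mixture of exponentials with survival function S(t), the probability of an event in the next Δt given quiescence of length t is 1 - S(t+Δt)/S(t). The mixture weights and mean intervals used here are hypothetical, not the published Medicine Lake parameters.

```python
import numpy as np

def survival_mixed_exp(t, weights, rates):
    """Survival function of a mixture of exponentials: S(t) = sum_i w_i exp(-lambda_i t)."""
    w, lam = np.asarray(weights), np.asarray(rates)
    return np.sum(w * np.exp(-lam * t))

def conditional_prob(t_since, dt, weights, rates):
    """P(event in (t, t+dt] | no event up to t) = 1 - S(t+dt)/S(t)."""
    s_t = survival_mixed_exp(t_since, weights, rates)
    s_t_dt = survival_mixed_exp(t_since + dt, weights, rates)
    return 1.0 - s_t_dt / s_t

# Hypothetical parameters: short intervals within episodes (mean 100 yr) and
# long intervals between episodes (mean 4000 yr).
weights, rates = [0.7, 0.3], [1.0 / 100.0, 1.0 / 4000.0]

for t_since in (50, 500, 3000):   # years since the last eruption
    p = conditional_prob(t_since, 1.0, weights, rates)
    print(f"time since last eruption {t_since:5d} yr -> P(eruption next year) = {p:.5f}")
```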

  1. Gravitational Effects on Closed-Cellular-Foam Microstructure

    NASA Technical Reports Server (NTRS)

    Noever, David A.; Cronise, Raymond J.; Wessling, Francis C.; McMannus, Samuel P.; Mathews, John; Patel, Darayas

    1996-01-01

    Polyurethane foam has been produced in low gravity for the first time. The cause and distribution of different void or pore sizes are elucidated from direct comparison of unit-gravity and low-gravity samples. Low gravity is found to increase the pore roundness by 17% and reduce the void size by 50%. The standard deviation for pores becomes narrower (a more homogeneous foam is produced) in low gravity. Both a Gaussian and a Weibull model fail to describe the statistical distribution of void areas, and hence the governing dynamics do not combine small voids in either a uniform or a dependent fashion to make larger voids. Instead, the void areas follow an exponential law, which effectively randomizes the production of void sizes in a nondependent fashion consistent more with single nucleation than with multiple or combining events.

  2. Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.; Forys, John W., Jr.

    1986-01-01

    Circulation time-series data from 50 midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data and methodologies and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
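
    A minimal sketch of the two recommended methods, assuming a short hypothetical circulation series and an illustrative smoothing constant α: simple (single) exponential smoothing and Brown's one-parameter linear (double) exponential smoothing.

```python
import numpy as np

def single_exponential_smoothing(y, alpha):
    """Simple exponential smoothing; the forecast for t+1 is the level at t."""
    level = y[0]
    for value in y[1:]:
        level = alpha * value + (1.0 - alpha) * level
    return level

def brown_linear_smoothing(y, alpha, horizon=1):
    """Brown's one-parameter double exponential smoothing with a trend estimate."""
    s1 = s2 = y[0]
    for value in y[1:]:
        s1 = alpha * value + (1.0 - alpha) * s1        # first smoothing
        s2 = alpha * s1 + (1.0 - alpha) * s2           # second smoothing
    level = 2.0 * s1 - s2
    trend = alpha / (1.0 - alpha) * (s1 - s2)
    return level + horizon * trend

circulation = np.array([1200, 1240, 1215, 1300, 1350, 1330, 1420, 1460], float)
print("SES forecast:    ", round(single_exponential_smoothing(circulation, 0.3), 1))
print("Brown's forecast:", round(brown_linear_smoothing(circulation, 0.3), 1))
```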

  3. Spectral Modeling of the EGRET 3EG Gamma Ray Sources Near the Galactic Plane

    NASA Technical Reports Server (NTRS)

    Bertsch, D. L.; Hartman, R. C.; Hunter, S. D.; Thompson, D. J.; Lin, Y. C.; Kniffen, D. A.; Kanbach, G.; Mayer-Hasselwander, H. A.; Reimer, O.; Sreekumar, P.

    1999-01-01

    The third EGRET catalog lists 84 sources within 10 deg of the Galactic Plane. Five of these are well-known spin-powered pulsars, 2 and possibly 3 others are blazars, and the remaining 74 are classified as unidentified, although 6 of these are likely to be artifacts of nearby strong sources. Several of the remaining 68 unidentified sources have been noted as having positional agreement with supernova remnants and OB associations. Others may be radio-quiet pulsars like Geminga, and still others may belong to a totally new class of sources. The question of the energy spectral distributions of these sources is an important clue to their identification. In this paper, the spectra of the sources within 10 deg of the Galactic Plane are fit with three different functional forms: a single power law, two power laws, and a power law with an exponential cutoff. Where possible, the best fit is selected with statistical tests. Twelve sources, and possibly an additional 5, are found to have spectra that are fit by a breaking power law or by the power law with exponential cutoff function.

  4. Transcription closed and open complex dynamics studies reveal balance between genetic determinants and co-factors

    NASA Astrophysics Data System (ADS)

    Sala, Adrien; Shoaib, Muhammad; Anufrieva, Olga; Mutharasu, Gnanavel; Jahan Hoque, Rawnak; Yli-Harja, Olli; Kandhavelu, Meenakshisundaram

    2015-05-01

    In E. coli, promoter closed and open complexes are key steps in transcription initiation, where magnesium-dependent RNA polymerase catalyzes RNA synthesis. However, the exact mechanism of initiation remains to be fully elucidated. Here, using single mRNA detection and dual reporter studies, we show that increased intracellular magnesium concentration affects Plac initiation complex formation resulting in a highly dynamic process over the cell growth phases. Mg2+ regulates transcription transition, which modulates bimodality of mRNA distribution in the exponential phase. We reveal that Mg2+ regulates the size and frequency of the mRNA burst by changing the open complex duration. Moreover, increasing magnesium concentration leads to higher intrinsic and extrinsic noise in the exponential phase. RNAP-Mg2+ interaction simulation reveals critical movements creating a shorter contact distance between aspartic acid residues and Nucleotide Triphosphate residues and increasing electrostatic charges in the active site. Our findings provide unique biophysical insights into the balanced mechanism of genetic determinants and magnesium ion in transcription initiation regulation during cell growth.

  5. Analytical model of coincidence resolving time in TOF-PET

    NASA Astrophysics Data System (ADS)

    Wieczorek, H.; Thon, A.; Dey, T.; Khanin, V.; Rodnyi, P.

    2016-06-01

    The coincidence resolving time (CRT) of scintillation detectors is the parameter determining noise reduction in time-of-flight PET. We derive an analytical CRT model based on the statistical distribution of photons for two different prototype scintillators. For the first one, characterized by single exponential decay, CRT is proportional to the decay time and inversely proportional to the number of photons, with a square root dependence on the trigger level. For the second scintillator prototype, characterized by exponential rise and decay, CRT is proportional to the square root of the product of rise time and decay time divided by the doubled number of photons, and it is nearly independent of the trigger level. This theory is verified by measurements of scintillation time constants, light yield and CRT on scintillator sticks. Trapping effects are taken into account by defining an effective decay time. We show that in terms of signal-to-noise ratio, CRT is as important as patient dose, imaging time or PET system sensitivity. The noise reduction effect of better timing resolution is verified and visualized by Monte Carlo simulation of a NEMA image quality phantom.

  6. Min and Max Exponential Extreme Interval Values and Statistics

    ERIC Educational Resources Information Center

    Jance, Marsha; Thomopoulos, Nick

    2009-01-01

    The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g_a is defined as a…
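
    For unit-rate exponential variables the extreme expectations have simple closed forms, E[min] = 1/n and E[max] = H_n (the n-th harmonic number); the sketch below checks these against simulation for a few sample sizes. It is an illustration of the setting only, not a reproduction of the tabulated extreme interval values.

```python
import numpy as np

rng = np.random.default_rng(6)

for n in (5, 10, 50):
    samples = rng.exponential(scale=1.0, size=(100_000, n))
    min_sim, max_sim = samples.min(axis=1).mean(), samples.max(axis=1).mean()
    # Exact values for unit-rate exponentials:
    #   E[min] = 1/n,  E[max] = H_n = 1 + 1/2 + ... + 1/n.
    e_min, e_max = 1.0 / n, np.sum(1.0 / np.arange(1, n + 1))
    print(f"n={n:3d}  E[min]: sim {min_sim:.4f} vs exact {e_min:.4f}   "
          f"E[max]: sim {max_sim:.4f} vs exact {e_max:.4f}")
```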

  7. A review of the matrix-exponential formalism in radiative transfer

    NASA Astrophysics Data System (ADS)

    Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian

    2017-07-01

    This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.

  8. Markov chain formalism for generalized radiative transfer in a plane-parallel medium, accounting for polarization

    NASA Astrophysics Data System (ADS)

    Xu, Feng; Davis, Anthony B.; Diner, David J.

    2016-11-01

    A Markov chain formalism is developed for computing the transport of polarized radiation according to Generalized Radiative Transfer (GRT) theory, which was developed recently to account for unresolved random fluctuations of scattering particle density and can also be applied to unresolved spectral variability of gaseous absorption as an improvement over the standard correlated-k method. Using a Gamma distribution to describe the probability density function of the extinction or absorption coefficient, a shape parameter a that quantifies the variability is introduced, defined as the mean extinction or absorption coefficient squared divided by its variance. It controls the decay rate of a power-law transmission that replaces the usual exponential Beer-Lambert-Bouguer law. Exponential transmission, hence classic RT, is recovered when a→∞. The new approach is verified to high accuracy against numerical benchmark results obtained with a custom Monte Carlo method. For a<∞, angular reciprocity is violated to a degree that increases with the spatial variability, as observed for finite portions of real-world cloudy scenes. While the degree of linear polarization in liquid water cloudbows, supernumerary bows, and glories is affected by spatial heterogeneity, the positions in scattering angle of these features are relatively unchanged. As a result, a single-scattering model based on the assumption of subpixel homogeneity can still be used to derive droplet size distributions from polarimetric measurements of extended stratocumulus clouds.
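
    Assuming the extinction coefficient is gamma distributed with shape a and unit mean, the ensemble-averaged transmission is (1 + τ/a)^(-a), which reduces to the Beer-Lambert exponential as a → ∞. The sketch below compares the two laws for a few values of a; the exact parametrization used in the paper may differ, so treat this as an illustrative form.

```python
import numpy as np

def transmission_exponential(tau):
    """Classic Beer-Lambert-Bouguer law."""
    return np.exp(-tau)

def transmission_power_law(tau, a):
    """Gamma-averaged transmission: <exp(-k tau)> over a gamma distribution of the
    extinction coefficient with shape a and mean 1 gives (1 + tau/a)^(-a)."""
    return (1.0 + tau / a) ** (-a)

tau = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
for a in (0.5, 2.0, 10.0, 1e6):       # a -> infinity recovers the exponential law
    print(f"a = {a:g}:", np.round(transmission_power_law(tau, a), 4))
print("exp(-tau): ", np.round(transmission_exponential(tau), 4))
```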

  9. Nonlinear dynamic evolution and control in CCFN with mixed attachment mechanisms

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Wang, Jianping; Han, Dun

    2017-01-01

    Wireless communication plays an important role in modern life. Cooperative communication, in which mobile stations with single antennas share their antennas to form a virtual MIMO system, is expected to become an important development for wireless communication, offering a diversity gain. In this paper, a fitness model of network evolution based on complex networks with mixed attachment mechanisms is devised in order to study an actual network, the cooperative communication fitness network (CCFN). Firstly, the evolution of the CCFN is described by four cases with different probabilities, and rate equations for the node degrees are presented to analyze the evolution of the CCFN. Secondly, the degree distribution is analyzed by solving the rate equations and by numerical simulation for four example fitness distributions: power-law, uniform, exponential, and Rayleigh. Finally, the robustness of the CCFN is studied by numerical simulation with the four fitness distributions under random and intentional attacks, in order to analyze the effects of the degree distribution, average path length, and average degree. The results of this paper offer insights for building CCFN systems and planning communication resources.

  10. Heterogeneous characters modeling of instant message services users’ online behavior

    PubMed Central

    Fang, Yajun; Horn, Berthold

    2018-01-01

    Research on the temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, and finance. Existing studies show that the time intervals between two consecutive events present different non-Poissonian characteristics, such as power-law, Pareto, bimodal power-law, exponential, and piecewise power-law distributions. With the emergence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to the QQ and WeChat services, the two most popular instant messaging (IM) services in China, and present a new finding: when the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law form, indicating the heterogeneous character of IM users' online behavior on different time scales. We infer that this heterogeneity is related to the communication mechanism of IM and the habits of users. We then develop a combined exponential and interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service differs between two cities, which is correlated with the popularity of the services. Our research is useful for applications in information diffusion, prediction of the economic development of cities, and so on. PMID:29734327

  11. Heterogeneous characters modeling of instant message services users' online behavior.

    PubMed

    Cui, Hongyan; Li, Ruibing; Fang, Yajun; Horn, Berthold; Welsch, Roy E

    2018-01-01

    Research on the temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, and finance. Existing studies show that the time intervals between two consecutive events present different non-Poissonian characteristics, such as power-law, Pareto, bimodal power-law, exponential, and piecewise power-law distributions. With the emergence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to the QQ and WeChat services, the two most popular instant messaging (IM) services in China, and present a new finding: when the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law form, indicating the heterogeneous character of IM users' online behavior on different time scales. We infer that this heterogeneity is related to the communication mechanism of IM and the habits of users. We then develop a combined exponential and interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service differs between two cities, which is correlated with the popularity of the services. Our research is useful for applications in information diffusion, prediction of the economic development of cities, and so on.

  12. Impact of oxide thickness on the density distribution of near-interface traps in 4H-SiC MOS capacitors

    NASA Astrophysics Data System (ADS)

    Zhang, Xufang; Okamoto, Dai; Hatakeyama, Tetsuo; Sometani, Mitsuru; Harada, Shinsuke; Iwamuro, Noriyuki; Yano, Hiroshi

    2018-06-01

    The impact of oxide thickness on the density distribution of near-interface traps (NITs) in SiO2/4H-SiC structure was investigated. We used the distributed circuit model that had successfully explained the frequency-dependent characteristics of both capacitance and conductance under strong accumulation conditions for SiO2/4H-SiC MOS capacitors with thick oxides by assuming an exponentially decaying distribution of NITs. In this work, it was found that the exponentially decaying distribution is the most plausible approximation of the true NIT distribution because it successfully explained the frequency dependences of capacitance and conductance under strong accumulation conditions for various oxide thicknesses. The thickness dependence of the NIT density distribution was also characterized. It was found that the NIT density increases with increasing oxide thickness, and a possible physical reason was discussed.

  13. Scaling behavior of sleep-wake transitions across species

    NASA Astrophysics Data System (ADS)

    Lo, Chung-Chuan; Chou, Thomas; Ivanov, Plamen Ch.; Penzel, Thomas; Mochizuki, Takatoshi; Scammell, Thomas; Saper, Clifford B.; Stanley, H. Eugene

    2003-03-01

    Uncovering the mechanisms controlling sleep is a fascinating scientific challenge. It can be viewed as transitions of states of a very complex system, the brain. We study the time dynamics of short awakenings during sleep for three species: humans, rats and mice. We find, for all three species, that wake durations follow a power-law distribution, and sleep durations follow exponential distributions. Surprisingly, all three species have the same power-law exponent for the distribution of wake durations, but the exponential time scale of the distributions of sleep durations varies across species. We suggest that the dynamics of short awakenings are related to species-independent fluctuations of the system, while the dynamics of sleep is related to system-dependent mechanisms which change with species.

  14. Cast aluminium single crystals cross the threshold from bulk to size-dependent stochastic plasticity

    NASA Astrophysics Data System (ADS)

    Krebs, J.; Rao, S. I.; Verheyden, S.; Miko, C.; Goodall, R.; Curtin, W. A.; Mortensen, A.

    2017-07-01

    Metals are known to exhibit mechanical behaviour at the nanoscale different to bulk samples. This transition typically initiates at the micrometre scale, yet existing techniques to produce micrometre-sized samples often introduce artefacts that can influence deformation mechanisms. Here, we demonstrate the casting of micrometre-scale aluminium single-crystal wires by infiltration of a salt mould. Samples have millimetre lengths, smooth surfaces, a range of crystallographic orientations, and a diameter D as small as 6 μm. The wires deform in bursts, at a stress that increases with decreasing D. Bursts greater than 200 nm account for roughly 50% of wire deformation and have exponentially distributed intensities. Dislocation dynamics simulations show that single-arm sources that produce large displacement bursts halted by stochastic cross-slip and lock formation explain microcast wire behaviour. This microcasting technique may be extended to several other metals or alloys and offers the possibility of exploring mechanical behaviour spanning the micrometre scale.

  15. Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes

    DOE PAGES

    Uhl, Jonathan T.; Pathak, Shivesh; Schorlemmer, Danijel; ...

    2015-11-17

    Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or “quakes”. We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects “tuned critical” behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. In conclusion, the results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.

  16. Global synchronization of memristive neural networks subject to random disturbances via distributed pinning control.

    PubMed

    Guo, Zhenyuan; Yang, Shaofu; Wang, Jun

    2016-12-01

    This paper presents theoretical results on global exponential synchronization of multiple memristive neural networks in the presence of external noise by means of two types of distributed pinning control. The multiple memristive neural networks are coupled in a general structure via a nonlinear function, which consists of a linear diffusive term and a discontinuous sign term. A pinning impulsive control law is introduced in the coupled system to synchronize all neural networks. Sufficient conditions are derived for ascertaining global exponential synchronization in mean square. In addition, a pinning adaptive control law is developed to achieve global exponential synchronization in mean square. Both pinning control laws utilize only partial state information received from the neighborhood of the controlled neural network. Simulation results are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Design and implementation of the NaI(Tl)/CsI(Na) detectors output signal generator

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Cong-Zhan; Zhao, Jian-Ling; Zhang, Fei; Zhang, Yi-Fei; Li, Zheng-Wei; Zhang, Shuo; Li, Xu-Fang; Lu, Xue-Feng; Xu, Zhen-Ling; Lu, Fang-Jun

    2014-02-01

    We designed and implemented a signal generator that can simulate the output of the NaI(Tl)/CsI(Na) detectors' pre-amplifiers onboard the Hard X-ray Modulation Telescope (HXMT). Using FPGA (Field Programmable Gate Array) development in the VHDL language and adding a random component, we produced a double-exponential random pulse signal generator. The statistical distribution of the signal amplitude is programmable, and the time intervals between adjacent signals statistically follow a negative exponential distribution.
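
    A software sketch (not the FPGA/VHDL implementation) of the signal model described above: pulses with a double-exponential shape arrive at intervals drawn from a negative exponential distribution, with a programmable amplitude distribution. All rates, time constants, and the pulse shape are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def double_exp_pulse(t, amplitude, tau_rise=0.05, tau_decay=1.0):
    """Pre-amplifier-like pulse shape: difference of two exponentials."""
    t = np.maximum(t, 0.0)
    return amplitude * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

# Event times: inter-arrival intervals follow a negative exponential distribution.
rate = 200.0                                     # mean event rate, 1/ms (assumed)
intervals = rng.exponential(1.0 / rate, size=1000)
event_times = np.cumsum(intervals)

# Programmable amplitude statistics; here a simple exponential energy spectrum.
amplitudes = rng.exponential(1.0, size=event_times.size)

# Sample the summed waveform on a regular time grid.
t_grid = np.arange(0.0, event_times[-1], 0.01)   # ms
waveform = np.zeros_like(t_grid)
for t0, amp in zip(event_times, amplitudes):
    waveform += double_exp_pulse(t_grid - t0, amp)

print("events:", event_times.size, " waveform samples:", t_grid.size,
      " peak:", round(waveform.max(), 3))
```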

  18. Bonus-Malus System with the Claim Frequency Distribution is Geometric and the Severity Distribution is Truncated Weibull

    NASA Astrophysics Data System (ADS)

    Santi, D. N.; Purnaba, I. G. P.; Mangku, I. W.

    2016-01-01

    A Bonus-Malus system is said to be optimal if it is financially balanced for insurance companies and fair for policyholders. Previous research on Bonus-Malus systems concerned the determination of a risk premium applied to all severities covered by the insurance company. In fact, not every claim severity proposed by a policyholder is covered by the insurance company. When the insurance company sets a maximum bound on the severity covered, it is necessary to modify the severity distribution into a bounded (truncated) severity distribution. In this paper, an optimal Bonus-Malus system composed of a claim frequency component with a geometric distribution and a severity component with a truncated Weibull distribution is discussed. The number of claims is considered to follow a Poisson distribution whose expected number λ is exponentially distributed, so the number of claims has a geometric distribution. The severity, given a parameter θ, is considered to have a truncated exponential distribution, and θ is modelled using the Lévy distribution, so the severity has a truncated Weibull distribution.
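
    The sketch below checks numerically that a Poisson claim count whose mean λ is exponentially distributed is marginally geometric, and samples severities from a Weibull distribution right-truncated at a maximum covered amount by inverting the restricted CDF. All parameter values (μ, the Weibull shape and scale, the bound m) are hypothetical, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(8)

# Claim frequency: N | lambda ~ Poisson(lambda), lambda ~ Exponential(mean=mu).
mu = 0.8
lam = rng.exponential(mu, size=200_000)
claims = rng.poisson(lam)

# The marginal of N is geometric on {0, 1, 2, ...} with success prob p = 1/(1+mu).
p = 1.0 / (1.0 + mu)
for k in range(4):
    empirical = np.mean(claims == k)
    geometric = p * (1.0 - p) ** k
    print(f"P(N={k}): simulated {empirical:.4f}  vs geometric {geometric:.4f}")

# Claim severity: Weibull truncated at a maximum covered amount m, sampled by
# inverting the CDF restricted to [0, m].
shape, scale, m = 1.4, 10.0, 25.0
u = rng.uniform(size=5)
F_m = 1.0 - np.exp(-(m / scale) ** shape)            # CDF at the truncation point
severity = scale * (-np.log(1.0 - u * F_m)) ** (1.0 / shape)
print("truncated-Weibull severities:", np.round(severity, 2))
```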

  19. Asymptotic radiance and polarization in optically thick media: ocean and clouds.

    PubMed

    Kattawar, G W; Plass, G N

    1976-12-01

    Deep in a homogeneous medium that both scatters and absorbs photons, such as a cloud, the ocean, or a thick planetary atmosphere, the radiance decreases exponentially with depth, while the angular dependence of the radiance and polarization is independent of depth. In this diffusion region, the asymptotic radiance and polarization are also independent of the incident distribution of radiation at the upper surface of the medium. An exact expression is derived for the asymptotic radiance and polarization for Rayleigh scattering. The approximate expression for the asymptotic radiance derived from the scalar theory is shown to be in error by as much as 16.4%. An exact expression is also derived for the relation between the diffusion exponent k and the single scattering albedo. A method is developed for the numerical calculation of the asymptotic radiance and polarization for any scattering matrix. Results are given for scattering from the haze L and cloud C3 distributions for a wide range of single scattering albedos. When the absorption is large, the polarization in the diffusion region approaches the values obtained for single scattered photons, while the radiance approaches the value calculated from the expression: phase function divided by (1 + kμ), where μ is the cosine of the zenith angle. The asymptotic distribution of the radiation is of interest since it depends only on the inherent optical properties of the medium. It is, however, difficult to observe when the absorption is large because of the very low radiance values in the diffusion region.

  20. Extreme event statistics in a drifting Markov chain

    NASA Astrophysics Data System (ADS)

    Kindermann, Farina; Hohmann, Michael; Lausch, Tobias; Mayer, Daniel; Schmidt, Felix; Widera, Artur

    2017-07-01

    We analyze extreme event statistics of experimentally realized Markov chains with various drifts. Our Markov chains are individual trajectories of a single atom diffusing in a one-dimensional periodic potential. Based on more than 500 individual atomic traces we verify the applicability of the Sparre Andersen theorem to our system despite the presence of a drift. We present detailed analysis of four different rare-event statistics for our system: the distributions of extreme values, of record values, of extreme value occurrence in the chain, and of the number of records in the chain. We observe that, for our data, the shape of the extreme event distributions is dominated by the underlying exponential distance distribution extracted from the atomic traces. Furthermore, we find that even small drifts influence the statistics of extreme events and record values, which is supported by numerical simulations, and we identify cases in which the drift can be determined without information about the underlying random variable distributions. Our results facilitate the use of extreme event statistics as a signal for small drifts in correlated trajectories.

  1. Unlimited multistability in multisite phosphorylation systems.

    PubMed

    Thomson, Matthew; Gunawardena, Jeremy

    2009-07-09

    Reversible phosphorylation on serine, threonine and tyrosine is the most widely studied posttranslational modification of proteins. The number of phosphorylated sites on a protein (n) shows a significant increase from prokaryotes to eukaryotes, with examples of n ≥ 150 sites. Multisite phosphorylation has many roles and site conservation indicates that increasing numbers of sites cannot be due merely to promiscuous phosphorylation. A substrate with n sites has an exponential number (2^n) of phospho-forms and individual phospho-forms may have distinct biological effects. The distribution of these phospho-forms and how this distribution is regulated have remained unknown. Here we show that, when kinase and phosphatase act in opposition on a multisite substrate, the system can exhibit distinct stable phospho-form distributions at steady state and that the maximum number of such distributions increases with n. Whereas some stable distributions are focused on a single phospho-form, others are more diffuse, giving the phospho-proteome the potential to behave as a fluid regulatory network able to encode information and flexibly respond to varying demands. Such plasticity may underlie complex information processing in eukaryotic cells and suggests a functional advantage in having many sites. Our results follow from the unusual geometry of the steady-state phospho-form concentrations, which we show to constitute a rational algebraic curve, irrespective of n. We thereby reduce the complexity of calculating steady states from simulating 3 × 2^n differential equations to solving two algebraic equations, while treating parameters symbolically. We anticipate that these methods can be extended to systems with multiple substrates and multiple enzymes catalysing different modifications, as found in posttranslational modification 'codes' such as the histone code. Whereas simulations struggle with exponentially increasing molecular complexity, mathematical methods of the kind developed here can provide a new language in which to articulate the principles of cellular information processing.

  2. Exponential model for option prices: Application to the Brazilian market

    NASA Astrophysics Data System (ADS)

    Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.

    2016-03-01

    In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.

  3. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of the exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data and background corrections derived from this model may lead to wrong estimation. We propose a more flexible modeling based on a gamma distributed signal and a normal distributed background noise and develop the associated background correction, implemented in the R-package NormalGamma. Our model proves to be markedly more accurate to model Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models are compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling makes way for future investigations, in particular to examine the characteristics of pre-processing strategies.

  4. Social diversity promotes cooperation in spatial multigames

    NASA Astrophysics Data System (ADS)

    Qin, Jiahu; Chen, Yaming; Kang, Yu; Perc, Matjaž

    2017-04-01

    Social diversity is omnipresent in the modern world. Here we introduce this diversity into spatial multigames and study its impact on the evolution of cooperation. Multigames are characterized by two or more different social dilemmas being contested among players in the population. When a fraction of players plays the prisoner's dilemma game while the remainder plays the snowdrift game cooperation becomes a difficult proposition. We show that social diversity, determined by the payoff scaling factors from the uniform, exponential or power-law distribution, significantly promotes cooperation. In particular, the stronger the social diversity, the more widespread cooperative behavior becomes. Monte Carlo simulations on the square lattice reveal that a power-law distribution of social diversity is in fact optimal for socially favorable states, thus resonating with findings previously reported for single social dilemmas. We also show that the same promotion mechanism works in time-varying environments, thus further generalizing the important role of social diversity for cooperation in social dilemmas.

  5. Microgels: Structure, Dynamics, and Possible Applications.

    NASA Astrophysics Data System (ADS)

    McKenna, John; Streletzky, Kiril

    2007-03-01

    We cross-linked Hydroxypropylcellulose (HPC) polymer chains to produce microgel nanoparticles and studied their structure and dynamics using Dynamic Light Scattering spectroscopy. The complex nature of the fluid and the large size distribution of the particles render the typical characterization algorithm, CONTIN, ineffective and inconsistent. Instead, the particle spectra have been fit to a sum of stretched exponentials. Each term offers three parameters for analysis and represents a single mode. The results of this analysis show that the microgels undergo a transition to fewer modes around 41 °C. The CONTIN size distribution analysis shows similar results, but with much less consistency and resolution. Our experiments show that microgel particles shrink at the volume phase transition. The shrinkage is reversible and depends on the amount of cross-linker, the salt and polymer concentrations, and the rate of heating. The reversibility of the microgel volume phase transition might be particularly useful for controlled drug delivery and release.
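
    As an illustration of the fitting approach described (not the authors' actual analysis code), a sketch in Python of fitting a correlation curve with a sum of two stretched exponentials, each mode contributing an amplitude, a relaxation time and a stretching exponent; the synthetic data and parameter values are hypothetical.

      import numpy as np
      from scipy.optimize import curve_fit

      def two_stretched_exp(t, a1, tau1, beta1, a2, tau2, beta2):
          # Sum of two stretched exponentials: each term represents one relaxation mode.
          return a1 * np.exp(-(t / tau1) ** beta1) + a2 * np.exp(-(t / tau2) ** beta2)

      t = np.logspace(-6, 0, 200)                      # lag times (s), hypothetical range
      true = (0.6, 1e-4, 0.9, 0.4, 1e-2, 0.7)          # hypothetical "true" parameters
      rng = np.random.default_rng(0)
      g = two_stretched_exp(t, *true) + rng.normal(0, 5e-3, t.size)

      popt, pcov = curve_fit(two_stretched_exp, t, g, p0=(0.5, 1e-4, 0.9, 0.5, 1e-2, 0.9),
                             bounds=([0, 1e-7, 0.3, 0, 1e-7, 0.3], [1, 1, 1, 1, 1, 1]))
      print(popt)   # amplitudes, relaxation times and stretching exponents of the two modes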

  6. Nucleation study for an undercooled melt of intermetallic NiZr

    NASA Astrophysics Data System (ADS)

    Kobold, R.; Kolbe, M.; Hornfeck, W.; Herlach, D. M.

    2018-03-01

    Electrostatic levitation is applied in order to undercool liquid glass-forming NiZr significantly below its melting temperature. For NiZr, large undercoolings are found to be highly reproducible with this experimental method. One single NiZr sample of high purity is undercooled 200 consecutive times, which leads to a distribution function of undercooling temperatures. Within a statistical approach of classical nucleation theory, the undercooling distribution is analyzed, yielding parameters such as a pre-exponential factor of KV ≈ 10^35 m^-3 s^-1, which indicates homogeneous nucleation. This result is consistent with the crystallization behavior of NiZr at high undercooling and with the corresponding microstructural analysis. Since NiZr is a representative of the very common CrB structure type, with 132 isostructural phases existing, understanding its nucleation behavior adds important knowledge to the nucleation of binary alloys in general.

  7. Modeling the Role of Dislocation Substructure During Class M and Exponential Creep. Revised

    NASA Technical Reports Server (NTRS)

    Raj, S. V.; Iskovitz, Ilana Seiden; Freed, A. D.

    1995-01-01

    The different substructures that form in the power-law and exponential creep regimes for single-phase crystalline materials under various conditions of stress, temperature and strain are reviewed. The microstructure is correlated both qualitatively and quantitatively with power-law and exponential creep as well as with steady-state and non-steady-state deformation behavior. These observations suggest that creep is influenced by a complex interaction between several elements of the microstructure, such as dislocations, cells and subgrains. The stability of the creep substructure is examined in both of these creep regimes during stress and temperature change experiments. These observations are rationalized on the basis of a phenomenological model, where normal primary creep is interpreted as a series of constant-structure exponential creep rate-stress relationships. The implications of this viewpoint for the magnitude of the stress exponent and steady-state behavior are discussed. A theory is developed to predict the macroscopic creep behavior of a single-phase material using quantitative microstructural data. In this technique the thermally activated deformation mechanisms proposed by dislocation physics are interlinked with a previously developed multiphase, three-dimensional dislocation substructure creep model. This procedure leads to several coupled differential equations interrelating macroscopic creep plasticity with microstructural evolution.

  8. Dynamics of optical matter creation and annihilation in colloidal liquids controlled by laser trapping power.

    PubMed

    Liu, Jin; Dai, Qiao-Feng; Huang, Xu-Guang; Wu, Li-Jun; Guo, Qi; Hu, Wei; Yang, Xiang-Bo; Lan, Sheng; Gopal, Achanta Venu; Trofimov, Vyacheslav A

    2008-11-15

    We investigate the dynamics of optical matter creation and annihilation in a colloidal liquid that was employed to construct an all-optical switch. It is revealed that the switching-on process can be characterized by the Fermi-Dirac distribution function, while the switching-off process can be described by a steady state followed by a single exponential decay. The phase transition times exhibit a strong dependence on trapping power. With increasing trapping power, the switching-on time decreases rapidly while the switching-off time increases significantly, indicating the effects of optical binding and the van der Waals force on the lifetime of the optical matter.

  9. Coagulation-Fragmentation Model for Animal Group-Size Statistics

    NASA Astrophysics Data System (ADS)

    Degond, Pierre; Liu, Jian-Guo; Pego, Robert L.

    2017-04-01

    We study coagulation-fragmentation equations inspired by a simple model proposed in fisheries science to explain data for the size distribution of schools of pelagic fish. Although the equations lack detailed balance and admit no H-theorem, we are able to develop a rather complete description of equilibrium profiles and large-time behavior, based on recent developments in complex function theory for Bernstein and Pick functions. In the large-population continuum limit, a scaling-invariant regime is reached in which all equilibria are determined by a single scaling profile. This universal profile exhibits power-law behavior crossing over from exponent -2/3 for small size to -3/2 for large size, with an exponential cutoff.

  10. Analysis of two production inventory systems with buffer, retrials and different production rates

    NASA Astrophysics Data System (ADS)

    Jose, K. P.; Nair, Salini S.

    2017-09-01

    This paper considers the comparison of two (s, S) production inventory systems with retrials of unsatisfied customers. The time for producing and adding each item to the inventory is exponentially distributed with rate β. However, a higher production rate αβ (α > 1) is used at the beginning of production; the higher production rate reduces customer loss when the inventory level approaches zero. Customer demand follows a Poisson process, and service times are exponentially distributed. Upon arrival, customers enter a buffer of finite capacity. An arriving customer who finds the buffer full moves to an orbit, from which retrials occur with exponentially distributed inter-retrial times. The two models differ in the capacity of the buffer. The aim is to find the minimum value of the total cost by varying different parameters and to compare the efficiency of the models. The optimum value of α corresponding to the minimum total cost is an important part of the evaluation. The matrix-analytic method is used to find an algorithmic solution to the problem. We also provide several numerical and graphical illustrations.

  11. A non-Boltzmannian behavior of the energy distribution for quasi-stationary regimes of the Fermi–Pasta–Ulam β system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leo, Mario, E-mail: mario.leo@le.infn.it; Leo, Rosario Antonio, E-mail: leora@le.infn.it; Tempesta, Piergiulio, E-mail: p.tempesta@fis.ucm.es

    2013-06-15

    In a recent paper [M. Leo, R.A. Leo, P. Tempesta, C. Tsallis, Phys. Rev. E 85 (2012) 031149], the existence of quasi-stationary states for the Fermi–Pasta–Ulam β system has been shown numerically, by analyzing the stability properties of the N/4-mode exact nonlinear solution. Here we study the energy distribution of the modes N/4, N/3 and N/2, when they are unstable, as a function of N and of the initial excitation energy. We observe that the classical Boltzmann weight is replaced by a different weight, expressed by a q-exponential function. -- Highlights: ► New statistical properties of the Fermi–Pasta–Ulam β system are found. ► The energy distributions of specific observables are studied: a deviation from the standard Boltzmann behavior is found. ► A q-exponential weight should be used instead. ► The classical exponential weight is restored in the large-particle limit (mesoscopic nature of the phenomenon).

  12. Statistical analyses support power law distributions found in neuronal avalanches.

    PubMed

    Klaus, Andreas; Yu, Shan; Plenz, Dietmar

    2011-01-01

    The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
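
    A hedged sketch, in Python, of the kind of analysis described: a continuous maximum-likelihood estimate of the power-law exponent above a chosen x_min, the Kolmogorov-Smirnov distance of that fit, and a log-likelihood ratio against an exponential alternative. The data, x_min and the shifted-exponential alternative below are synthetic illustrations; the authors' actual pipeline (discrete fits, exponential cut-offs, additional candidate distributions) is more elaborate.

      import numpy as np

      def powerlaw_mle(x, xmin):
          # Continuous MLE for P(x) ~ x^(-alpha), x >= xmin (Hill-type estimator).
          x = x[x >= xmin]
          alpha = 1.0 + x.size / np.sum(np.log(x / xmin))
          return alpha, x

      def ks_distance_powerlaw(x, xmin, alpha):
          x = np.sort(x[x >= xmin])
          cdf_model = 1.0 - (x / xmin) ** (1.0 - alpha)
          cdf_empirical = np.arange(1, x.size + 1) / x.size
          return np.max(np.abs(cdf_empirical - cdf_model))

      xmin = 1.0
      rng = np.random.default_rng(2)
      sizes = (rng.pareto(0.5, size=5000) + 1.0) * xmin     # synthetic "avalanche sizes", density exponent 1.5

      alpha, tail = powerlaw_mle(sizes, xmin)
      print("alpha =", alpha, " KS =", ks_distance_powerlaw(sizes, xmin, alpha))

      # Log-likelihood ratio: power law versus a shifted exponential on the same tail sample.
      ll_pl = np.sum(np.log((alpha - 1.0) / xmin) - alpha * np.log(tail / xmin))
      lam = 1.0 / np.mean(tail - xmin)
      ll_exp = np.sum(np.log(lam) - lam * (tail - xmin))
      print("log-likelihood ratio (power law minus exponential):", ll_pl - ll_exp)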

  13. Spatial analysis of soil organic carbon in Zhifanggou catchment of the Loess Plateau.

    PubMed

    Li, Mingming; Zhang, Xingchang; Zhen, Qing; Han, Fengpeng

    2013-01-01

    Soil organic carbon (SOC) reflects soil quality and plays a critical role in soil protection, food safety, and global climate change. This study involved grid sampling at different depths (6 layers) between 0 and 100 cm in a catchment. A total of 1282 soil samples were collected from 215 plots over 8.27 km^2. A combination of conventional analytical methods and geostatistical methods was used to analyze the data for spatial variability and soil carbon content patterns. The mean SOC content in the 1282 samples from the study field was 3.08 g·kg^-1. The SOC content of each layer decreased with increasing soil depth following a power-function relationship. The SOC content of each layer was moderately variable and followed a lognormal distribution. The semi-variograms of the SOC contents of the six layers were fit with the following models: exponential, spherical, exponential, Gaussian, exponential, and exponential, respectively. A moderate spatial dependence was observed in the 0-10 and 10-20 cm layers, which resulted from stochastic and structural factors. The spatial distribution of SOC content in the four layers between 20 and 100 cm was mainly restricted by structural factors. Correlations within each layer were observed between 234 and 562 m. A classical Kriging interpolation was used to directly visualize the spatial distribution of SOC in the catchment. The variability in spatial distribution was related to topography, land use type, and human activity. Finally, the vertical distribution of SOC decreased with depth. Our results suggest that ordinary Kriging interpolation can directly reveal the spatial distribution of SOC and that the sampling distance used in this study is sufficient for interpolation and plotting. More research is needed, however, to clarify the spatial variability at larger scales and to better understand the factors controlling the spatial variability of soil carbon in the Loess Plateau region.

  14. Non-extensive quantum statistics with particle-hole symmetry

    NASA Astrophysics Data System (ADS)

    Biró, T. S.; Shen, K. M.; Zhang, B. W.

    2015-06-01

    Based on the Tsallis entropy (1988) and the corresponding deformed exponential function, generalized distribution functions for bosons and fermions have been in use for some time (Teweldeberhan et al., 2003; Silva et al., 2010). However, aiming at a non-extensive quantum statistics, further requirements arise from the symmetric handling of particles and holes (excitations above and below the Fermi level). Naive replacements of the exponential function or "cut and paste" solutions fail to satisfy this symmetry and to be smooth at the Fermi level at the same time. We solve this problem by a general ansatz dividing the deformed exponential into odd and even terms and demonstrate how earlier suggestions, such as the κ- and q-exponential, behave in this respect.

  15. Wealth distribution, Pareto law, and stretched exponential decay of money: Computer simulations analysis of agent-based models

    NASA Astrophysics Data System (ADS)

    Aydiner, Ekrem; Cherstvy, Andrey G.; Metzler, Ralf

    2018-01-01

    We study by Monte Carlo simulations a kinetic exchange trading model for both fixed and distributed saving propensities of the agents and rationalize the person and wealth distributions. We show that the newly introduced wealth distribution - that may be more amenable in certain situations - features a different power-law exponent, particularly for distributed saving propensities of the agents. For open agent-based systems, we analyze the person and wealth distributions and find that the presence of trap agents alters their amplitude, leaving however the scaling exponents nearly unaffected. For an open system, we show that the total wealth - for different trap agent densities and saving propensities of the agents - decreases in time according to the classical Kohlrausch-Williams-Watts stretched exponential law. Interestingly, this decay does not depend on the trap agent density, but rather on saving propensities. The system relaxation for fixed and distributed saving schemes are found to be different.
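
    A minimal sketch, in Python, of a closed kinetic exchange model with a fixed saving propensity, in the spirit of the class of models studied here; the open-system ingredients analyzed in the paper (trap agents, inflow and outflow of agents) are not reproduced, and the parameter values are illustrative only.

      import numpy as np

      def kinetic_exchange(n_agents=1000, n_steps=200000, saving=0.5, seed=0):
          # Pairwise wealth exchange with a fixed saving propensity: each agent keeps a
          # fraction `saving` of its wealth, the rest is pooled and split randomly
          # between the two trading partners, so total wealth is conserved.
          rng = np.random.default_rng(seed)
          wealth = np.ones(n_agents)
          for _ in range(n_steps):
              i, j = rng.integers(0, n_agents, size=2)
              if i == j:
                  continue
              pool = (1.0 - saving) * (wealth[i] + wealth[j])
              eps = rng.random()
              wealth[i] = saving * wealth[i] + eps * pool
              wealth[j] = saving * wealth[j] + (1.0 - eps) * pool
          return wealth

      w = kinetic_exchange()
      print("mean wealth:", w.mean(), " relative spread (std/mean):", w.std() / w.mean())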

  16. Crack problem in superconducting cylinder with exponential distribution of critical-current density

    NASA Astrophysics Data System (ADS)

    Zhao, Yufeng; Xu, Chi; Shi, Liang

    2018-04-01

    The general problem of a center crack in a long cylindrical superconductor with an inhomogeneous critical-current distribution is studied based on the extended Bean model for zero-field-cooling (ZFC) and field-cooling (FC) magnetization processes, in which an inhomogeneity parameter η is introduced to characterize the critical-current density distribution in the inhomogeneous superconductor. The effect of the parameter η on both the magnetic field distribution and the variations of the normalized stress intensity factors is also obtained based on the plane-strain approach and J-integral theory. The numerical results indicate that the exponential distribution of critical-current density leads to a larger trapped field inside the inhomogeneous superconductor and causes the center of the cylinder to fracture more easily. In addition, a comparison of the magnetization-loop curve shapes for homogeneous and inhomogeneous critical-current distributions shows that the nonlinear field distribution is unique to the Bean model.

  17. Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function

    NASA Astrophysics Data System (ADS)

    Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.

    2017-06-01

    This paper presents a study of cancer patients after receiving treatment, with censored data, using Bayesian estimation under the Linex loss function for a survival model assumed to follow an exponential distribution. Using a gamma prior, the likelihood function produces a gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL by means of the Linex approximation. From λ̂_BL, the estimators of the hazard function ĥ_BL and the survival function Ŝ_BL can be found. Finally, we compare the results of Maximum Likelihood Estimation (MLE) and the Linex approximation to find the better method for this data set by identifying the smaller MSE. The results show that the MSEs of the hazard and survival functions under MLE are 2.91728E-07 and 0.000309004, while under the Bayesian Linex approach they are 2.8727E-07 and 0.000304131, respectively. We conclude that the Bayesian Linex estimator is better than the MLE.
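
    A hedged sketch, in Python, of the conjugate gamma-exponential calculation involved: with an exponential likelihood and a Gamma(a, b) prior on λ, the posterior is Gamma(a + n, b + Σx), and under a Linex loss exp(cΔ) − cΔ − 1 the Bayes estimator is λ̂_BL = −(1/c) ln E[exp(−cλ) | data], which has a closed form via the gamma moment generating function. The numerical values are illustrative, not the paper's data, and censoring is ignored in this toy example.

      import numpy as np

      def linex_estimator_exponential(x, a_prior, b_prior, c):
          # Posterior of the exponential rate lambda given data x and a Gamma(a, b) prior
          # (b is the rate parameter of the prior).
          n, s = len(x), float(np.sum(x))
          a_post, b_post = a_prior + n, b_prior + s
          # Linex Bayes estimator: -(1/c) * log E[exp(-c*lambda)] for the gamma posterior.
          lam_bl = (a_post / c) * np.log(1.0 + c / b_post)
          lam_mle = n / s
          return lam_bl, lam_mle

      rng = np.random.default_rng(3)
      data = rng.exponential(scale=1.0 / 0.2, size=30)      # hypothetical uncensored survival times
      lam_bl, lam_mle = linex_estimator_exponential(data, a_prior=2.0, b_prior=1.0, c=1.0)
      t = 5.0
      print("lambda (Linex, MLE):", lam_bl, lam_mle)
      print("survival S(t) = exp(-lambda*t) at t = 5:", np.exp(-lam_bl * t), np.exp(-lam_mle * t))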

  18. Effect of the state of internal boundaries on granite fracture nature under quasi-static compression

    NASA Astrophysics Data System (ADS)

    Damaskinskaya, E. E.; Panteleev, I. A.; Kadomtsev, A. G.; Naimark, O. B.

    2017-05-01

    Based on an analysis of the spatial distribution of hypocenters of acoustic emission signal sources and an analysis of the energy distributions of acoustic emission signals, the effect of the liquid phase and of a weak electric field on the spatiotemporal nature of granite sample fracture is studied. Experiments on uniaxial compression of granite samples of natural moisture showed that the damage accumulation process is two-stage: dispersed accumulation of damage is followed by localized accumulation of damage in the region of the forming macrofracture nucleus. In the energy distributions of acoustic emission signals, this transition is accompanied by a change in the distribution shape from exponential to power-law. Water saturation of the granite qualitatively changes the nature of damage accumulation: the process remains delocalized up to macrofracture, with an exponential energy distribution of acoustic emission signals. Exposure to a weak electric field results in a selective change in the nature of damage accumulation in the sample volume.

  19. Turbulence hierarchy in a random fibre laser

    PubMed Central

    González, Iván R. Roa; Lima, Bismarck C.; Pincheira, Pablo I. R.; Brum, Arthur A.; Macêdo, Antônio M. S.; Vasconcelos, Giovani L.; de S. Menezes, Leonardo; Raposo, Ernesto P.; Gomes, Anderson S. L.; Kashyap, Raman

    2017-01-01

    Turbulence is a challenging feature common to a wide range of complex phenomena. Random fibre lasers are a special class of lasers in which the feedback arises from multiple scattering in a one-dimensional disordered cavity-less medium. Here we report on statistical signatures of turbulence in the distribution of intensity fluctuations in a continuous-wave-pumped erbium-based random fibre laser, with random Bragg grating scatterers. The distribution of intensity fluctuations in an extensive data set exhibits three qualitatively distinct behaviours: a Gaussian regime below threshold, a mixture of two distributions with exponentially decaying tails near the threshold and a mixture of distributions with stretched-exponential tails above threshold. All distributions are well described by a hierarchical stochastic model that incorporates Kolmogorov’s theory of turbulence, which includes energy cascade and the intermittence phenomenon. Our findings have implications for explaining the remarkably challenging turbulent behaviour in photonics, using a random fibre laser as the experimental platform. PMID:28561064

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    Rank distributions are collections of positive sizes ordered either increasingly or decreasingly. Many decreasing rank distributions, formed by the collective collaboration of human actions, follow an inverse power-law relation between ranks and sizes. This remarkable empirical fact is termed Zipf’s law, and one of its quintessential manifestations is the demography of human settlements — which exhibits a harmonic relation between ranks and sizes. In this paper we present a comprehensive statistical-physics analysis of rank distributions, establish that power-law and exponential rank distributions stand out as optimal in various entropy-based senses, and unveil the special role of the harmonic relation between ranks and sizes. Our results extend the contemporary entropy-maximization view of Zipf’s law to a broader, panoramic, Gibbsian perspective of increasing and decreasing power-law and exponential rank distributions — of which Zipf’s law is one out of four pillars.

  1. Fluorescence lifetimes of anthracycline drugs in phospholipid bilayers determined by frequency-domain fluorometry

    NASA Astrophysics Data System (ADS)

    Burke, Thomas G.; Malak, Henryk M.; Doroshow, James H.

    1990-05-01

    Time-resolved fluorescence intensity decay data from anthracycline anticancer drugs present in model membranes were obtained using a gigahertz frequency-domain fluorometer [Lakowicz et al. (1986) Rev. Sci. Instrum. 57, 2499-2506]. Exciting light of 290 nm, modulated at multiple frequencies from 8 MHz to 400 MHz, was used to study the interactions of Adriamycin, daunomycin and related antibiotics with small unilamellar vesicles composed of dimyristoylphosphatidylcholine (DMPC) at 28°C. Fluorescence decay data for drug molecules free in solution as well as bound to membranes were best fit by exponentials requiring two terms rather than by single exponential decays. For example, one-component analysis of the decay data for Adriamycin free in phosphate buffered saline (PBS) solution resulted in a reduced χ² value of 140 (τ = 0.88 ns), while a two-component fit resulted in a substantially smaller reduced χ² value of 2.6 (τ1 = 1.13 ns, α1 = 0.60, τ2 = 0.30 ns). Upon association with membranes, each of the anthracyclines studied displayed a larger τ1 value while the τ2 value remained the same or increased (for example, DMPC-bound Adriamycin showed τ1 = 1.68 ns, α1 = 0.64, τ2 = 0.33 ns). Analyses of the fluorescence emission decays of anthracyclines were also made assuming each decay is composed of a single Lorentzian distribution of lifetimes. Data taken on Adriamycin in PBS, when fit using one continuous component, displayed τ, α, w, and reduced χ² values of 0.68 ns, 1, 0.60 ns, and 9.1, respectively. The distribution became quite broad upon drug association with membrane (DMPC-bound Adriamycin: τ = 0.75 ns, α = 1, w = 2.24 ns, χ² = 13). For each anthracycline studied, continuous component fits showed significant broadening in the distributions upon drug association with membrane. Relatively large shifts in lifetime values were observed for the carminomycin and 4-demethoxydaunomycin analogues upon binding model lipid membranes, making these agents good candidates to employ in future studies on anthracycline interactions with more environmentally-complex biological membranes.

  2. Proton sensitivity of rat cerebellar granule cell GABAA receptors: dependence on neuronal development

    PubMed Central

    Krishek, Belinda J; Smart, Trevor G

    2001-01-01

    The effect of GABAA receptor development in culture on the modulation of GABA-induced currents by external H+ was examined in cerebellar granule cells using whole-cell and single-channel recording. Equilibrium concentration-response curves revealed a lower potency for GABA between 11 and 12 days in vitro (DIV) resulting in a shift of the EC50 from 10.7 to 2.4 μM. For granule cells before 11 DIV, the peak GABA-activated current was inhibited at low external pH and enhanced at high pH with a pKa of 6.65. For the steady-state response, low pH was inhibitory with a pKa of 5.56. After 11 DIV, the peak GABA-activated current was largely pH insensitive; however, the steady-state current was potentiated at low pH with a pKa of 6.84. Single GABA-activated ion channels were recorded from outside-out patches of granule cell bodies. At pH 5.4-9.4, single GABA channels exhibited multiple conductance states occurring at 22-26, 16-17 and 12-14 pS. The conductance levels were not significantly altered over the time period of study, nor by changing the external H+ concentration. Two exponential functions were required to fit the open-time frequency histograms at both early (< 11 DIV) and late (> 11 DIV) development times at each H+ concentration. The short and long open time constants were unaffected either by the extracellular H+ concentration or by neuronal development. The distribution of all shut times was fitted by the sum of three exponentials designated as short, intermediate and long. At acidic pH, the long shut time constant decreased with development as did the relative contribution of these components to the overall distribution. This was concurrent with an increase in the mean probability of channel opening. In conclusion, this study demonstrates in cerebellar granule cells that external pH can either reduce, have no effect on, or enhance GABA-activated responses depending on the stage of development, possibly related to the subunit composition of the GABAA receptors. The mode of interaction of H+ at the single-channel level and implications of such interactions at cerebellar granule cell GABAA receptors are discussed. PMID:11208970

  3. The modifier effects of chymotrypsin and trypsin enzymes on fluorescence lifetime distribution of "N-(1-pyrenyl)maleimide-bovine serum albumin" complex

    NASA Astrophysics Data System (ADS)

    Özyiğit, İbrahim Ethem; Karakuş, Emine; Pekcan, Önder

    2016-02-01

    Chymotrypsin and trypsin are well-known proteolytic enzymes, both of which are synthesized in the pancreas as their inactive precursors - chymotrypsinogen and trypsinogen - and then released into the duodenum to cut proteins into smaller peptides. In this paper, the effects of the activities of the chymotrypsin and trypsin enzymes on the fluorescence lifetime distributions of the substrate bovine serum albumin (BSA) modified with N-(1-pyrenyl)maleimide (PM) were examined. In labeling BSA with PM, the aim was to attach PM to the single free thiol (Cys34) and to all accessible free amine groups in order to produce the highest possible number of pyrene excimers and thereby lifetime distributions spanning the widest range, which may show distinguishing changes resulting from the activities of the proteases. A time-resolved spectrofluorometer was used to monitor fluorescence decays, which were analyzed using the exponential series method (ESM) to obtain the changes in the lifetime distributions. After exposure of the synthesized substrate PM-BSA to the enzymes, the fluorescence lifetime distributions exhibited different structures, which were attributed to the different activities of the proteases.

  4. Global exponential stability analysis on impulsive BAM neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Li, Yao-Tang; Yang, Chang-Bo

    2006-12-01

    Using the M-matrix and topological degree tools, sufficient conditions are obtained for the existence, uniqueness and global exponential stability of the equilibrium point of bidirectional associative memory (BAM) neural networks with distributed delays and subjected to impulsive state displacements at fixed instants of time, by constructing a suitable Lyapunov functional. The results remove the usual assumptions of boundedness, monotonicity, and differentiability of the activation functions. It is shown that in some cases the stability criteria can be easily checked. Finally, an illustrative example is given to show the effectiveness of the presented criteria.

  5. Existence and global exponential stability of periodic solution to BAM neural networks with periodic coefficients and continuously distributed delays

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Chen, A.; Zhou, Y.

    2005-08-01

    By using the continuation theorem of coincidence degree theory and Liapunov function, we obtain some sufficient criteria to ensure the existence and global exponential stability of periodic solution to the bidirectional associative memory (BAM) neural networks with periodic coefficients and continuously distributed delays. These results improve and generalize the works of papers [J. Cao, L. Wang, Phys. Rev. E 61 (2000) 1825] and [Z. Liu, A. Chen, J. Cao, L. Huang, IEEE Trans. Circuits Systems I 50 (2003) 1162]. An example is given to illustrate that the criteria are feasible.

  6. On the minimum of independent geometrically distributed random variables

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David

    1994-01-01

    The expectations E(X_1), E(Z_1), and E(Y_1) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X_1)/E(Y_1) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in the minimum.
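
    A small simulation sketch, in Python, of the comparison discussed: the minimum of n i.i.d. geometric variables versus the minimum of exponential variables with matching means, together with the average number of ties at the geometric minimum; the parameter values are arbitrary.

      import numpy as np

      rng = np.random.default_rng(4)
      n, p, trials = 5, 0.3, 200000
      mean_match = 1.0 / p                     # match the geometric mean (support 1, 2, ...)

      geo = rng.geometric(p, size=(trials, n))
      exp = rng.exponential(scale=mean_match, size=(trials, n))

      min_geo = geo.min(axis=1)
      min_exp = exp.min(axis=1)
      ties = (geo == min_geo[:, None]).sum(axis=1)   # how many of the n variables attain the minimum

      print("E[min geometric]   =", min_geo.mean())
      print("E[min exponential] =", min_exp.mean())
      print("ratio =", min_geo.mean() / min_exp.mean(), "  mean ties at minimum =", ties.mean())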

  7. Estimation for coefficient of variation of an extension of the exponential distribution under type-II censoring scheme

    NASA Astrophysics Data System (ADS)

    Bakoban, Rana A.

    2017-08-01

    The coefficient of variation (CV) has several applications in applied statistics. In this paper, we therefore adopt Bayesian and non-Bayesian approaches to the estimation of the CV under type-II censored data from the extension of the exponential distribution (EED). Point and interval estimates of the CV are obtained using both maximum likelihood and parametric bootstrap techniques. A Bayesian approach using an MCMC method is also presented. A real data set is presented and analyzed, and the results are used to assess the theoretical findings.

  8. Intermittent electron density and temperature fluctuations and associated fluxes in the Alcator C-Mod scrape-off layer

    NASA Astrophysics Data System (ADS)

    Kube, R.; Garcia, O. E.; Theodorsen, A.; Brunner, D.; Kuang, A. Q.; LaBombard, B.; Terry, J. L.

    2018-06-01

    The Alcator C-Mod mirror Langmuir probe system has been used to sample data time series of fluctuating plasma parameters in the outboard mid-plane far scrape-off layer. We present a statistical analysis of one-second-long time series of electron density, temperature, radial electric drift velocity and the corresponding particle and electron heat fluxes. These are sampled during stationary plasma conditions in an ohmically heated, lower single null diverted discharge. The electron density and temperature are strongly correlated and feature fluctuation statistics similar to the ion saturation current. Both electron density and temperature time series are dominated by intermittent, large-amplitude bursts with an exponential distribution of both burst amplitudes and waiting times between them. The characteristic time scale of the large-amplitude bursts is approximately 15 μs. Large-amplitude velocity fluctuations feature a slightly faster characteristic time scale and appear at a faster rate than electron density and temperature fluctuations. Describing these time series as a superposition of uncorrelated exponential pulses, we find that the probability distribution functions, power spectral densities and auto-correlation functions of the data time series agree well with predictions of the stochastic model. The electron particle and heat fluxes exhibit large-amplitude fluctuations. For this low-density plasma, the radial electron heat flux is dominated by convection, that is, by correlations of fluctuations in the electron density and radial velocity. Hot and dense blobs contribute only a minute fraction of the total fluctuation-driven heat flux.
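
    A hedged sketch, in Python, of the kind of stochastic model invoked here: a superposition of uncorrelated one-sided exponential pulses with exponentially distributed amplitudes and waiting times (a filtered Poisson process). The pulse duration, arrival rate and amplitude scale below are illustrative, not the fitted C-Mod values.

      import numpy as np

      def exponential_pulse_train(t, rate=0.05, tau_d=15.0, mean_amp=1.0, seed=0):
          # Superpose pulses A_k * exp(-(t - t_k)/tau_d) for t >= t_k, with Poisson arrival
          # times t_k (exponential waiting times) and exponentially distributed amplitudes A_k.
          rng = np.random.default_rng(seed)
          signal = np.zeros_like(t)
          t_k = 0.0
          while t_k < t[-1]:
              t_k += rng.exponential(1.0 / rate)
              amp = rng.exponential(mean_amp)
              mask = t >= t_k
              signal[mask] += amp * np.exp(-(t[mask] - t_k) / tau_d)
          return signal

      t = np.arange(0.0, 2e4, 1.0)          # time base in microseconds (hypothetical)
      x = exponential_pulse_train(t)
      print("mean:", x.mean(), " third central moment (skewness proxy):", ((x - x.mean())**3).mean())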

  9. Local spin dynamics at low temperature in the slowly relaxing molecular chain [Dy(hfac)3{NIT(C6H4OPh)}]: A μ+ spin relaxation study

    NASA Astrophysics Data System (ADS)

    Arosio, Paolo; Corti, Maurizio; Mariani, Manuel; Orsini, Francesco; Bogani, Lapo; Caneschi, Andrea; Lago, Jorge; Lascialfari, Alessandro

    2015-05-01

    The spin dynamics of the molecular magnetic chain [Dy(hfac)3{NIT(C6H4OPh)}] were investigated by means of the Muon Spin Relaxation (μ+SR) technique. This system consists of a magnetic lattice of alternating Dy(III) ions and radical spins, and exhibits single-chain-magnet behavior. The magnetic properties of [Dy(hfac)3{NIT(C6H4OPh)}] have been studied by measuring the magnetization vs. temperature at different applied magnetic fields (H = 5, 3500, and 16500 Oe) and by performing μ+SR experiments vs. temperature in zero field and in a longitudinal applied magnetic field H = 3500 Oe. The muon asymmetry P(t) was fitted by the sum of three components, two stretched-exponential decays with fast and intermediate relaxation times, and a third slow exponential decay. The temperature dependence of the spin dynamics has been determined by analyzing the muon longitudinal relaxation rate λinterm(T), associated with the intermediate relaxing component. The experimental λinterm(T) data were fitted with a corrected phenomenological Bloembergen-Purcell-Pound law by using a distribution of thermally activated correlation times, which average to τ = τ0 exp(Δ/kBT), corresponding to a distribution of energy barriers Δ. The correlation times can be associated with the spin freezing that occurs when the system condenses in the ground state.

  10. Deuteron spin-lattice relaxation in the presence of an activation energy distribution: application to methanols in zeolite NaX.

    PubMed

    Stoch, G; Ylinen, E E; Birczynski, A; Lalowicz, Z T; Góra-Marek, K; Punkkinen, M

    2013-02-01

    A new method is introduced for analyzing deuteron spin-lattice relaxation in molecular systems with a broad distribution of activation energies and correlation times. In such samples the magnetization recovery is strongly non-exponential but can be fitted quite accurately by three exponentials. The considered system may consist of molecular groups with different mobility. For each group a Gaussian distribution of the activation energy is introduced. By assuming for every subsystem three parameters - the mean activation energy E_0, the distribution width σ and the pre-exponential factor τ_0 of the Arrhenius equation defining the correlation time - the relaxation rate is calculated for every part of the distribution. Experiment-based limiting values allow the grouping of the rates into three classes. For each class the relaxation rate and weight are calculated and compared with experiment. The parameters E_0, σ and τ_0 are determined iteratively by repeating the whole cycle many times. The temperature dependence of the deuteron relaxation was observed in three samples containing CD3OH (200% and 100% loading) and CD3OD (200%) in NaX zeolite and analyzed by the described method between 20 K and 170 K. The obtained parameters, equal for all three samples, characterize the methyl and hydroxyl mobilities of the methanol molecules at two different locations. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Apparent power-law distributions in animal movements can arise from intraspecific interactions

    PubMed Central

    Breed, Greg A.; Severns, Paul M.; Edwards, Andrew M.

    2015-01-01

    Lévy flights have gained prominence for analysis of animal movement. In a Lévy flight, step-lengths are drawn from a heavy-tailed distribution such as a power law (PL), and a large number of empirical demonstrations have been published. Others, however, have suggested that animal movement is ill fit by PL distributions or contend a state-switching process better explains apparent Lévy flight movement patterns. We used a mix of direct behavioural observations and GPS tracking to understand step-length patterns in females of two related butterflies. We initially found movement in one species (Euphydryas editha taylori) was best fit by a bounded PL, evidence of a Lévy flight, while the other (Euphydryas phaeton) was best fit by an exponential distribution. Subsequent analyses introduced additional candidate models and used behavioural observations to sort steps based on intraspecific interactions (interactions were rare in E. phaeton but common in E. e. taylori). These analyses showed a mixed-exponential is favoured over the bounded PL for E. e. taylori and that when step-lengths were sorted into states based on the influence of harassing conspecific males, both states were best fit by simple exponential distributions. The direct behavioural observations allowed us to infer the underlying behavioural mechanism is a state-switching process driven by intraspecific interactions rather than a Lévy flight. PMID:25519992

  12. On the heterogeneity of fluorescence lifetime of room temperature ionic liquids: onset of a journey for exploring red emitting dyes.

    PubMed

    Ghosh, Anup; Chatterjee, Tanmay; Mandal, Prasun K

    2012-06-25

    An excitation- and emission-wavelength-dependent non-exponential fluorescence decay behaviour of room temperature ionic liquids (RTILs) has been noted. Average fluorescence lifetimes have been found to vary by a factor of three or more. Red-emitting dyes dissolved in RTILs are found to follow hitherto unobserved single exponential fluorescence decay behaviour.

  13. Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan

    2017-12-01

    This paper presents the performance of an asymmetric mixed radio-frequency (RF)/free-space optical (FSO) system with an amplify-and-forward relaying scheme. The RF link undergoes Nakagami-m fading, and the Exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF) and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit-error rate (BER) based on the Meijer G-function. Evaluations and simulations of the system performance are provided, and the aperture-averaging effect is discussed as well.

  14. Race, gender and the econophysics of income distribution in the USA

    NASA Astrophysics Data System (ADS)

    Shaikh, Anwar; Papanikolaou, Nikolaos; Wiener, Noe

    2014-12-01

    The econophysics “two-class” theory of Yakovenko and his co-authors shows that the distribution of labor incomes is roughly exponential. This paper extends this result to US subgroups categorized by gender and race. It is well known that males have higher average incomes than females, and whites have higher average incomes than African-Americans. It is also evident that social policies can affect these income gaps. Our surprising finding is that the intra-group distributions of pre-tax labor incomes are nonetheless remarkably similar and remain close to exponential. This suggests that income inequality can be usefully addressed by taxation policies, and overall income inequality can be modified by also shifting the balance between labor and property incomes.

  15. Multi-Step Fibrinogen Binding to the Integrin αIIbβ3 Detected Using Force Spectroscopy

    PubMed Central

    Litvinov, Rustem I.; Bennett, Joel S.; Weisel, John W.; Shuman, Henry

    2005-01-01

    The regulated ability of integrin αIIbβ3 to bind fibrinogen plays a crucial role in platelet aggregation and hemostasis. We have developed a model system based on laser tweezers, enabling us to measure specific rupture forces needed to separate single receptor-ligand complexes. First of all, we performed a thorough and statistically representative analysis of nonspecific protein-protein binding versus specific αIIbβ3-fibrinogen interactions in combination with experimental evidence for single-molecule measurements. The rupture force distribution of purified αIIbβ3 and fibrinogen, covalently attached to underlying surfaces, ranged from ∼20 to 150 pN. This distribution could be fit with a sum of an exponential curve for weak to moderate (20–60 pN) forces, and a Gaussian curve for strong (>60 pN) rupture forces that peaked at 80–90 pN. The interactions corresponding to these rupture force regimes differed in their susceptibility to αIIbβ3 antagonists or Mn2+, an αIIbβ3 activator. Varying the surface density of fibrinogen changed the total binding probability linearly >3.5-fold but did not affect the shape of the rupture force distribution, indicating that the measurements represent single-molecule binding. The yield strength of αIIbβ3-fibrinogen interactions was independent of the loading rate (160–16,000 pN/s), whereas their binding probability markedly correlated with the duration of contact. The aggregate of data provides evidence for complex multi-step binding/unbinding pathways of αIIbβ3 and fibrinogen revealed at the single-molecule level. PMID:16040750

  16. A new approach to the extraction of single exponential diode model parameters

    NASA Astrophysics Data System (ADS)

    Ortiz-Conde, Adelmo; García-Sánchez, Francisco J.

    2018-06-01

    A new integration method is presented for the extraction of the parameters of a single exponential diode model with series resistance from measured forward I-V characteristics. The extraction is performed using auxiliary functions based on the integration of the data, which allow the effects of each model parameter to be isolated. A differentiation method is also presented for data with a low level of experimental noise. Measured and simulated data are used to verify the applicability of both proposed methods. Physical insight into the validity of the model is also obtained from the proposed graphical determination of the parameters.
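
    As a hedged illustration (not the integral-based extraction proposed in the paper), the single-exponential diode model with series resistance, I = I0[exp((V − I·Rs)/(n·Vt)) − 1], can also be fitted directly by nonlinear least squares using its explicit Lambert-W solution; the synthetic data, noise level and starting values below are hypothetical.

      import numpy as np
      from scipy.special import lambertw
      from scipy.optimize import curve_fit

      VT = 0.02585   # thermal voltage near 300 K (V)

      def diode_current(v, i0, n, rs):
          # Explicit solution of I = I0*(exp((V - I*Rs)/(n*VT)) - 1) via the Lambert W function.
          nvt = n * VT
          arg = (i0 * rs / nvt) * np.exp((v + i0 * rs) / nvt)
          return (nvt / rs) * np.real(lambertw(arg)) - i0

      v = np.linspace(0.1, 0.8, 50)
      rng = np.random.default_rng(5)
      i_meas = diode_current(v, 1e-9, 1.5, 2.0) * (1.0 + 0.01 * rng.normal(size=v.size))

      popt, _ = curve_fit(diode_current, v, i_meas, p0=(1e-10, 1.2, 1.0),
                          bounds=([1e-15, 1.0, 1e-3], [1e-6, 3.0, 100.0]))
      print("I0, n, Rs =", popt)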

  17. Making sense of snapshot data: ergodic principle for clonal cell populations

    PubMed Central

    2017-01-01

    Population growth is often ignored when quantifying gene expression levels across clonal cell populations. We develop a framework for obtaining the molecule number distributions in an exponentially growing cell population taking into account its age structure. In the presence of generation time variability, the average acquired across a population snapshot does not obey the average of a dividing cell over time, apparently contradicting ergodicity between single cells and the population. Instead, we show that the variation observed across snapshots with known cell age is captured by cell histories, a single-cell measure obtained from tracking an arbitrary cell of the population back to the ancestor from which it originated. The correspondence between cells of known age in a population with their histories represents an ergodic principle that provides a new interpretation of population snapshot data. We illustrate the principle using analytical solutions of stochastic gene expression models in cell populations with arbitrary generation time distributions. We further elucidate that the principle breaks down for biochemical reactions that are under selection, such as the expression of genes conveying antibiotic resistance, which gives rise to an experimental criterion with which to probe selection on gene expression fluctuations. PMID:29187636

  18. Making sense of snapshot data: ergodic principle for clonal cell populations.

    PubMed

    Thomas, Philipp

    2017-11-01

    Population growth is often ignored when quantifying gene expression levels across clonal cell populations. We develop a framework for obtaining the molecule number distributions in an exponentially growing cell population taking into account its age structure. In the presence of generation time variability, the average acquired across a population snapshot does not obey the average of a dividing cell over time, apparently contradicting ergodicity between single cells and the population. Instead, we show that the variation observed across snapshots with known cell age is captured by cell histories, a single-cell measure obtained from tracking an arbitrary cell of the population back to the ancestor from which it originated. The correspondence between cells of known age in a population with their histories represents an ergodic principle that provides a new interpretation of population snapshot data. We illustrate the principle using analytical solutions of stochastic gene expression models in cell populations with arbitrary generation time distributions. We further elucidate that the principle breaks down for biochemical reactions that are under selection, such as the expression of genes conveying antibiotic resistance, which gives rise to an experimental criterion with which to probe selection on gene expression fluctuations. © 2017 The Author(s).

  19. Autoregressive processes with exponentially decaying probability distribution functions: applications to daily variations of a stock market index.

    PubMed

    Porto, Markus; Roman, H Eduardo

    2002-04-01

    We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, as σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially as P(y) ~ exp(−α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. We find stretched exponential decay for 1 < q < 2 and stretched Gaussian behavior for 0 < q < 1. As an application, we consider the case q = 1 as our starting scheme for modeling the PDF of daily (logarithmic) variations of the Dow Jones stock market index. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretching exponent β = 2/3, in much better agreement with the empirical data.
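
    A short simulation sketch, in Python, of the linear-variance ARCH process described, σ²_t = a + b|y_{t−1}|, which can be used to check the exponential tail P(y) ~ exp(−2|y|/b) numerically; the parameter values are illustrative.

      import numpy as np

      def linear_arch(a=0.1, b=1.0, n=200000, seed=6):
          # y_t = sigma_t * eps_t with sigma_t^2 = a + b*|y_{t-1}| and eps_t ~ N(0, 1).
          rng = np.random.default_rng(seed)
          y = np.zeros(n)
          for t in range(1, n):
              sigma2 = a + b * abs(y[t - 1])
              y[t] = np.sqrt(sigma2) * rng.normal()
          return y

      y = linear_arch()
      # Crude tail check: the log-histogram of |y| should be roughly linear with slope
      # close to -alpha = -2/b for large |y|.
      hist, edges = np.histogram(np.abs(y), bins=60, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      nonzero = hist > 0
      slope = np.polyfit(centers[nonzero][-20:], np.log(hist[nonzero][-20:]), 1)[0]
      print("tail slope ≈", slope, " (compare with -2/b =", -2.0 / 1.0, ")")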

  20. Statistical modeling of storm-level Kp occurrences

    USGS Publications Warehouse

    Remick, K.J.; Love, J.J.

    2006-01-01

    We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events follow an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short-duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait-time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding these Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
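
    A hedged sketch, in Python, of this kind of wait-time analysis: compute the gaps between successive threshold exceedances, optionally merge exceedances closer than a declustering window (to remove same-storm occurrences), and fit an exponential by maximum likelihood. The synthetic Kp-like series and the one-day window are illustrative choices, not those of the paper.

      import numpy as np
      from scipy import stats

      def wait_times(times, values, threshold, decluster_days=None):
          # Times (in days) at which the index reaches the threshold, then the gaps between them.
          t_exceed = times[values >= threshold]
          if decluster_days is not None and t_exceed.size > 1:
              keep = np.concatenate(([True], np.diff(t_exceed) > decluster_days))
              t_exceed = t_exceed[keep]
          return np.diff(t_exceed)

      # Synthetic 3-hourly "Kp-like" record over ~30 years (illustrative only).
      rng = np.random.default_rng(7)
      t = np.arange(0, 30 * 365.25, 0.125)
      kp = rng.gamma(2.0, 1.2, size=t.size)

      w = wait_times(t, kp, threshold=8.0, decluster_days=1.0)
      lam = 1.0 / w.mean()                                  # MLE rate of the exponential
      ks = stats.kstest(w, "expon", args=(0, w.mean()))
      print("mean wait (days):", w.mean(), " KS p-value:", ks.pvalue)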

  1. An efficient and accurate technique to compute the absorption, emission, and transmission of radiation by the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.

    1990-01-01

    CO2 comprises 95% of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content. Dust particle sizes vary from less than 0.2 to greater than 3.0. CO2 is an active absorber and emitter at near-IR and IR wavelengths; the near-IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15 micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult. Aerosol radiative transfer requires a multiple-scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment used in most models, which causes inaccuracies, a treatment called the exponential-sum or k-distribution approximation was developed. The chief advantage of the exponential-sum approach is that the integration of f(k) over k space can be computed more quickly than the integration of k_ν over frequency. The exponential-sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential-sum approach to Martian conditions.
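
    A brief sketch, in Python, of the idea behind the exponential-sum (k-distribution) approximation: instead of integrating the transmission exp(−k_ν·u) over a finely resolved spectrum, the absorption coefficients within a band are re-ordered into a cumulative distribution g(k), and the band transmission is computed as a short weighted sum of exponentials. The absorption spectrum used below is synthetic, not a Martian CO2 band.

      import numpy as np

      rng = np.random.default_rng(8)
      k_nu = rng.lognormal(mean=0.0, sigma=2.0, size=20000)   # synthetic line-by-line absorption coefficients
      u = 0.5                                                  # absorber amount (arbitrary units)

      # Line-by-line reference: average transmission over the band.
      T_lbl = np.mean(np.exp(-k_nu * u))

      # k-distribution: sort the coefficients, then integrate over g in [0, 1] with a few quadrature points.
      k_sorted = np.sort(k_nu)
      nodes, weights = np.polynomial.legendre.leggauss(8)
      g = 0.5 * (nodes + 1.0)                                  # map Gauss-Legendre nodes to (0, 1)
      w = 0.5 * weights
      k_g = np.interp(g, np.linspace(0.0, 1.0, k_sorted.size), k_sorted)
      T_kdist = np.sum(w * np.exp(-k_g * u))

      print("line-by-line:", T_lbl, "  exponential sum (8 terms):", T_kdist)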

  2. OSSOS. II. A Sharp Transition in the Absolute Magnitude Distribution of the Kuiper Belt’s Scattering Population

    NASA Astrophysics Data System (ADS)

    Shankman, C.; Kavelaars, JJ.; Gladman, B. J.; Alexandersen, M.; Kaib, N.; Petit, J.-M.; Bannister, M. T.; Chen, Y.-T.; Gwyn, S.; Jakubik, M.; Volk, K.

    2016-02-01

    We measure the absolute magnitude, H, distribution, dN(H) ∝ 10^(αH), of the scattering Trans-Neptunian Objects (TNOs) as a proxy for their size-frequency distribution. We show that the H-distribution of the scattering TNOs is not consistent with a single-slope distribution, but must transition around H_g ~ 9 to either a knee with a shallow slope or to a divot, which is a differential drop followed by a second exponential distribution. Our analysis is based on a sample of 22 scattering TNOs drawn from three different TNO surveys—the Canada-France Ecliptic Plane Survey, Alexandersen et al., and the Outer Solar System Origins Survey, all of which provide well-characterized detection thresholds—combined with a cosmogonic model for the formation of the scattering TNO population. Our measured absolute magnitude distribution result is independent of the choice of cosmogonic model. Based on our analysis, we estimate that the number of scattering TNOs is (2.4-8.3) × 10^5 for H_r < 12. A divot H-distribution is seen in a variety of formation scenarios and may explain several puzzles in Kuiper Belt science. We find that a divot H-distribution simultaneously explains the observed scattering TNO, Neptune Trojan, Plutino, and Centaur H-distributions while predicting a scattering TNO population large enough to act as the sole supply of the Jupiter-Family Comets.

  3. Modulation of lens cell adhesion molecules by particle beams

    NASA Technical Reports Server (NTRS)

    McNamara, M. P.; Bjornstad, K. A.; Chang, P. Y.; Chou, W.; Lockett, S. J.; Blakely, E. A.

    2001-01-01

    Cell adhesion molecules (CAMs) are proteins which anchor cells to each other and to the extracellular matrix (ECM), but whose functions also include signal transduction, differentiation, and apoptosis. We are testing a hypothesis that particle radiations modulate CAM expression and this contributes to radiation-induced lens opacification. We observed dose-dependent changes in the expression of beta 1-integrin and ICAM-1 in exponentially-growing and confluent cells of a differentiating human lens epithelial cell model after exposure to particle beams. Human lens epithelial (HLE) cells, less than 10 passages after their initial culture from fetal tissue, were grown on bovine corneal endothelial cell-derived ECM in medium containing 15% fetal bovine serum and supplemented with 5 ng/ml basic fibroblast growth factor (FGF-2). Multiple cell populations at three different stages of differentiation were prepared for experiment: cells in exponential growth, and cells at 5 and 10 days post-confluence. The differentiation status of cells was characterized morphologically by digital image analysis, and biochemically by Western blotting using lens epithelial and fiber cell-specific markers. Cultures were irradiated with single doses (4, 8 or 12 Gy) of 55 MeV protons and, along with unirradiated control samples, were fixed using -20 degrees C methanol at 6 hours after exposure. Replicate experiments and similar experiments with helium ions are in progress. The intracellular localization of beta 1-integrin and ICAM-1 was detected by immunofluorescence using monoclonal antibodies specific for each CAM. Cells known to express each CAM were also processed as positive controls. Both exponentially-growing and confluent, differentiating cells demonstrated a dramatic proton-dose-dependent modulation (upregulation for exponential cells, downregulation for confluent cells) and a change in the intracellular distribution of the beta 1-integrin, compared to unirradiated controls. In contrast, there was a dose-dependent increase in ICAM-1 immunofluorescence in confluent, but not exponentially-growing cells. These results suggest that proton irradiation downregulates beta 1-integrin and upregulates ICAM-1, potentially contributing to cell death or to aberrant differentiation via modulation of anchorage and/or signal transduction functions. Quantification of the expression levels of the CAMs by Western analysis is in progress.

  4. Braid Entropy of Two-Dimensional Turbulence

    NASA Astrophysics Data System (ADS)

    Francois, Nicolas; Xia, Hua; Punzmann, Horst; Faber, Benjamin; Shats, Michael

    2015-12-01

    The evolving shape of material fluid lines in a flow underlies the quantitative prediction of the dissipation and material transport in many industrial and natural processes. However, collecting quantitative data on this dynamics remains an experimental challenge in particular in turbulent flows. Indeed the deformation of a fluid line, induced by its successive stretching and folding, can be difficult to determine because such description ultimately relies on often inaccessible multi-particle information. Here we report laboratory measurements in two-dimensional turbulence that offer an alternative topological viewpoint on this issue. This approach characterizes the dynamics of a braid of Lagrangian trajectories through a global measure of their entanglement. The topological length of material fluid lines can be derived from these braids. This length is found to grow exponentially with time, giving access to the braid topological entropy . The entropy increases as the square root of the turbulent kinetic energy and is directly related to the single-particle dispersion coefficient. At long times, the probability distribution of is positively skewed and shows strong exponential tails. Our results suggest that may serve as a measure of the irreversibility of turbulence based on minimal principles and sparse Lagrangian data.

  5. Prony series spectra of structural relaxation in N-BK7 for finite element modeling.

    PubMed

    Koontz, Erick; Blouin, Vincent; Wachtel, Peter; Musgraves, J David; Richardson, Kathleen

    2012-12-20

    Structural relaxation behavior of N-BK7 glass was characterized at temperatures 20 °C above and below T12 for this glass, using a thermomechanical analyzer (TMA). T12 is a characteristic temperature corresponding to a viscosity of 10^12 Pa·s. The glass was subjected to quick temperature down-jumps preceded and followed by long isothermal holds. The exponential-like decay of the sample height was recorded and fitted using a unique Prony series method. The result of this method is a plot of the fit parameters revealing the presence of four distinct peaks, or distributions of relaxation times. The number of relaxation times decreased as the final test temperature was increased. The relaxation times did not shift significantly with changing temperature; however, the Prony weight terms varied essentially linearly with temperature. It was also found that the structural relaxation behavior of the glass trended toward single exponential behavior at temperatures above the testing range. The result of the analysis is a temperature-dependent Prony series model that can be used in finite element modeling of glass behavior in processes such as precision glass molding (PGM).
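
    A hedged sketch, in Python, of a Prony-series fit of the general kind described: the relaxation times are fixed on a logarithmic grid and the nonnegative Prony weights are obtained by linear least squares, so that peaks in the resulting weight spectrum mark groups of relaxation times. The synthetic relaxation record and the grid are illustrative, not the TMA data of the paper.

      import numpy as np
      from scipy.optimize import nnls

      # Synthetic normalized relaxation record (e.g. sample-height decay after a temperature jump).
      t = np.linspace(0.0, 4000.0, 800)
      rng = np.random.default_rng(9)
      h = 0.7 * np.exp(-t / 150.0) + 0.3 * np.exp(-t / 1500.0) + rng.normal(0, 2e-3, t.size)

      # Prony basis with fixed, logarithmically spaced relaxation times; solve for nonnegative weights.
      taus = np.logspace(0.5, 4.0, 40)
      basis = np.exp(-t[:, None] / taus[None, :])
      weights, residual = nnls(basis, h)

      for tau, wgt in zip(taus, weights):
          if wgt > 1e-3:
              print(f"tau = {tau:9.1f}   weight = {wgt:.3f}")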

  6. Exponential fading to white of black holes in quantum gravity

    NASA Astrophysics Data System (ADS)

    Barceló, Carlos; Carballo-Rubio, Raúl; Garay, Luis J.

    2017-05-01

    Quantization of the gravitational field may allow the existence of a decay channel of black holes into white holes with an explicit time-reversal symmetry. The definition of a meaningful decay probability for this channel is studied in spherically symmetric situations. As a first nontrivial calculation, we present the functional integration over a set of geometries using a single-variable function to interpolate between black-hole and white-hole geometries in a bounded region of spacetime. This computation gives a finite result which depends only on the Schwarzschild mass and a parameter measuring the width of the interpolating region. The associated probability distribution displays an exponential decay law in the latter parameter, with a mean lifetime inversely proportional to the Schwarzschild mass. In physical terms this would imply that matter collapsing to a black hole from a finite radius bounces back elastically and instantaneously, with negligible time delay as measured by external observers. These results invite a reconsideration of the ultimate nature of astrophysical black holes, providing a possible mechanism for the formation of black stars instead of proper general relativistic black holes. The existence of both this decay channel and black stars can be tested in future observations of gravitational waves.

  7. Braid Entropy of Two-Dimensional Turbulence

    PubMed Central

    Francois, Nicolas; Xia, Hua; Punzmann, Horst; Faber, Benjamin; Shats, Michael

    2015-01-01

    The evolving shape of material fluid lines in a flow underlies the quantitative prediction of the dissipation and material transport in many industrial and natural processes. However, collecting quantitative data on these dynamics remains an experimental challenge, in particular in turbulent flows. Indeed, the deformation of a fluid line, induced by its successive stretching and folding, can be difficult to determine because such a description ultimately relies on often inaccessible multi-particle information. Here we report laboratory measurements in two-dimensional turbulence that offer an alternative topological viewpoint on this issue. This approach characterizes the dynamics of a braid of Lagrangian trajectories through a global measure of their entanglement. The topological length of material fluid lines can be derived from these braids. This length is found to grow exponentially with time, giving access to the braid topological entropy. The entropy increases as the square root of the turbulent kinetic energy and is directly related to the single-particle dispersion coefficient. At long times, the probability distribution of the topological entropy is positively skewed and shows strong exponential tails. Our results suggest that this entropy may serve as a measure of the irreversibility of turbulence based on minimal principles and sparse Lagrangian data. PMID:26689261

  8. Wealth of the world's richest publicly traded companies per industry and per employee: Gamma, Log-normal and Pareto power-law as universal distributions?

    NASA Astrophysics Data System (ADS)

    Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.

    2017-04-01

    Forbes Magazine published its list of the two thousand leading or strongest publicly traded companies in the world (G-2000), based on four independent metrics: sales or revenues, profits, assets and market value. Each of these wealth metrics yields particular information on the corporate size or wealth of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part, and a Pareto power-law in the higher part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the high-class Pareto zone is about 49% for sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or Log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, the 82 points of the aggregate wealth distribution by industry per employee can be well fitted by quasi-exponential curves for the four metrics.

  9. Estimation of total discharged mass from the phreatic eruption of Ontake Volcano, central Japan, on September 27, 2014

    NASA Astrophysics Data System (ADS)

    Takarada, Shinji; Oikawa, Teruki; Furukawa, Ryuta; Hoshizumi, Hideo; Itoh, Jun'ichi; Geshi, Nobuo; Miyagi, Isoji

    2016-08-01

    The total mass discharged by the phreatic eruption of Ontake Volcano, central Japan, on September 27, 2014, was estimated using several methods. The estimated discharged mass was 1.2 × 10^6 t (segment integration method), 8.9 × 10^5 t (Pyle's exponential method), and varied from 8.6 × 10^3 to 2.5 × 10^6 t (Hayakawa's single isopach method). The segment integration and Pyle's exponential methods gave similar values. The single isopach method, however, gave a wide range of results depending on which contour was used. Therefore, the total discharged mass of the 2014 eruption is estimated at between 8.9 × 10^5 and 1.2 × 10^6 t. More than 90% of the total mass accumulated within the proximal area. This shows how important it is to include a proximal area field survey in the total mass estimation of phreatic eruptions. A detailed isopleth mass distribution map was prepared covering as far as 85 km from the source. The main ash-fall dispersal was ENE in the proximal and medial areas and E in the distal area. The secondary distribution lobes also extended to the S and NW proximally, reflecting the effects of elutriation ash and surge deposits from pyroclastic density currents during the phreatic eruption. The total discharged mass of the 1979 phreatic eruption was also calculated for comparison. The resulting estimate of 1.9 × 10^6 t (using the segment integration method) indicates that it was about 1.6-2.1 times larger than the 2014 eruption. The estimated average discharged mass flux rate of the 2014 eruption was 1.7 × 10^8 kg/h and for the 1979 eruption was 1.0 × 10^8 kg/h. One of the possible reasons for the higher flux rate of the 2014 eruption is the occurrence of pyroclastic density currents at the summit area.

  10. Multi-exponential analysis of magnitude MR images using a quantitative multispectral edge-preserving filter.

    PubMed

    Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre

    2003-03-01

    A solution for discrete multi-exponential analysis of T2 relaxation decay curves obtained under current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise ratio threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a non-linear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.

  11. Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.

    PubMed

    Thiébaut, Anne C M; Bénichou, Jacques

    2004-12-30

    Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data. 2004 John Wiley & Sons, Ltd.

  12. Compressed exponential relaxation in liquid silicon: Universal feature of the crossover from ballistic to diffusive behavior in single-particle dynamics

    NASA Astrophysics Data System (ADS)

    Morishita, Tetsuya

    2012-07-01

    We report a first-principles molecular-dynamics study of the relaxation dynamics in liquid silicon (l-Si) over a wide temperature range (1000-2200 K). We find that the intermediate scattering function for l-Si exhibits a compressed exponential decay above 1200 K including the supercooled regime, which is in stark contrast to that for normal "dense" liquids which typically show stretched exponential decay in the supercooled regime. The coexistence of particles having ballistic-like motion and those having diffusive-like motion is demonstrated, which accounts for the compressed exponential decay in l-Si. An attempt to elucidate the crossover from the ballistic to the diffusive regime in the "time-dependent" diffusion coefficient is made and the temperature-independent universal feature of the crossover is disclosed.
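
    A minimal sketch of fitting a stretched/compressed (Kohlrausch-type) exponential to a decaying correlation function, assuming synthetic data and illustrative parameter values; a fitted exponent beta above one corresponds to the compressed decay discussed above.

      import numpy as np
      from scipy.optimize import curve_fit

      def kww(t, tau, beta):
          # Kohlrausch form: beta < 1 is stretched, beta > 1 is compressed.
          return np.exp(-(t / tau) ** beta)

      rng = np.random.default_rng(1)
      t = np.linspace(0.01, 5.0, 200)                       # hypothetical time axis (ps)
      f = kww(t, tau=1.2, beta=1.4) + rng.normal(0.0, 0.01, t.size)

      (tau_fit, beta_fit), _ = curve_fit(kww, t, f, p0=(1.0, 1.0))
      print(f"tau = {tau_fit:.2f}, beta = {beta_fit:.2f}",
            "(compressed)" if beta_fit > 1 else "(stretched)")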

  13. Evaluation of Mean and Variance Integrals without Integration

    ERIC Educational Resources Information Center

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since they involve integration by parts, many students do not feel comfortable. In this note, a technique is demonstrated for deriving mean and variance through differential…
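
    The note's own differentiation technique is not spelled out in this truncated abstract; one standard route that avoids integration by parts is to differentiate the moment generating function of the exponential density f(x) = λe^(-λx), x ≥ 0 (sketch in LaTeX):

      M(s) = \mathbb{E}\left[e^{sX}\right] = \frac{\lambda}{\lambda - s}, \quad s < \lambda,
      \qquad
      \mathbb{E}[X] = M'(0) = \frac{1}{\lambda}, \qquad
      \mathbb{E}[X^{2}] = M''(0) = \frac{2}{\lambda^{2}}, \qquad
      \operatorname{Var}(X) = \frac{2}{\lambda^{2}} - \frac{1}{\lambda^{2}} = \frac{1}{\lambda^{2}}.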

  14. Effect of Structural Relaxation on the In-Plane Electrical Resistance of Oxygen-Underdoped ReBaCuO (Re = Y, Ho) Single Crystals

    NASA Astrophysics Data System (ADS)

    Vovk, Ruslan V.; Vovk, Nikolaj R.; Dobrovolskiy, Oleksandr V.

    2014-05-01

    The effect of jumpwise temperature variation and room-temperature storage on the basal-plane electrical resistivity of underdoped ReBaCuO (Re = Y, Ho) single crystals is investigated. Reducing the oxygen content is revealed to lead to phase segregation accompanied by both labile-component diffusion and structural relaxation in the sample volume. Room-temperature storage of single crystals with different oxygen hypostoichiometries leads to a substantial widening of the rectilinear segment of the resistivity temperature dependence, in conjunction with a narrowing of the temperature range of existence of the pseudogap state. It is established that the excess conductivity obeys an exponential law in a broad temperature range, while the pseudogap's temperature dependence is described satisfactorily in the framework of the BCS-BEC crossover theory. Substituting yttrium with holmium essentially affects the charge distribution and the effective interaction in the CuO planes, thereby stimulating disordering processes in the oxygen subsystem. This is accompanied by a notable shift of the temperature zones corresponding to transitions of the metal-insulator type and to the regime of manifestation of the pseudogap anomaly.

  15. Quantifying short-lived events in multistate ionic current measurements.

    PubMed

    Balijepalli, Arvind; Ettedgui, Jessica; Cornio, Andrew T; Robertson, Joseph W F; Cheung, Kin P; Kasianowicz, John J; Vaz, Canute

    2014-02-25

    We developed a generalized technique to characterize polymer-nanopore interactions via single channel ionic current measurements. Physical interactions between analytes, such as DNA, proteins, or synthetic polymers, and a nanopore cause multiple discrete states in the current. We modeled the transitions of the current to individual states with an equivalent electrical circuit, which allowed us to describe the system response. This enabled the estimation of short-lived states that are presently not characterized by existing analysis techniques. Our approach considerably improves the range and resolution of single-molecule characterization with nanopores. For example, we characterized the residence times of synthetic polymers that are three times shorter than those estimated with existing algorithms. Because the molecule's residence time follows an exponential distribution, we recover nearly 20-fold more events per unit time that can be used for analysis. Furthermore, the measurement range was extended from 11 monomers to as few as 8. Finally, we applied this technique to recover a known sequence of single-stranded DNA from previously published ion channel recordings, identifying discrete current states with subpicoampere resolution.
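
    A small back-of-the-envelope calculation, with purely hypothetical numbers, of why resolving events three times shorter yields a large gain when residence times are exponentially distributed:

      import numpy as np

      tau = 3.3          # hypothetical mean residence time (arbitrary units)
      t_old = 15.0       # hypothetical shortest event resolvable by a conventional method
      t_new = t_old / 3  # three-fold shorter events resolved, as in the abstract

      # For exponential residence times, the fraction of events longer than a cutoff
      # t_c is exp(-t_c / tau); the usable-event gain is the ratio of those fractions.
      gain = np.exp(-t_new / tau) / np.exp(-t_old / tau)
      print(f"usable-event gain ~ {gain:.0f}x")   # ~20x for these illustrative numbers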

  16. Intra-Individual Response Variability Assessed by Ex-Gaussian Analysis may be a New Endophenotype for Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Henríquez-Henríquez, Marcela Patricia; Billeke, Pablo; Henríquez, Hugo; Zamorano, Francisco Javier; Rothhammer, Francisco; Aboitiz, Francisco

    2014-01-01

    Intra-individual variability of response times (RTisv) is considered a potential endophenotype for attention-deficit/hyperactivity disorder (ADHD). Traditional methods for estimating RTisv lose information regarding the distribution of response times (RTs) along the task, with possible effects on statistical power. Ex-Gaussian analysis captures the dynamic nature of RTisv, estimating normal and exponential components for the RT distribution, with specific phenomenological correlates. Here, we applied ex-Gaussian analysis to explore whether intra-individual variability of RTs agrees with the criteria proposed by Gottesman and Gould for endophenotypes. Specifically, we evaluated whether the normal and/or exponential components of RTs may (a) present the stair-like distribution expected for endophenotypes (ADHD > siblings > typically developing children (TD) without familial history of ADHD) and (b) represent a phenotypic correlate for previously described genetic risk variants. This is a pilot study including 55 subjects (20 ADHD-discordant sibling-pairs and 15 TD children), all aged between 8 and 13 years. Participants performed a visual Go/Nogo task with 10% Nogo probability. Ex-Gaussian distributions were fitted to individual RT data and compared among the three samples. In order to test whether intra-individual variability may represent a correlate for previously described genetic risk variants, VNTRs at DRD4 and SLC6A3 were identified in all sibling-pairs following standard protocols. Groups were compared by fitting independent general linear models for the exponential and normal components from the ex-Gaussian analysis. Identified trends were confirmed by the non-parametric Jonckheere-Terpstra test. Stair-like distributions were observed for μ (p = 0.036) and σ (p = 0.009). An additional "DRD4-genotype" × "clinical status" interaction was present for τ (p = 0.014), reflecting a possible severity factor. Thus, normal and exponential RTisv components are suitable as ADHD endophenotypes.
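
    A minimal sketch of an ex-Gaussian fit using SciPy, which parameterizes the distribution as exponnorm(K, loc, scale) with mu = loc, sigma = scale and tau = K * sigma; the simulated reaction times and parameter values below are hypothetical.

      import numpy as np
      from scipy import stats

      # Hypothetical reaction-time sample (ms): Gaussian component plus exponential tail.
      rng = np.random.default_rng(2)
      rt = rng.normal(450.0, 60.0, 500) + rng.exponential(120.0, 500)

      K, mu, sigma = stats.exponnorm.fit(rt)
      tau = K * sigma
      print(f"mu = {mu:.0f} ms, sigma = {sigma:.0f} ms, tau = {tau:.0f} ms")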

  17. Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, Chen; Sichitiu, Mihail L.

    Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous research on the theoretical analysis of the contact time distribution for random walk models (RW) assumes that contact events can be modeled as either consecutive random walks or direct traversals, which are two extreme cases of random walk, and thus arrives at two different conclusions. In this paper we conduct a comprehensive study of this topic in the hope of bridging the gap between the two extremes. The conclusions from the two extreme cases result in a power-law or an exponential tail in the contact time distribution, respectively. However, we show that the actual distribution varies between the two extremes: a power-law-sub-exponential dichotomy, whose transition point depends on the average flight duration. Through simulation results we show that this conclusion also applies to random waypoint.

  18. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    USGS Publications Warehouse

    Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
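
    As a compact reminder of the constraint-based reasoning above, the textbook maximum-entropy derivation of the exponential form for a non-negative variable with a fixed mean runs (in LaTeX):

      \max_{f}\; -\int_{0}^{\infty} f(x)\,\ln f(x)\,dx
      \quad \text{s.t.} \quad
      \int_{0}^{\infty} f(x)\,dx = 1, \qquad \int_{0}^{\infty} x\,f(x)\,dx = \mu
      \;\;\Rightarrow\;\; \ln f(x) = -1 - \lambda_{0} - \lambda_{1}x
      \;\;\Rightarrow\;\; f(x) = \frac{1}{\mu}\,e^{-x/\mu}.

    Constraining the mean absolute value of a variable on the whole real line instead yields the Laplace form quoted for the particle accelerations.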

  19. Time Correlations in Mode Hopping of Coupled Oscillators

    NASA Astrophysics Data System (ADS)

    Heltberg, Mathias L.; Krishna, Sandeep; Jensen, Mogens H.

    2017-05-01

    We study the dynamics in a system of coupled oscillators when Arnold tongues overlap. By varying the initial conditions, the deterministic system can be attracted to different limit cycles. When noise is added, mode hopping between different states becomes a dominant part of the dynamics. We simplify the system through a Poincaré section and derive a 1D model to describe the dynamics. We explain that for some parameter values of the external oscillator, the distribution of occupancy times in a state is exponential and thus memoryless. In the general case, on the other hand, it is a sum of exponential distributions, characteristic of a system with time correlations.

  20. Exponential Stability of Almost Periodic Solutions for Memristor-Based Neural Networks with Distributed Leakage Delays.

    PubMed

    Xu, Changjin; Li, Peiluan; Pang, Yicheng

    2016-12-01

    In this letter, we deal with a class of memristor-based neural networks with distributed leakage delays. By applying a new Lyapunov function method, we obtain some sufficient conditions that ensure the existence, uniqueness, and global exponential stability of almost periodic solutions of these neural networks. We then apply these results to prove the existence and stability of periodic solutions for the delayed neural network with periodic coefficients. We provide an example to illustrate the effectiveness of the theoretical results. Our results are completely new and complement the previous studies of Chen, Zeng, and Jiang (2014) and Jiang, Zeng, and Chen (2015).

  1. The diffusion of a Ga atom on GaAs(001)β2(2 × 4): Local superbasin kinetic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Lin, Yangzheng; Fichthorn, Kristen A.

    2017-10-01

    We use first-principles density-functional theory to characterize the binding sites and diffusion mechanisms for a Ga adatom on the GaAs(001)β 2(2 × 4) surface. Diffusion in this system is a complex process involving eleven unique binding sites and sixteen different hops between neighboring binding sites. Among the binding sites, we can identify four different superbasins such that the motion between binding sites within a superbasin is much faster than hops exiting the superbasin. To describe diffusion, we use a recently developed local superbasin kinetic Monte Carlo (LSKMC) method, which accelerates a conventional kinetic Monte Carlo (KMC) simulation by describing the superbasins as absorbing Markov chains. We find that LSKMC is up to 4300 times faster than KMC for the conditions probed in this study. We characterize the distribution of exit times from the superbasins and find that these are sometimes, but not always, exponential and we characterize the conditions under which the superbasin exit-time distribution should be exponential. We demonstrate that LSKMC simulations assuming an exponential superbasin exit-time distribution yield the same diffusion coefficients as conventional KMC.
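
    The following Python sketch illustrates, in a generic and hedged way (it is not the LSKMC implementation of the paper), the absorbing-Markov-chain bookkeeping behind a superbasin: mean exit times from the fundamental matrix, plus a small kinetic Monte Carlo sample whose coefficient of variation indicates how close the exit-time distribution is to a single exponential. All states and rates are hypothetical.

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical 3-state superbasin (states 0-2) with fast internal hops and
      # slow escape to the absorbing "exit" state 3; entries are rates in 1/s.
      rates = np.array([[0.0, 50.0, 10.0, 0.2],
                        [40.0, 0.0, 30.0, 0.1],
                        [15.0, 25.0, 0.0, 0.5],
                        [0.0,  0.0,  0.0, 0.0]])
      out = rates.sum(axis=1)

      # Embedded jump chain and fundamental matrix give the mean exit times exactly.
      P = np.zeros_like(rates)
      P[:3] = rates[:3] / out[:3, None]
      Q = P[:3, :3]
      N = np.linalg.inv(np.eye(3) - Q)
      print("mean exit times:", N @ (1.0 / out[:3]))

      # Kinetic Monte Carlo sample of exit times starting from state 0.
      def sample_exit(start=0, n=2000):
          times = np.empty(n)
          for i in range(n):
              s, t = start, 0.0
              while s != 3:
                  t += rng.exponential(1.0 / out[s])
                  s = rng.choice(4, p=P[s])
              times[i] = t
          return times

      t_exit = sample_exit()
      # A coefficient of variation near 1 indicates a nearly exponential exit-time law.
      print("sampled mean:", t_exit.mean(), "  CV:", t_exit.std() / t_exit.mean())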

  2. Image reconstruction from cone-beam projections with attenuation correction

    NASA Astrophysics Data System (ADS)

    Weng, Yi

    1997-07-01

    In single photon emission computed tomography (SPECT) imaging, photon attenuation within the body is a major factor contributing to the quantitative inaccuracy in measuring the distribution of radioactivity. Cone-beam SPECT provides improved sensitivity for imaging small organs. This thesis extends the results for 2D parallel-beam and fan-beam geometry to 3D parallel-beam and cone-beam geometries in order to derive filtered backprojection reconstruction algorithms for the 3D exponential parallel-beam transform and for the exponential cone-beam transform with sampling on a sphere. An exact inversion formula for the 3D exponential parallel-beam transform is obtained and is extended to the 3D exponential cone-beam transform. Sampling on a sphere is not useful clinically, and current cone-beam tomography, with the focal point traversing a planar orbit, does not acquire sufficient data to give an accurate reconstruction. Thus a data acquisition method was developed that obtains complete data for cone-beam SPECT by simultaneously rotating the gamma camera and translating the patient bed, so that cone-beam projections can be obtained with the focal point traversing a helix that surrounds the patient. First, an implementation of Grangeat's algorithm for helical cone-beam projections was developed without attenuation correction. A fast new rebinning scheme was developed that uses all of the detected data to reconstruct the image and properly normalizes any multiply scanned data. In the case of attenuation, no theorem analogous to Tuy's has been proven. We hypothesized that an artifact-free reconstruction could be obtained even if the cone-beam data are attenuated, provided the imaging orbit satisfies Tuy's condition and the exact attenuation map is known. Cone-beam emission data were acquired by using a circle-and-line and a helix orbit on a clinical SPECT system. An iterative conjugate gradient reconstruction algorithm was used to reconstruct projection data with a known attenuation map. The quantitative accuracy of the attenuation-corrected emission reconstruction was significantly improved.

  3. Diffusion weighted imaging in patients with rectal cancer: Comparison between Gaussian and non-Gaussian models

    PubMed Central

    Marias, Kostas; Lambregts, Doenja M. J.; Nikiforaki, Katerina; van Heeswijk, Miriam M.; Bakers, Frans C. H.; Beets-Tan, Regina G. H.

    2017-01-01

    Purpose: The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential, both Gaussian and non-Gaussian, in diffusion weighted imaging of rectal cancer. Material and methods: Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm^2) on a 1.5 T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied on whole tumor volumes of interest. Two different statistical criteria were used to assess their fitting performance, including the adjusted R^2 and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. Results: All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was illustrated in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and in the overall tumor area. Conclusion: No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior. PMID:28863161

  4. Diffusion weighted imaging in patients with rectal cancer: Comparison between Gaussian and non-Gaussian models.

    PubMed

    Manikis, Georgios C; Marias, Kostas; Lambregts, Doenja M J; Nikiforaki, Katerina; van Heeswijk, Miriam M; Bakers, Frans C H; Beets-Tan, Regina G H; Papanikolaou, Nikolaos

    2017-01-01

    The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential, both Gaussian and non-Gaussian, in diffusion weighted imaging of rectal cancer. Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm^2) on a 1.5 T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied on whole tumor volumes of interest. Two different statistical criteria were used to assess their fitting performance, including the adjusted R^2 and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was illustrated in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and in the overall tumor area. No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior.
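
    A minimal sketch, with hypothetical signal values, of how mono- and bi-exponential fits can be compared through AIC computed from residual sums of squares and converted to Akaike weights (for very few data points the small-sample AICc correction is often preferred):

      import numpy as np
      from scipy.optimize import curve_fit

      def mono(b, S0, D):
          return S0 * np.exp(-b * D)

      def bi(b, S0, f, Dfast, Dslow):
          return S0 * (f * np.exp(-b * Dfast) + (1 - f) * np.exp(-b * Dslow))

      # Hypothetical voxel signal at the b-values quoted in the abstract (s/mm^2).
      b = np.array([0, 25, 50, 100, 500, 1000, 2000], float)
      rng = np.random.default_rng(4)
      S = bi(b, 1.0, 0.2, 0.015, 0.0009) + rng.normal(0.0, 0.01, b.size)

      def aic(y, yhat, k):
          rss = np.sum((y - yhat) ** 2)
          n = y.size
          return n * np.log(rss / n) + 2 * k      # least-squares form of AIC

      p1, _ = curve_fit(mono, b, S, p0=(1.0, 0.001))
      p2, _ = curve_fit(bi, b, S, p0=(1.0, 0.3, 0.01, 0.001), maxfev=5000)
      aics = np.array([aic(S, mono(b, *p1), 2), aic(S, bi(b, *p2), 4)])

      delta = aics - aics.min()
      weights = np.exp(-delta / 2) / np.exp(-delta / 2).sum()   # Akaike weights
      print(dict(zip(["mono", "bi"], np.round(weights, 3))))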

  5. Average BER of subcarrier intensity modulated free space optical systems over the exponentiated Weibull fading channels.

    PubMed

    Wang, Ping; Zhang, Lu; Guo, Lixin; Huang, Feng; Shang, Tao; Wang, Ranran; Yang, Yintang

    2014-08-25

    The average bit error rate (BER) for binary phase-shift keying (BPSK) modulation in free-space optical (FSO) links over turbulence atmosphere modeled by the exponentiated Weibull (EW) distribution is investigated in detail. The effects of aperture averaging on the average BERs for BPSK modulation under weak-to-strong turbulence conditions are studied. The average BERs of EW distribution are compared with Lognormal (LN) and Gamma-Gamma (GG) distributions in weak and strong turbulence atmosphere, respectively. The outage probability is also obtained for different turbulence strengths and receiver aperture sizes. The analytical results deduced by the generalized Gauss-Laguerre quadrature rule are verified by the Monte Carlo simulation. This work is helpful for the design of receivers for FSO communication systems.

  6. Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Thayer, Dorothy T.

    2000-01-01

    Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…

  7. Quantum cryptography with finite resources: unconditional security bound for discrete-variable protocols with one-way postprocessing.

    PubMed

    Scarani, Valerio; Renner, Renato

    2008-05-23

    We derive a bound for the security of quantum key distribution with finite resources under one-way postprocessing, based on a definition of security that is composable and has an operational meaning. While our proof relies on the assumption of collective attacks, unconditional security follows immediately for standard protocols such as Bennett-Brassard 1984 and the six-state protocol. For single-qubit implementations of such protocols, we find that the secret key rate becomes positive when at least N ≈ 10^5 signals are exchanged and processed. For any other discrete-variable protocol, unconditional security can be obtained using the exponential de Finetti theorem, but the additional overhead leads to very pessimistic estimates.

  8. Dead time corrections for inbeam γ-spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Boromiza, M.; Borcea, C.; Negret, A.; Olacel, A.; Suliman, G.

    2017-08-01

    Relatively high counting rates were registered in a proton inelastic scattering experiment on 16O and 28Si using HPGe detectors which was performed at the Tandem facility of IFIN-HH, Bucharest. In consequence, dead time corrections were needed in order to determine the absolute γ-production cross sections. Considering that the real counting rate follows a Poisson distribution, the dead time correction procedure is reformulated in statistical terms. The arriving time interval between the incoming events (Δt) obeys an exponential distribution with a single parameter - the average of the associated Poisson distribution. We use this mathematical connection to calculate and implement the dead time corrections for the counting rates of the mentioned experiment. Also, exploiting an idea introduced by Pommé et al., we describe a consistent method for calculating the dead time correction which completely eludes the complicated problem of measuring the dead time of a given detection system. Several comparisons are made between the corrections implemented through this method and by using standard (phenomenological) dead time models and we show how these results were used for correcting our experimental cross sections.
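
    For context (this is the classical textbook relation, not the paper's own statistical procedure or the Pommé-style method it cites): with exponentially distributed arrival intervals from a Poisson process of true rate n, and a dead time τ_d per recorded event, the measured rate m obeys

      \text{non-paralyzable:}\quad m = \frac{n}{1 + n\,\tau_{d}}
      \;\;\Longleftrightarrow\;\;
      n = \frac{m}{1 - m\,\tau_{d}},
      \qquad\qquad
      \text{paralyzable:}\quad m = n\,e^{-n\,\tau_{d}}.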

  9. Determination of the functioning parameters in asymmetrical flow field-flow fractionation with an exponential channel.

    PubMed

    Déjardin, P

    2013-08-30

    The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Resource acquisition, distribution and end-use efficiencies and the growth of industrial society

    NASA Astrophysics Data System (ADS)

    Jarvis, A.; Jarvis, S.; Hewitt, N.

    2015-01-01

    A key feature of the growth of industrial society is the acquisition of increasing quantities of resources from the environment and their distribution for end use. With respect to energy, growth has been near exponential for the last 160 years. We attempt to show that the global distribution of resources that underpins this growth may be facilitated by the continual development and expansion of near optimal directed networks. If so, the distribution efficiencies of these networks must decline as they expand due to path lengths becoming longer and more tortuous. To maintain long-term exponential growth the physical limits placed on the distribution networks appear to be counteracted by innovations deployed elsewhere in the system: namely at the points of acquisition and end use. We postulate that the maintenance of growth at the specific rate of ~2.4% yr-1 stems from an implicit desire to optimise patterns of energy use over human working lifetimes.

  11. Plume characteristics of MPD thrusters: A preliminary examination

    NASA Technical Reports Server (NTRS)

    Myers, Roger M.

    1989-01-01

    A diagnostics facility for MPD thruster plume measurements was built and is currently undergoing testing. The facility includes electrostatic probes for electron temperature and density measurements, Hall probes for magnetic field and current distribution mapping, and an imaging system to establish the global distribution of plasma species. Preliminary results for MPD thrusters operated at power levels between 30 and 60 kW with solenoidal applied magnetic fields show that the electron density decreases exponentially from 1x10(2) to 2x10(18)/cu m over the first 30 cm of the expansion, while the electron temperature distribution is relatively uniform, decreasing from approximately 2.5 eV to 1.5 eV over the same distance. The radiant intensity of the ArII 4879 A line emission also decays exponentially. Current distribution measurements indicate that a significant fraction of the discharge current is blown into the plume region, and that its distribution depends on the magnitudes of both the discharge current and the applied magnetic field.

  12. Optical absorption, TL and IRSL of basic plagioclase megacrysts from the pinacate (Sonora, Mexico) quaternary alkalic volcanics.

    PubMed

    Chernov, V; Paz-Moreno, F; Piters, T M; Barboza-Flores, M

    2006-01-01

    The paper presents the first results of an investigation of the optical absorption (OA) and the thermally and infrared stimulated luminescence (TL and IRSL) of the Pinacate plagioclase (labradorite). The OA spectra reveal two bands with maxima at 1.0 and 3.2 eV, connected with absorption by Fe3+ and Fe2+, and IR absorption at wavelengths longer than 2700 nm. The ultraviolet absorption varies exponentially with the photon energy, following the 'vitreous' empirical Urbach rule and indicating an exponential distribution of localised states in the forbidden band. The natural TL is peaked at 700 K. Laboratory beta irradiation creates a very broad TL peak with a maximum at 430 K. The change of the 430 K TL peak shape under the thermal cleaning procedure and dark storage after irradiation reveals a monotonic increase of the activation energy that can be explained by the exponential distribution of traps. The IRSL response is weak and exhibits a typical decay behaviour.
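
    The empirical Urbach rule invoked above writes the absorption edge as an exponential in photon energy, with the Urbach energy E_U measuring the width of the exponential tail of localised states (the symbols here follow the common convention, not necessarily the paper's notation):

      \alpha(E) = \alpha_{0}\,\exp\!\left(\frac{E - E_{0}}{E_{U}}\right), \qquad E < E_{0}.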

  13. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
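
    As a hedged illustration of the mixture-of-exponentials model class discussed above (not the urn-based generative model of the paper), the following Python sketch fits a two-component exponential mixture to synthetic survival times with a plain EM iteration; all data and starting values are hypothetical.

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic "survival times" drawn from a two-component exponential mixture.
      n = 5000
      z = rng.random(n) < 0.3
      x = np.where(z, rng.exponential(2.0, n), rng.exponential(20.0, n))

      # EM for p * Exp(mu1) + (1 - p) * Exp(mu2), parameterized by the means mu1, mu2.
      p, mu1, mu2 = 0.5, np.quantile(x, 0.25), np.quantile(x, 0.75)
      for _ in range(200):
          f1 = p * np.exp(-x / mu1) / mu1
          f2 = (1 - p) * np.exp(-x / mu2) / mu2
          r = f1 / (f1 + f2)             # E-step: responsibility of component 1
          p = r.mean()                   # M-step: update weight and component means
          mu1 = np.sum(r * x) / np.sum(r)
          mu2 = np.sum((1 - r) * x) / np.sum(1 - r)

      print(f"p = {p:.2f}, mu1 = {mu1:.1f}, mu2 = {mu2:.1f}")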

  14. Determination of bulk and interface density of states in metal oxide semiconductor thin-film transistors by using capacitance-voltage characteristics

    NASA Astrophysics Data System (ADS)

    Wei, Xixiong; Deng, Wanling; Fang, Jielin; Ma, Xiaoyu; Huang, Junkai

    2017-10-01

    A straightforward physics-based extraction technique for the interface and bulk density of states in metal oxide semiconductor thin-film transistors (TFTs) is proposed, based on the capacitance-voltage (C-V) characteristics. The interface trap density distribution with energy has been extracted from the analysis of the capacitance-voltage characteristics. Using the obtained interface state distribution, the bulk trap density has been determined. With this method it is found that, for the interface trap density, the deep-state density near the mid-gap is approximately constant while the tail-state density increases exponentially with energy; the bulk trap density is a superposition of exponential deep states and exponential tail states. The validity of the extraction is verified by comparisons with the measured current-voltage (I-V) characteristics and with simulation results from a technology computer-aided design (TCAD) model. This extraction method does not rely on numerical iteration and is simple, fast and accurate. Therefore, it is very useful for TFT device characterization.

  15. Distinguishing response conflict and task conflict in the Stroop task: evidence from ex-Gaussian distribution analysis.

    PubMed

    Steinhauser, Marco; Hübner, Ronald

    2009-10-01

    It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were conducted in which manual versions of a standard Stroop task (Experiment 1) and a separated Stroop task (Experiment 2) were performed under task-switching conditions. Effects of response congruency and stimulus bivalency were used to measure response conflict and task conflict, respectively. Ex-Gaussian analysis revealed that response conflict was mainly observed in the Gaussian component, whereas task conflict was stronger in the exponential component. Moreover, task conflict in the exponential component was selectively enhanced under task-switching conditions. The results suggest that ex-Gaussian analysis can be used as a tool to isolate different conflict types in the Stroop task. PsycINFO Database Record (c) 2009 APA, all rights reserved.

  16. A Single-Level Tunnel Model to Account for Electrical Transport through Single Molecule- and Self-Assembled Monolayer-based Junctions

    PubMed Central

    Garrigues, Alvar R.; Yuan, Li; Wang, Lejia; Mucciolo, Eduardo R.; Thompson, Damien; del Barco, Enrique; Nijhuis, Christian A.

    2016-01-01

    We present a theoretical analysis aimed at understanding electrical conduction in molecular tunnel junctions. We focus on discussing the validity of coherent versus incoherent theoretical formulations for single-level tunneling to explain experimental results obtained under a wide range of experimental conditions, including measurements in individual molecules connecting the leads of electromigrated single-electron transistors and junctions of self-assembled monolayers (SAM) of molecules sandwiched between two macroscopic contacts. We show that the restriction of transport through a single level in solid state junctions (no solvent) makes coherent and incoherent tunneling formalisms indistinguishable when only one level participates in transport. Similar to Marcus relaxation processes in wet electrochemistry, the thermal broadening of the Fermi distribution describing the electronic occupation energies in the electrodes accounts for the exponential dependence of the tunneling current on temperature. We demonstrate that a single-level tunnel model satisfactorily explains experimental results obtained in three different molecular junctions (both single-molecule and SAM-based) formed by ferrocene-based molecules. Among other things, we use the model to map the electrostatic potential profile in EGaIn-based SAM junctions in which the ferrocene unit is placed at different positions within the molecule, and we find that electrical screening gives rise to a strongly non-linear profile across the junction. PMID:27216489

  17. Exploiting the Adaptation Dynamics to Predict the Distribution of Beneficial Fitness Effects

    PubMed Central

    2016-01-01

    Adaptation of asexual populations is driven by beneficial mutations and therefore the dynamics of this process, besides other factors, depends on the distribution of beneficial fitness effects. It is known that on uncorrelated fitness landscapes, this distribution can only be of three types: truncated, exponential and power law. We performed extensive stochastic simulations to study the adaptation dynamics on rugged fitness landscapes, and identified two quantities that can be used to distinguish the underlying distribution of beneficial fitness effects. The first quantity studied here is the fitness difference between successive mutations that spread in the population, which is found to decrease in the case of truncated distributions, remains nearly a constant for exponentially decaying distributions and increases when the fitness distribution decays as a power law. The second quantity of interest, namely, the rate of change of fitness with time also shows quantitatively different behaviour for different beneficial fitness distributions. The patterns displayed by the two aforementioned quantities are found to hold good for both low and high mutation rates. We discuss how these patterns can be exploited to determine the distribution of beneficial fitness effects in microbial experiments. PMID:26990188

  18. Seamount statistics in the Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Smith, Deborah K.; Jordan, Thomas H.

    1988-04-01

    We apply the wide-beam sampling technique of Jordan et al. (1983) to approximately 157,000 km of wide-beam profiles to obtain seamount population statistics for eight regions in the eastern and southern Pacific Ocean. Population statistics derived from wide-beam echograms are compared with seamount counts from Sea Beam swaths and with counts from bathymetric maps. We find that the average number of seamounts with summit heights h ≥ H is well-approximated by the exponential frequency-size distribution ν(H) = ν₀ e^(-βH). The exponential model for seamount sizes, characterized by the single scale parameter β⁻¹, is found to be superior to a power-law (self-similar) model, which has no intrinsic scale, in describing the average distribution of Pacific seamounts, and it appears to be valid over a size spectrum spanning 5 orders of magnitude in abundance. Large-scale regional variations in seamount populations are documented. We observe significant differences in seamount densities across the Murray fracture zone in the North Pacific and the Eltanin fracture zone system in the South Pacific. The Eltanin discontinuity is equally evident on both sides of the Pacific-Antarctic ridge. In the South Pacific, regions symmetrically disposed about the ridge axis have very similar seamount densities, despite the large difference between Pacific plate and Antarctic plate absolute velocities; evidently, any differences in the shear flows at the base of the Pacific and Antarctic plates do not affect seamount emplacement. Systematic variations in ν₀ and β are observed as a function of lithospheric age, with the number of large seamounts increasing more rapidly than small seamounts. These observations have been used to develop a simple model for seamount production under the assumptions that (1) an exponential size-frequency distribution is maintained, (2) production is steady state, and (3) most small seamounts are formed on or near the ridge axis. The limited data available from this study appear to be consistent with the model, but they are insufficient to provide a rigorous test of the assumptions or determine accurately the model parameters. However, the data from the South Pacific indicate that the off-axis production of large seamounts probably accounts for the majority of seamounts with summit heights greater than 1000 m.
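
    A minimal sketch of estimating the exponential size-frequency parameters from a catalogue of summit heights above a completeness cutoff H_min: for ν(H) = ν₀ e^(-βH), the excess heights are exponentially distributed, so β is the reciprocal of the mean excess height and ν₀ follows from the observed areal density. The heights, cutoff and survey area below are synthetic.

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic seamount summit heights (m) above a completeness cutoff H_min,
      # drawn from nu(H) = nu0 * exp(-beta * H) with beta = 1/300 per metre.
      H_min = 400.0
      heights = H_min + rng.exponential(300.0, 800)
      area_surveyed = 1.0e6                       # hypothetical survey area, km^2

      # Maximum-likelihood estimate of beta and the corresponding nu0 (per unit area).
      beta_hat = 1.0 / np.mean(heights - H_min)
      nu0_hat = (heights.size / area_surveyed) * np.exp(beta_hat * H_min)

      print(f"1/beta = {1 / beta_hat:.0f} m, nu0 = {nu0_hat:.2e} seamounts per km^2")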

  19. A non-Gaussian option pricing model based on Kaniadakis exponential deformation

    NASA Astrophysics Data System (ADS)

    Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara

    2017-09-01

    A way to make financial models effective is by letting them to represent the so called "fat tails", i.e., extreme changes in stock prices that are regarded as almost impossible by the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes capable of capturing such real market phenomena.
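
    For reference, the Kaniadakis deformation of the exponential referred to above is usually written as below (standard in the κ-statistics literature); exp_κ(-x) decays as a power law ~ x^(-1/κ) for large x, which is what allows fat tails, and the ordinary exponential is recovered as κ → 0.

      \exp_{\kappa}(x) = \left(\sqrt{1+\kappa^{2}x^{2}} + \kappa x\right)^{1/\kappa},
      \qquad
      \lim_{\kappa\to 0}\exp_{\kappa}(x) = e^{x}.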

  20. Motion of the two-control airplane in rectilinear flight after initial disturbances with introduction of controls following an exponential law

    NASA Technical Reports Server (NTRS)

    Klemin, Alexander

    1937-01-01

    An airplane in steady rectilinear flight was assumed to experience an initial disturbance in rolling or yawing velocity. The equations of motion were solved to see if it was possible to hasten recovery of a stable airplane or to secure recovery of an unstable airplane by the application of a single lateral control following an exponential law. The sample computations indicate that, for initial disturbances complex in character, it would be difficult to secure correlation with any type of exponential control. The possibility is visualized that the two-control operation may seriously impair the ability to hasten recovery or counteract instability.

  1. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  2. Mechanical slowing-down of cytoplasmic diffusion allows in vivo counting of proteins in individual cells

    NASA Astrophysics Data System (ADS)

    Okumus, Burak; Landgraf, Dirk; Lai, Ghee Chuan; Bakhsi, Somenath; Arias-Castro, Juan Carlos; Yildiz, Sadik; Huh, Dann; Fernandez-Lopez, Raul; Peterson, Celeste N.; Toprak, Erdal; El Karoui, Meriem; Paulsson, Johan

    2016-05-01

    Many key regulatory proteins in bacteria are present in too low numbers to be detected with conventional methods, which poses a particular challenge for single-cell analyses because such proteins can contribute greatly to phenotypic heterogeneity. Here we develop a microfluidics-based platform that enables single-molecule counting of low-abundance proteins by mechanically slowing-down their diffusion within the cytoplasm of live Escherichia coli (E. coli) cells. Our technique also allows for automated microscopy at high throughput with minimal perturbation to native physiology, as well as viable enrichment/retrieval. We illustrate the method by analysing the control of the master regulator of the E. coli stress response, RpoS, by its adapter protein, SprE (RssB). Quantification of SprE numbers shows that though SprE is necessary for RpoS degradation, it is expressed at levels as low as 3-4 molecules per average cell cycle, and fluctuations in SprE are approximately Poisson distributed during exponential phase with no sign of bursting.

  3. Complex Degradation Processes Lead to Non-Exponential Decay Patterns and Age-Dependent Decay Rates of Messenger RNA

    PubMed Central

    Deneke, Carlus; Lipowsky, Reinhard; Valleriani, Angelo

    2013-01-01

    Experimental studies on mRNA stability have established several, qualitatively distinct decay patterns for the amount of mRNA within the living cell. Furthermore, a variety of different and complex biochemical pathways for mRNA degradation have been identified. The central aim of this paper is to bring together both the experimental evidence about the decay patterns and the biochemical knowledge about the multi-step nature of mRNA degradation in a coherent mathematical theory. We first introduce a mathematical relationship between the mRNA decay pattern and the lifetime distribution of individual mRNA molecules. This relationship reveals that the mRNA decay patterns at steady state expression level must obey a general convexity condition, which applies to any degradation mechanism. Next, we develop a theory, formulated as a Markov chain model, that recapitulates some aspects of the multi-step nature of mRNA degradation. We apply our theory to experimental data for yeast and explicitly derive the lifetime distribution of the corresponding mRNAs. Thereby, we show how to extract single-molecule properties of an mRNA, such as the age-dependent decay rate and the residual lifetime. Finally, we analyze the decay patterns of the whole translatome of yeast cells and show that yeast mRNAs can be grouped into three broad classes that exhibit three distinct decay patterns. This paper provides both a method to accurately analyze non-exponential mRNA decay patterns and a tool to validate different models of degradation using decay data. PMID:23408982
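
    A hedged Python sketch of the qualitative point above: if degradation proceeds through several sequential exponential steps (here an Erlang lifetime, a deliberately simplified stand-in for the paper's Markov-chain models), the decay of a steady-state population after transcription shut-off is visibly non-exponential, with an apparent decay rate that grows with mRNA age. All rates and step numbers are hypothetical.

      import numpy as np

      rng = np.random.default_rng(7)

      # Lifetimes for a k-step degradation pathway: the sum of k exponential stages
      # (an Erlang distribution), a simple stand-in for a multi-step Markov chain.
      k, rate, n = 4, 1.0, 200_000
      lifetime = rng.gamma(shape=k, scale=1.0 / rate, size=n)

      # Emulate steady-state expression: molecules are born uniformly before the
      # shut-off time t0, and only those still intact at t0 contribute to the decay.
      t0 = 50.0
      birth = rng.uniform(0.0, t0, n)
      remaining = birth + lifetime - t0
      remaining = remaining[remaining > 0.0]

      # Population survival after shut-off versus a single exponential with the same mean.
      t = np.linspace(0.0, 10.0, 11)
      surv = np.array([(remaining > ti).mean() for ti in t])
      single = np.exp(-t / remaining.mean())
      for ti, s, e in zip(t, surv, single):
          print(f"t = {ti:4.1f}   multi-step = {s:.3f}   single exponential = {e:.3f}")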

  4. Local spin dynamics at low temperature in the slowly relaxing molecular chain [Dy(hfac)3(NIT(C6H4OPh))]: A μ{sup +} spin relaxation study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arosio, Paolo, E-mail: paolo.arosio@guest.unimi.it; Orsini, Francesco; Corti, Maurizio

    2015-05-07

    The spin dynamics of the molecular magnetic chain [Dy(hfac)3(NIT(C6H4OPh))] were investigated by means of the Muon Spin Relaxation (μ+SR) technique. This system consists of a magnetic lattice of alternating Dy(III) ions and radical spins, and exhibits single-chain-magnet behavior. The magnetic properties of [Dy(hfac)3(NIT(C6H4OPh))] have been studied by measuring the magnetization vs. temperature at different applied magnetic fields (H = 5, 3500, and 16500 Oe) and by performing μ+SR experiments vs. temperature in zero field and in a longitudinal applied magnetic field H = 3500 Oe. The muon asymmetry P(t) was fitted by the sum of three components, two stretched-exponential decays with fast and intermediate relaxation times, and a third slow exponential decay. The temperature dependence of the spin dynamics has been determined by analyzing the muon longitudinal relaxation rate λ_interm(T), associated with the intermediate relaxing component. The experimental λ_interm(T) data were fitted with a corrected phenomenological Bloembergen-Purcell-Pound law by using a distribution of thermally activated correlation times, which average to τ = τ_0 exp(Δ/k_B T), corresponding to a distribution of energy barriers Δ. The correlation times can be associated with the spin freezing that occurs when the system condenses in the ground state.

  5. The modifier effects of chymotrypsin and trypsin enzymes on fluorescence lifetime distribution of "N-(1-pyrenyl)maleimide-bovine serum albumin" complex.

    PubMed

    Özyiğit, İbrahim Ethem; Karakuş, Emine; Pekcan, Önder

    2016-02-05

    Chymotrypsin and trypsin are well-known proteolytic enzymes, both of which are synthesized in the pancreas as their inactive precursors, chymotrypsinogen and trypsinogen, and then released into the duodenum to cut proteins into smaller peptides. In this paper, the effects of the activities of the chymotrypsin and trypsin enzymes on the fluorescence lifetime distributions of the substrate bovine serum albumin (BSA) modified with N-(1-pyrenyl)maleimide (PM) were examined. In the labeling of BSA with PM, the aim was to attach PM to the single free thiol (Cys34) and to all accessible free amine groups in order to produce the highest possible amount of pyrene excimers and thereby form lifetime distributions over the widest range, which may show specific, distinguishing changes resulting from the activities of the proteases. A time-resolved spectrofluorometer was used to monitor fluorescence decays, which were analyzed by using the exponential series method (ESM) to obtain the changes of the lifetime distributions. After exposure of the synthesized substrate PM-BSA to the enzymes, the fluorescence lifetime distributions exhibited different structures, which were attributed to the different activities of the proteases. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    NASA Astrophysics Data System (ADS)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

    The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), a member of the Malvaceae family, of Mexican origin. The TL emission properties of the powdered polymineral fraction were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves have been analysed accurately using computerized glow curve deconvolution (CGCD) assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous, exponential distribution of traps is reported, as is the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a temperature-independent frequency factor s and for s as a function of temperature.

  7. Empirical analysis of individual popularity and activity on an online music service system

    NASA Astrophysics Data System (ADS)

    Hu, Hai-Bo; Han, Ding-Yi

    2008-10-01

    Quantitative understanding of human behaviors supplies basic comprehension of the dynamics of many socio-economic systems. Based on the log data of an online music service system, we investigate the statistical characteristics of individual activity and popularity, and find that the distributions of both of them follow a stretched exponential form which interpolates between exponential and power law distribution. We also study the human dynamics on the online system and find that the distribution of interevent time between two consecutive listenings of music shows the fat tail feature. Besides, with the reduction of user activity the fat tail becomes more and more irregular, indicating different behavior patterns for users with diverse activities. The research results may shed some light on the in-depth understanding of collective behaviors in socio-economic systems.
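    For readers who want to reproduce this kind of analysis, a minimal sketch of fitting a stretched-exponential survival function is given below; the "activity" sample is synthetic, since the service-log data are not available here.

```python
# Fit P(X > x) = exp(-(x / x0)**c), 0 < c <= 1, to a synthetic activity sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x0_true, c_true = 50.0, 0.6
activity = x0_true * rng.weibull(c_true, size=5000)   # Weibull <=> stretched-exponential survival

# Empirical survival function on the sorted sample.
x = np.sort(activity)
surv = 1.0 - np.arange(1, x.size + 1) / (x.size + 1.0)

# Linearised fit: log(-log S(x)) = c*log(x) - c*log(x0).
slope, intercept, *_ = stats.linregress(np.log(x), np.log(-np.log(surv)))
c_hat, x0_hat = slope, np.exp(-intercept / slope)
print(f"c = {c_hat:.3f} (true {c_true}), x0 = {x0_hat:.1f} (true {x0_true})")
```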

  8. All-Optical Photoacoustic Sensors for Steel Rebar Corrosion Monitoring.

    PubMed

    Du, Cong; Owusu Twumasi, Jones; Tang, Qixiang; Guo, Xu; Zhou, Jingcheng; Yu, Tzuyang; Wang, Xingwei

    2018-04-27

    This article presents an application of an active all-optical photoacoustic sensing system with four elements for steel rebar corrosion monitoring. The sensor utilized the photoacoustic mechanism of gold nanocomposites to generate 8 MHz broadband ultrasound pulses within a compact 0.4 mm space. A nanosecond 532 nm pulsed laser and a 400 μm multimode fiber were employed to excite the ultrasound generation. Fiber Bragg gratings were used as distributed ultrasound detectors. Accelerated corrosion testing was applied to four sections of a single steel rebar to produce four different corrosion degrees. Our results demonstrated that the mass loss of the steel rebar grew exponentially with the ultrasound frequency shift. The sensitivity of the sensing system was such that a 0.175 MHz central frequency reduction corresponded to 0.02 g mass loss of steel rebar corrosion. The results showed that the all-optical photoacoustic sensing system can actively evaluate the corrosion of steel rebar via its ultrasound spectrum. This multipoint all-optical photoacoustic method is promising for embedment into a concrete structure for distributed corrosion monitoring.

  9. Work statistics of charged noninteracting fermions in slowly changing magnetic fields.

    PubMed

    Yi, Juyeon; Talkner, Peter

    2011-04-01

    We consider N fermionic particles in a harmonic trap initially prepared in a thermal equilibrium state at temperature β^{-1} and examine the probability density function (pdf) of the work done by a magnetic field slowly varying in time. The behavior of the pdf crucially depends on the number of particles N but also on the temperature. At high temperatures (β≪1) the pdf is given by an asymmetric Laplace distribution for a single particle, and for many particles it approaches a Gaussian distribution with variance proportional to N/β^{2}. At low temperatures the pdf becomes strongly peaked at the center with a variance that still linearly increases with N but exponentially decreases with the temperature. We point out the consequences of these findings for the experimental confirmation of the Jarzynski equality such as the low probability issue at high temperatures and its solution at low temperatures, together with a discussion of the crossover behavior between the two temperature regimes.

  10. Work statistics of charged noninteracting fermions in slowly changing magnetic fields

    NASA Astrophysics Data System (ADS)

    Yi, Juyeon; Talkner, Peter

    2011-04-01

    We consider N fermionic particles in a harmonic trap initially prepared in a thermal equilibrium state at temperature β^{-1} and examine the probability density function (pdf) of the work done by a magnetic field slowly varying in time. The behavior of the pdf crucially depends on the number of particles N but also on the temperature. At high temperatures (β≪1) the pdf is given by an asymmetric Laplace distribution for a single particle, and for many particles it approaches a Gaussian distribution with variance proportional to N/β^{2}. At low temperatures the pdf becomes strongly peaked at the center with a variance that still linearly increases with N but exponentially decreases with the temperature. We point out the consequences of these findings for the experimental confirmation of the Jarzynski equality such as the low probability issue at high temperatures and its solution at low temperatures, together with a discussion of the crossover behavior between the two temperature regimes.

  11. The SiH + (A 1Π-X 1Sigma + ) emission produced from the thermal energy reaction of He + with SiH4 under single collision conditions

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Sumio; Tsuji, Masaharu; Obase, Hiroshi; Sekiya, Hiroshi; Nishimura, Yukio

    1987-05-01

    A flowing afterglow reactor has been coupled to a low-pressure chamber for an optical spectroscopic study of the charge-transfer reaction of He+ with SiH4 at thermal energy. The SiH+(A 1Π-X 1Σ+) emission was observed in the 380-610 nm region. The nascent vibrational and rotational distributions of SiH+(A) have been determined. The vibrational distribution for 0≤v'≤3 was approximately exponential with an effective vibrational temperature of 820±60 K. The rotational temperature decreased from 600 K for v'=0 to 300 K for v'=3. These data indicated that only about 3% of the excess energy is released as internal energy of SiH+(A). From the emission rate constant, SiH+(A) represents about 25% of the total SiH+ ion in the He++SiH4 reaction.

  12. A partial exponential lumped parameter model to evaluate groundwater age distributions and nitrate trends in long-screened wells

    USGS Publications Warehouse

    Jurgens, Bryant; Böhlke, John Karl; Kauffman, Leon J.; Belitz, Kenneth; Esser, Bradley K.

    2016-01-01

    A partial exponential lumped parameter model (PEM) was derived to determine age distributions and nitrate trends in long-screened production wells. The PEM can simulate age distributions for wells screened over any finite interval of an aquifer that has an exponential distribution of age with depth. The PEM has 3 parameters – the ratio of saturated thickness to the top and bottom of the screen and mean age, but these can be reduced to 1 parameter (mean age) by using well construction information and estimates of the saturated thickness. The PEM was tested with data from 30 production wells in a heterogeneous alluvial fan aquifer in California, USA. Well construction data were used to guide parameterization of a PEM for each well and mean age was calibrated to measured environmental tracer data (3H, 3He, CFC-113, and 14C). Results were compared to age distributions generated for individual wells using advective particle tracking models (PTMs). Age distributions from PTMs were more complex than PEM distributions, but PEMs provided better fits to tracer data, partly because the PTMs did not simulate 14C accurately in wells that captured varying amounts of old groundwater recharged at lower rates prior to groundwater development and irrigation. Nitrate trends were simulated independently of the calibration process and the PEM provided good fits for at least 11 of 24 wells. This work shows that the PEM, and lumped parameter models (LPMs) in general, can often identify critical features of the age distributions in wells that are needed to explain observed tracer data and nonpoint source contaminant trends, even in systems where aquifer heterogeneity and water-use complicate distributions of age. While accurate PTMs are preferable for understanding and predicting aquifer-scale responses to water use and contaminant transport, LPMs can be sensitive to local conditions near individual wells that may be inaccurately represented or missing in an aquifer-scale flow model.
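    The sketch below illustrates the convolution that lumped parameter models of this kind are built on, using the limiting case in which the screen spans the full saturated thickness (where the PEM effectively reduces to the classical exponential model). The tracer input history and mean age are hypothetical, not the calibrated values for the California wells.

```python
# Exponential-model convolution sketch (illustrative values only).
import numpy as np

tau = np.arange(0.0, 200.0, 0.5)             # groundwater age (years)
mean_age = 35.0                               # hypothetical calibrated mean age (years)
g = np.exp(-tau / mean_age) / mean_age        # exponential age distribution

# Hypothetical nitrate input at the water table, ramping up with land-use change.
years = np.arange(1940, 2016)
c_in = np.clip((years - 1955) * 0.5, 0.0, 20.0)      # mg/L, illustrative only

def well_concentration(sample_year):
    """C_well(t) = sum over recharge years of g(age) * C_in(recharge year)."""
    ages_back = sample_year - years
    mask = ages_back >= 0
    weights = np.interp(ages_back[mask], tau, g)      # g(tau) at those ages
    weights /= weights.sum()                          # discrete normalisation
    return float(np.sum(weights * c_in[mask]))

for yr in (1980, 2000, 2015):
    print(yr, round(well_concentration(yr), 2), "mg/L")
```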

  13. Improving Bed Management at Wright-Patterson Medical Center

    DTIC Science & Technology

    1989-09-01

    arrival distributions are Poisson, as in Sim2, then interarrival times are distributed exponentially (Budnick, McLeavey, and Mojena, 1988:770). While... McLeavey, D. and Mojena, R., Principles of Operations Research for Management (second edition). Homewood, IL: Irwin, 1988. Cannoodt, L. J. and
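    A quick numerical check of the relationship cited in this fragment (Poisson arrival counts imply exponentially distributed interarrival times); the rate and time horizon are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, horizon = 4.0, 10_000.0                  # arrivals per hour, hours simulated

n = rng.poisson(lam * horizon)                # total number of arrivals
arrival_times = np.sort(rng.uniform(0.0, horizon, size=n))   # Poisson process given N
gaps = np.diff(arrival_times)

print("mean interarrival time:", round(gaps.mean(), 4), "(theory", round(1 / lam, 4), ")")
print("std of interarrivals  :", round(gaps.std(), 4), "(exponential => std equals mean)")
```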

  14. The dynamics of charge transfer with and without a barrier: A very simplified model of cyclic voltammetry.

    PubMed

    Ouyang, Wenjun; Subotnik, Joseph E

    2017-05-07

    Using the Anderson-Holstein model, we investigate charge transfer dynamics between a molecule and a metal surface for two extreme cases. (i) With a large barrier, we show that the dynamics follow a single exponential decay as expected; (ii) without any barrier, we show that the dynamics are more complicated. On the one hand, if the metal-molecule coupling is small, single exponential dynamics persist. On the other hand, when the coupling between the metal and the molecule is large, the dynamics follow a biexponential decay. We analyze the dynamics using the Smoluchowski equation, develop a simple model, and explore the consequences of biexponential dynamics for a hypothetical cyclic voltammetry experiment.
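    A minimal sketch of the kind of analysis implied here: fit single-exponential and biexponential decays to a synthetic population trace and compare residuals. The synthetic data stand in for the Anderson-Holstein dynamics, which are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(3)
truth = 0.6 * np.exp(-2.0 * t) + 0.4 * np.exp(-0.3 * t)      # biexponential ground truth
data = truth + rng.normal(0.0, 0.005, t.size)

def single(t, k):
    return np.exp(-k * t)

def double(t, a, k1, k2):
    return a * np.exp(-k1 * t) + (1.0 - a) * np.exp(-k2 * t)

p1, _ = curve_fit(single, t, data, p0=[1.0])
p2, _ = curve_fit(double, t, data, p0=[0.5, 1.0, 0.1], maxfev=10_000)

sse1 = np.sum((data - single(t, *p1)) ** 2)
sse2 = np.sum((data - double(t, *p2)) ** 2)
print(f"single-exponential SSE = {sse1:.4f}, biexponential SSE = {sse2:.4f}")
print("biexponential parameters:", np.round(p2, 3))
```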

  15. Compact continuous-variable entanglement distillation.

    PubMed

    Datta, Animesh; Zhang, Lijian; Nunn, Joshua; Langford, Nathan K; Feito, Alvaro; Plenio, Martin B; Walmsley, Ian A

    2012-02-10

    We introduce a new scheme for continuous-variable entanglement distillation that requires only linear temporal and constant physical or spatial resources. Distillation is the process by which high-quality entanglement may be distributed between distant nodes of a network in the unavoidable presence of decoherence. The known versions of this protocol scale exponentially in space and doubly exponentially in time. Our optimal scheme therefore provides exponential improvements over existing protocols. It uses a fixed-resource module, an entanglement distillery, comprising only four quantum memories of at most 50% storage efficiency and allowing a feasible experimental implementation. Tangible quantum advantages are obtainable by using existing off-resonant Raman quantum memories outside their conventional role of storage.

  16. The study of autism as a distributed disorder

    PubMed Central

    Müller, Ralph-Axel

    2010-01-01

    Past autism research has often been dedicated to tracing the causes of the disorder to a localized neurological abnormality, a single functional network, or a single cognitive-behavioral domain. In this review, I argue that autism is a ‘distributed disorder’ on various levels of study (genetic, neuroanatomical, neurofunctional, behavioral). ‘Localizing’ models are therefore not promising. The large array of potential genetic risk factors suggests that multiple (or all) emerging functional brain networks are affected during early development. This is supported by widespread growth abnormalities throughout the brain. Interactions during development between affected functional networks and atypical experiential effects (associated with atypical behavior) in children with autism further complicate the neurological bases of the disorder, resulting in an ‘exponentially distributed’ profile. Promising approaches to a better characterization of neural endophenotypes in autism are provided by techniques investigating white matter and connectivity, such as MR spectroscopy, diffusion tensor imaging (DTI), and functional connectivity MRI. According to a recent hypothesis, the autistic brain is generally characterized by ‘underconnectivity’. However, not all findings are consistent with this view. The concepts and methodology of functional connectivity need to be refined and results need to be corroborated by anatomical studies (such as DTI tractography) before definitive conclusions can be drawn. PMID:17326118

  17. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223

  18. Bayesian Travel Time Inversion adopting Gaussian Process Regression

    NASA Astrophysics Data System (ADS)

    Mauerberger, S.; Holschneider, M.

    2017-12-01

    A major application in seismology is the determination of seismic velocity models. Travel time measurements put an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view. To that end, the concept of Gaussian process regression is adopted to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion. A heuristic covariance describes correlations amongst observations and the a priori model. This approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost. No multi-dimensional numerical integration nor excessive sampling is necessary. Instead of stacking the data, we suggest progressively building the posterior distribution. Incorporating only a single piece of evidence at a time accounts for the deficit of linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a purely synthetic 1d model is addressed, with a single source accompanied by multiple receivers considered on top of a model comprising a discontinuity. We consider travel times of both phases - direct and reflected wave - corrupted by noise. Left and right of the interface are assumed independent, and the squared exponential kernel serves as the covariance.
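    A minimal 1-D sketch of the linear-Gaussian update underlying this kind of correlation-based travel time inversion, with a squared exponential prior covariance on slowness; the grid, kernel parameters, and receiver positions are illustrative, not taken from the paper.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 101)               # 1-D offset grid (km)
dx = x[1] - x[0]

# Prior: slowness = constant mean + zero-mean GP with squared-exponential covariance.
prior_mean = np.full(x.size, 0.25)            # s/km (a 4 km/s background)
sigma, ell = 0.03, 1.5                        # prior std (s/km), correlation length (km)
K = sigma**2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)

# Synthetic truth and a linear forward operator: each travel time is the path integral
# of slowness out to a receiver, discretised as one row of A.
rng = np.random.default_rng(4)
true_s = prior_mean + np.linalg.cholesky(K + 1e-10 * np.eye(x.size)) @ rng.normal(size=x.size)
receivers = [20, 40, 60, 80, 100]             # grid indices of the receivers
A = np.zeros((len(receivers), x.size))
for i, r in enumerate(receivers):
    A[i, :r + 1] = dx

noise = 0.02                                  # travel-time noise (s)
d = A @ true_s + rng.normal(0.0, noise, len(receivers))

# Standard linear-Gaussian (GP regression) update.
S = A @ K @ A.T + noise**2 * np.eye(len(receivers))
gain = K @ A.T @ np.linalg.inv(S)
post_mean = prior_mean + gain @ (d - A @ prior_mean)
post_cov = K - gain @ A @ K

print("rms prior misfit    :", np.sqrt(np.mean((prior_mean - true_s)**2)))
print("rms posterior misfit:", np.sqrt(np.mean((post_mean - true_s)**2)))
print("mean posterior std  :", np.sqrt(np.diag(post_cov)).mean())
```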

  19. Glacial refugia and recolonization pathways in the brown seaweed Fucus serratus.

    PubMed

    Hoarau, G; Coyer, J A; Veldsink, J H; Stam, W T; Olsen, J L

    2007-09-01

    The last glacial maximum (20,000-18,000 years ago) dramatically affected extant distributions of virtually all northern European biota. Locations of refugia and postglacial recolonization pathways were examined in Fucus serratus (Heterokontophyta; Fucaceae) using a highly variable intergenic spacer developed from the complete mitochondrial genome of Fucus vesiculosus. Over 1,500 samples from the entire range of F. serratus were analysed using fluorescent single strand conformation polymorphism. A total of 28 mtDNA haplotypes was identified and sequenced. Three refugia were recognized based on high haplotype diversities and the presence of endemic haplotypes: southwest Ireland, the northern Brittany-Hurd Deep area of the English Channel, and the northwest Iberian Peninsula. The Irish refugium was the source for a recolonization sweep involving a single haplotype via northern Scotland and throughout Scandinavia, whereas recolonization from the Brittany-Hurd Deep refugium was more limited, probably because of unsuitable soft-bottom habitat in the Bay of Biscay and along the Belgian and Dutch coasts. The Iberian populations reflect a remnant refugium at the present-day southern boundary of the species range. A generalized skyline plot suggested exponential population expansion beginning in the mid-Pleistocene with maximal growth during the Eems interglacial 128,000-67,000 years ago, implying that the last glacial maximum mainly shaped population distributions rather than demography.

  20. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real time and streaming data in variety of formats. These characteristics give rise to challenges in its modeling, computation, and processing. Hadoop MapReduce (MR) is a well known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.

  1. Liver fibrosis: stretched exponential model outperforms mono-exponential and bi-exponential models of diffusion-weighted MRI.

    PubMed

    Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin

    2018-07-01

    To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a biexponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristics (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measures, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), Dt (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and Dp showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than other parameters. However, Dp showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than Dp from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.
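    The three signal models compared in the study can be written down compactly; the sketch below fits all three to a synthetic diffusion-weighted decay (the b-values and tissue parameters are illustrative, not the patient data).

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 25, 50, 75, 100, 200, 500, 800, 1000], dtype=float)  # s/mm^2

def mono(b, adc):                 # S/S0 = exp(-b * ADC)
    return np.exp(-b * adc)

def biexp(b, f, dp, dt):          # IVIM-type: f*exp(-b*Dp) + (1 - f)*exp(-b*Dt)
    return f * np.exp(-b * dp) + (1.0 - f) * np.exp(-b * dt)

def stretched(b, ddc, alpha):     # S/S0 = exp(-(b * DDC)**alpha)
    return np.exp(-(b * ddc) ** alpha)

rng = np.random.default_rng(5)
signal = biexp(b, 0.15, 0.02, 0.0012) * (1.0 + rng.normal(0.0, 0.01, b.size))

p_mono, _ = curve_fit(mono, b, signal, p0=[1e-3])
p_bi, _ = curve_fit(biexp, b, signal, p0=[0.1, 0.01, 1e-3],
                    bounds=([0, 0, 0], [1, 1, 0.01]))
p_str, _ = curve_fit(stretched, b, signal, p0=[1e-3, 0.8],
                     bounds=([0, 0.1], [0.1, 1.0]))

print("ADC        :", p_mono)
print("f, Dp, Dt  :", p_bi)
print("DDC, alpha :", p_str)
```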

  2. The Comparison Study of Quadratic Infinite Beam Program on Optimization Intensity Modulated Radiation Therapy Treatment Planning (IMRTP) between Threshold and Exponential Scatter Method with CERR® In The Case of Lung Cancer

    NASA Astrophysics Data System (ADS)

    Hardiyanti, Y.; Haekal, M.; Waris, A.; Haryanto, F.

    2016-08-01

    This research compares the quadratic optimization program for Intensity Modulated Radiation Therapy Treatment Planning (IMRTP) with the Computational Environment for Radiotherapy Research (CERR) software. The number of beams used for the treatment planning was assumed to be 9 and 13. The case used an energy of 6 MV with a Source Skin Distance (SSD) of 100 cm from the target volume. Dose calculation used the Quadratic Infinite Beam (QIB) method from CERR. CERR was used to compare the Gauss Primary threshold method with the Gauss Primary exponential method. For the lung cancer case, threshold values of 0.01 and 0.004 were used. The resulting dose distributions were analyzed in the form of DVHs from CERR. When the dose calculation used the exponential method with 9 beams, the maximum dose distributions were obtained on the Planning Target Volume (PTV), Clinical Target Volume (CTV), Gross Tumor Volume (GTV), liver, and skin. When the dose calculation used the threshold method with 13 beams, the maximum dose distributions were obtained on the PTV, GTV, heart, and skin.

  3. A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times

    PubMed Central

    Heath, Tracy A.

    2012-01-01

    In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343

  4. A comparative study of mixed exponential and Weibull distributions in a stochastic model replicating a tropical rainfall process

    NASA Astrophysics Data System (ADS)

    Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah

    2014-11-01

    A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
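    A sketch of the distributional comparison described above: fit a two-component mixed exponential and a Weibull to positive intensities and compare them by AIC. The intensity sample is synthetic, not the Damansara basin data.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(6)
# Synthetic rain-cell intensities: a mix of light and heavy cells (mm/h), illustrative only.
x = np.concatenate([rng.exponential(2.0, 1500), rng.exponential(15.0, 500)])

def mixed_exp_nll(params):
    w, m1, m2 = params
    if not (0 < w < 1 and m1 > 0 and m2 > 0):
        return np.inf
    pdf = w * stats.expon.pdf(x, scale=m1) + (1 - w) * stats.expon.pdf(x, scale=m2)
    return -np.sum(np.log(pdf))

res = minimize(mixed_exp_nll, x0=[0.5, 1.0, 10.0], method="Nelder-Mead")
aic_mix = 2 * 3 + 2 * res.fun                       # 3 parameters

c, _, scale = stats.weibull_min.fit(x, floc=0.0)    # 2 free parameters (shape, scale)
aic_wbl = 2 * 2 - 2 * np.sum(stats.weibull_min.logpdf(x, c, loc=0.0, scale=scale))

print("mixed exponential AIC:", round(aic_mix, 1))
print("Weibull AIC          :", round(aic_wbl, 1))
```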

  5. Recurrence plot statistics and the effect of embedding

    NASA Astrophysics Data System (ADS)

    March, T. K.; Chapman, S. C.; Dendy, R. O.

    2005-01-01

    Recurrence plots provide a graphical representation of the recurrent patterns in a timeseries, the quantification of which is a relatively new field. Here we derive analytical expressions which relate the values of key statistics, notably determinism and entropy of line length distribution, to the correlation sum as a function of embedding dimension. These expressions are obtained by deriving the transformation which generates an embedded recurrence plot from an unembedded plot. A single unembedded recurrence plot thus provides the statistics of all possible embedded recurrence plots. If the correlation sum scales exponentially with embedding dimension, we show that these statistics are determined entirely by the exponent of the exponential. This explains the results of Iwanski and Bradley [J.S. Iwanski, E. Bradley, Recurrence plots of experimental data: to embed or not to embed? Chaos 8 (1998) 861-871] who found that certain recurrence plot statistics are apparently invariant to embedding dimension for certain low-dimensional systems. We also examine the relationship between the mutual information content of two timeseries and the common recurrent structure seen in their recurrence plots. This allows time-localized contributions to mutual information to be visualized. This technique is demonstrated using geomagnetic index data; we show that the AU and AL geomagnetic indices share half their information, and find the timescale on which mutual features appear.

  6. A nonstationary Poisson point process describes the sequence of action potentials over long time scales in lateral-superior-olive auditory neurons.

    PubMed

    Turcott, R G; Lowen, S B; Li, E; Johnson, D H; Tsuchitani, C; Teich, M C

    1994-01-01

    The behavior of lateral-superior-olive (LSO) auditory neurons over large time scales was investigated. Of particular interest was the determination as to whether LSO neurons exhibit the same type of fractal behavior as that observed in primary VIII-nerve auditory neurons. It has been suggested that this fractal behavior, apparent on long time scales, may play a role in optimally coding natural sounds. We found that a nonfractal model, the nonstationary dead-time-modified Poisson point process (DTMP), describes the LSO firing patterns well for time scales greater than a few tens of milliseconds, a region where the specific details of refractoriness are unimportant. The rate is given by the sum of two decaying exponential functions. The process is completely specified by the initial values and time constants of the two exponentials and by the dead-time relation. Specific measures of the firing patterns investigated were the interspike-interval (ISI) histogram, the Fano-factor time curve (FFC), and the serial count correlation coefficient (SCC) with the number of action potentials in successive counting times serving as the random variable. For all the data sets we examined, the latter portion of the recording was well approximated by a single exponential rate function since the initial exponential portion rapidly decreases to a negligible value. Analytical expressions available for the statistics of a DTMP with a single exponential rate function can therefore be used for this portion of the data. Good agreement was obtained among the analytical results, the computer simulation, and the experimental data on time scales where the details of refractoriness are insignificant.(ABSTRACT TRUNCATED AT 250 WORDS)
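    A sketch of a dead-time-modified Poisson process (DTMP) with a single exponentially decaying rate, the regime used for the latter portion of the recordings; the rate, time constant, and dead time below are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(7)
lam0, tau_r, dead, T = 200.0, 2.0, 0.001, 10.0   # peak rate (1/s), decay (s), dead time (s), duration (s)

# Nonhomogeneous Poisson process by thinning a homogeneous process of rate lam0.
n_cand = rng.poisson(lam0 * T)
cand = np.sort(rng.uniform(0.0, T, n_cand))
keep = rng.uniform(size=n_cand) < np.exp(-cand / tau_r)      # accept with prob lam(t)/lam0
events = cand[keep]

# Impose an absolute dead time: discard events closer than `dead` to the last kept one.
spikes, last = [], -np.inf
for ti in events:
    if ti - last >= dead:
        spikes.append(ti)
        last = ti
spikes = np.array(spikes)

isi = np.diff(spikes)
print("spike count:", spikes.size, "| mean ISI:", round(isi.mean() * 1000, 2), "ms")
```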

  7. Unfolding of Ubiquitin Studied by Picosecond Time-Resolved Fluorescence of the Tyrosine Residue

    PubMed Central

    Noronha, Melinda; Lima, João C.; Bastos, Margarida; Santos, Helena; Maçanita, António L.

    2004-01-01

    The photophysics of the single tyrosine in bovine ubiquitin (UBQ) was studied by picosecond time-resolved fluorescence spectroscopy, as a function of pH and along thermal and chemical unfolding, with the following results: First, at room temperature (25°C) and below pH 1.5, native UBQ shows single-exponential decays. From pH 2 to 7, triple-exponential decays were observed and the three decay times were attributed to the presence of tyrosine, a tyrosine-carboxylate hydrogen-bonded complex, and excited-state tyrosinate. Second, at pH 1.5, the water-exposed tyrosine of either thermally or chemically unfolded UBQ decays as a sum of two exponentials. The double-exponential decays were interpreted and analyzed in terms of excited-state intramolecular electron transfer from the phenol to the amide moiety, occurring in one of the three rotamers of tyrosine in UBQ. The values of the rate constants indicate the presence of different unfolded states and an increase in the mobility of the tyrosine residue during unfolding. Finally, from the pre-exponential coefficients of the fluorescence decays, the unfolding equilibrium constants (KU) were calculated, as a function of temperature or denaturant concentration. Despite the presence of different unfolded states, both thermal and chemical unfolding data of UBQ could be fitted to a two-state model. The thermodynamic parameters Tm = 54.6°C, ΔHTm = 56.5 kcal/mol, and ΔCp = 890 cal/mol/K were determined from the unfolding equilibrium constants calculated accordingly, and compared to values obtained by differential scanning calorimetry also under the assumption of a two-state transition, Tm = 57.0°C, ΔHm = 51.4 kcal/mol, and ΔCp = 730 cal/mol/K. PMID:15454455

  8. Statistics of opinion domains of the majority-vote model on a square lattice

    NASA Astrophysics Data System (ADS)

    Peres, Lucas R.; Fontanari, José F.

    2010-10-01

    The existence of juxtaposed regions of distinct cultures in spite of the fact that people’s beliefs have a tendency to become more similar to each other’s as the individuals interact repeatedly is a puzzling phenomenon in the social sciences. Here we study an extreme version of the frequency-dependent bias model of social influence in which an individual adopts the opinion shared by the majority of the members of its extended neighborhood, which includes the individual itself. This is a variant of the majority-vote model in which the individual retains its opinion in case there is a tie among the neighbors’ opinions. We assume that the individuals are fixed in the sites of a square lattice of linear size L and that they interact with their nearest neighbors only. Within a mean-field framework, we derive the equations of motion for the density of individuals adopting a particular opinion in the single-site and pair approximations. Although the single-site approximation predicts a single opinion domain that takes over the entire lattice, the pair approximation yields a qualitatively correct picture with the coexistence of different opinion domains and a strong dependence on the initial conditions. Extensive Monte Carlo simulations indicate the existence of a rich distribution of opinion domains or clusters, the number of which grows with L^2 whereas the size of the largest cluster grows with ln L^2. The analysis of the sizes of the opinion domains shows that they obey a power-law distribution for not too large sizes but that they are exponentially distributed in the limit of very large clusters. In addition, similarly to another well-known social influence model, Axelrod's model, we found that these opinion domains are unstable to the effect of a thermal-like noise.

  9. Network structures sustained by internal links and distributed lifetime of old nodes in stationary state of number of nodes

    NASA Astrophysics Data System (ADS)

    Ikeda, Nobutoshi

    2017-12-01

    In network models that take into account growth properties, deletion of old nodes has a serious impact on degree distributions, because old nodes tend to become hub nodes. In this study, we aim to provide a simple explanation for why hubs can exist even in conditions where the number of nodes is stationary due to the deletion of old nodes. We show that an exponential increase in the degree of nodes is a natural consequence of the balance between the deletion and addition of nodes as long as a preferential attachment mechanism holds. As a result, the largest degree is determined by the magnitude relationship between the time scale of the exponential growth of degrees and lifetime of old nodes. The degree distribution exhibits a power-law form ~ k^{-γ} with exponent γ = 1 when the lifetime of nodes is constant. However, various values of γ can be realized by introducing distributed lifetime of nodes.

  10. The Modelled Raindrop Size Distribution of Skudai, Peninsular Malaysia, Using Exponential and Lognormal Distributions

    PubMed Central

    Yakubu, Mahadi Lawan; Yusop, Zulkifli; Yusof, Fadhilah

    2014-01-01

    This paper presents the modelled raindrop size parameters in Skudai region of the Johor Bahru, western Malaysia. Presently, there is no model to forecast the characteristics of DSD in Malaysia, and this has an underpinning implication on wet weather pollution predictions. The climate of Skudai exhibits local variability in regional scale. This study established five different parametric expressions describing the rain rate of Skudai; these models are idiosyncratic to the climate of the region. Sophisticated equipment that converts sound to a relevant raindrop diameter is often too expensive and its cost sometimes overrides its attractiveness. In this study, a physical low-cost method was used to record the DSD of the study area. The Kaplan-Meier method was used to test the aptness of the data to exponential and lognormal distributions, which were subsequently used to formulate the parameterisation of the distributions. This research abrogates the concept of exclusive occurrence of convective storm in tropical regions and presented a new insight into their concurrence appearance. PMID:25126597

  11. Level crossings and excess times due to a superposition of uncorrelated exponential pulses

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-01-01

    A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
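    A sketch of the stochastic model itself: a superposition of uncorrelated one-sided exponential pulses with Poisson arrivals and exponentially distributed amplitudes, from which the two excess-time statistics are estimated empirically (the paper's analytic expressions are not reproduced; parameter values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(8)
tau_d, gam, T, dt = 1.0, 5.0, 500.0, 0.01     # pulse duration, intermittency, run length, step
rate = gam / tau_d                             # pulse arrival rate

t = np.arange(0.0, T, dt)
signal = np.zeros_like(t)
n_pulses = rng.poisson(rate * T)
arrivals = rng.uniform(0.0, T, n_pulses)
amps = rng.exponential(1.0, n_pulses)
cutoff = int(10 * tau_d / dt)                  # truncate each pulse after 10 durations

for s, a in zip(arrivals, amps):
    i0 = int(np.ceil(s / dt))
    i1 = min(t.size, i0 + cutoff)
    signal[i0:i1] += a * np.exp(-(t[i0:i1] - s) / tau_d)

threshold = signal.mean() + 2.0 * signal.std()
above = signal > threshold
up_crossings = np.count_nonzero(~above[:-1] & above[1:])
print("rate of up-crossings  :", round(up_crossings / T, 4), "per unit time")
print("fraction of time above:", round(float(above.mean()), 4))
```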

  12. Improved Results for Route Planning in Stochastic Transportation Networks

    NASA Technical Reports Server (NTRS)

    Boyan, Justin; Mitzenmacher, Michael

    2000-01-01

    In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest path techniques. In recent work, Datar and Ranade provide algorithms in the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent, geometrically distributed discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.

  13. Coherent forward broadening in cold atom clouds

    NASA Astrophysics Data System (ADS)

    Sutherland, R. T.; Robicheaux, F.

    2016-02-01

    It is shown that homogeneous line-broadening in a diffuse cold atom cloud is proportional to the resonant optical depth of the cloud. Furthermore, it is demonstrated how the strong directionality of the coherent interactions causes the cloud's spectra to depend strongly on its shape, even when the cloud is held at constant densities. These two numerical observations can be predicted analytically by extending the single-photon wave-function model. Lastly, elongating a cloud along the line of laser propagation causes the excitation probability distribution to deviate from the exponential decay predicted by the Beer-Lambert law to the extent where the atoms at the back of the cloud are more excited than the atoms at the front. These calculations are conducted at the low densities relevant to recent experiments.

  14. Estimating Distances from Parallaxes. II. Performance of Bayesian Distance Estimators on a Gaia-like Catalogue

    NASA Astrophysics Data System (ADS)

    Astraatmadja, Tri L.; Bailer-Jones, Coryn A. L.

    2016-12-01

    Estimating a distance by inverting a parallax is only valid in the absence of noise. As most stars in the Gaia catalog will have non-negligible fractional parallax errors, we must treat distance estimation as a constrained inference problem. Here we investigate the performance of various priors for estimating distances, using a simulated Gaia catalog of one billion stars. We use three minimalist, isotropic priors, as well as an anisotropic prior derived from the observability of stars in a Milky Way model. The two priors that assume a uniform distribution of stars—either in distance or in space density—give poor results: The root mean square fractional distance error, f_rms, grows far in excess of 100% once the fractional parallax error, f_true, is larger than 0.1. A prior assuming an exponentially decreasing space density with increasing distance performs well once its single parameter—the scale length—has been set to an appropriate value: f_rms is roughly equal to f_true for f_true < 0.4, yet does not increase further as f_true increases up to 1.0. The Milky Way prior performs well except toward the Galactic center, due to a mismatch with the (simulated) data. Such mismatches will be inevitable (and remain unknown) in real applications, and can produce large errors. We therefore suggest adopting the simpler exponentially decreasing space density prior, which is also less time-consuming to compute. Including Gaia photometry improves the distance estimation significantly for both the Milky Way and exponentially decreasing space density prior, yet doing so requires additional assumptions about the physical nature of stars.
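    A sketch of the exponentially decreasing space density prior in action: combine p(r) ∝ r^2 exp(-r/L) with a Gaussian parallax likelihood and read off the posterior mode and mean. The scale length, true distance, and parallax error below are illustrative values.

```python
import numpy as np

L = 1.35                                   # kpc, prior scale length (illustrative)
r_true = 2.0                               # kpc, true distance of a test star
sigma_w = 0.3 / r_true                     # 30% fractional parallax error (units of 1/r)
w_obs = 1.0 / r_true + np.random.default_rng(9).normal(0.0, sigma_w)

r = np.linspace(1e-3, 20.0, 20_000)        # distance grid (kpc)
dr = r[1] - r[0]
log_post = 2.0 * np.log(r) - r / L - 0.5 * ((w_obs - 1.0 / r) / sigma_w) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum() * dr                    # normalise on the grid

mode = r[np.argmax(post)]
mean = np.sum(r * post) * dr
print(f"1/parallax = {1.0 / w_obs:.2f} kpc, posterior mode = {mode:.2f} kpc, "
      f"posterior mean = {mean:.2f} kpc")
```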

  15. A calibration method for patient specific IMRT QA using a single therapy verification film

    PubMed Central

    Shukla, Arvind Kumar; Oinam, Arun S.; Kumar, Sanjeev; Sandhu, I.S.; Sharma, S.C.

    2013-01-01

    Aim: The aim of the present study is to develop and verify the single film calibration procedure used in intensity-modulated radiation therapy (IMRT) quality assurance. Background: Radiographic films have been regularly used in routine commissioning of treatment modalities and verification of the treatment planning system (TPS). Radiation dosimetry based on radiographic films has the ability to give absolute two-dimensional dose distributions and is preferred for IMRT quality assurance. Moreover, the single therapy verification film gives a quick and reliable method for IMRT verification. Materials and methods: A single extended dose rate (EDR 2) film was used to generate the sensitometric curve of film optical density versus radiation dose. The EDR 2 film was exposed with nine 6 cm × 6 cm fields of a 6 MV photon beam from a medical linear accelerator at 5 cm depth in a solid water phantom. The nine regions of the single film were exposed with radiation doses ranging from 10 to 362 cGy. The actual dose measurements inside the field regions were performed using a 0.6 cm3 ionization chamber. The exposed film was processed after irradiation using a VIDAR film scanner, and the optical density was noted for each region. Ten IMRT plans of head and neck carcinoma were used for verification using a dynamic IMRT technique, and evaluated using the gamma index method against the TPS-calculated dose distribution. Results: A sensitometric curve was generated using a single film exposed in nine field regions to allow quantitative dose verification of IMRT treatments. The radiation scatter factor was observed to decrease exponentially with the increase in the distance from the centre of each field region. The IMRT plans based on the calibration curve were verified using the gamma index method and found to be within the acceptance criteria. Conclusion: The single film method proved to be superior to the traditional calibration method and provides fast daily film calibration for highly accurate IMRT verification. PMID:24416558

  16. Research on the exponential growth effect on network topology: Theoretical and empirical analysis

    NASA Astrophysics Data System (ADS)

    Li, Shouwei; You, Zongjun

    An integrated circuit (IC) industry network has been built in the Yangtze River Delta with the constant expansion of the IC industry. The IC industry network grows exponentially with the establishment of new companies and the establishment of contacts with old firms. Based on preferential attachment and exponential growth, the paper presents analytical results in which the vertex degree of the scale-free network follows a power-law distribution p(k) ~ k^{-γ} with γ = 2β + 1, where the parameter β satisfies 0.5 ≤ β ≤ 1. We also find that the preferential attachment takes place in a dynamic local world whose size is in direct proportion to the size of the whole network. The paper further gives analytical results for non-preferential attachment with exponential growth on random networks. Computer simulations of the model illustrate these analytical results. Through investigations of the enterprises, the paper first presents the distribution of the IC industry and the composition of its industrial and service chains. The correlative networks of the industrial chain and service chain are then presented and analyzed, together with a correlative analysis of the whole IC industry. Based on complex network theory, an analysis and comparison of the industrial chain network and the service chain network in the Yangtze River Delta are provided.

  17. Parameter estimation for the 4-parameter Asymmetric Exponential Power distribution by the method of L-moments using R

    USGS Publications Warehouse

    Asquith, William H.

    2014-01-01

    The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.

  18. A mathematical model for generating bipartite graphs and its application to protein networks

    NASA Astrophysics Data System (ADS)

    Nacher, J. C.; Ochiai, T.; Hayashida, M.; Akutsu, T.

    2009-12-01

    Complex systems arise in many different contexts from large communication systems and transportation infrastructures to molecular biology. Most of these systems can be organized into networks composed of nodes and interacting edges. Here, we present a theoretical model that constructs bipartite networks with the particular feature that the degree distribution can be tuned depending on the probability rate of fundamental processes. We then use this model to investigate protein-domain networks. A protein can be composed of up to hundreds of domains. Each domain represents a conserved sequence segment with specific functional tasks. We analyze the distribution of domains in Homo sapiens and Arabidopsis thaliana organisms and the statistical analysis shows that while (a) the number of domain types shared by k proteins exhibits a power-law distribution, (b) the number of proteins composed of k types of domains decays as an exponential distribution. The proposed mathematical model generates bipartite graphs and predicts the emergence of this mixing of (a) power-law and (b) exponential distributions. Our theoretical and computational results show that this model requires (1) growth process and (2) copy mechanism.

  19. Analysis and modeling of optical crosstalk in InP-based Geiger-mode avalanche photodiode FPAs

    NASA Astrophysics Data System (ADS)

    Chau, Quan; Jiang, Xudong; Itzler, Mark A.; Entwistle, Mark; Piccione, Brian; Owens, Mark; Slomkowski, Krystyna

    2015-05-01

    Optical crosstalk is a major factor limiting the performance of Geiger-mode avalanche photodiode (GmAPD) focal plane arrays (FPAs). This is especially true for arrays with increased pixel density and broader spectral operation. We have performed extensive experimental and theoretical investigations on the crosstalk effects in InP-based GmAPD FPAs for both 1.06-μm and 1.55-μm applications. Mechanisms responsible for intrinsic dark counts are Poisson processes, and their inter-arrival time distribution is an exponential function. In FPAs, intrinsic dark counts and cross talk events coexist, and the inter-arrival time distribution deviates from purely exponential behavior. From both experimental data and computer simulations, we show the dependence of this deviation on the crosstalk probability. The spatial characteristics of crosstalk are also demonstrated. From the temporal and spatial distribution of crosstalk, an efficient algorithm to identify and quantify crosstalk is introduced.

  20. Preferential attachment and growth dynamics in complex systems

    NASA Astrophysics Data System (ADS)

    Yamasaki, Kazuko; Matia, Kaushik; Buldyrev, Sergey V.; Fu, Dongfeng; Pammolli, Fabio; Riccaboni, Massimo; Stanley, H. Eugene

    2006-09-01

    Complex systems can be characterized by classes of equivalency of their elements defined according to system specific rules. We propose a generalized preferential attachment model to describe the class size distribution. The model postulates preferential growth of the existing classes and the steady influx of new classes. According to the model, the distribution changes from a pure exponential form for zero influx of new classes to a power law with an exponential cut-off form when the influx of new classes is substantial. Predictions of the model are tested through the analysis of a unique industrial database, which covers both elementary units (products) and classes (markets, firms) in a given industry (pharmaceuticals), covering the entire size distribution. The model’s predictions are in good agreement with the data. The paper sheds light on the emergence of the exponent τ≈2 observed as a universal feature of many biological, social and economic problems.
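    A sketch of the generalized preferential-attachment mechanism described above: each new unit either founds a new class (the steady influx) or joins an existing class with probability proportional to its size. The influx probabilities below are illustrative; very small influx gives a nearly exponential class-size distribution, larger influx a power law with an exponential cut-off.

```python
import numpy as np

def simulate(b, n_steps, n_initial=10, seed=10):
    """Class sizes after n_steps unit additions; b = probability a unit founds a new class."""
    rng = np.random.default_rng(seed)
    class_sizes = [1] * n_initial
    unit_owner = list(range(n_initial))          # class index of every existing unit
    for _ in range(n_steps):
        if rng.uniform() < b:
            class_sizes.append(1)                # influx of a new class
            unit_owner.append(len(class_sizes) - 1)
        else:
            # choosing a uniformly random existing unit selects a class w.p. size/total
            c = unit_owner[rng.integers(len(unit_owner))]
            class_sizes[c] += 1
            unit_owner.append(c)
    return np.asarray(class_sizes)

for b in (0.001, 0.2):
    sizes = simulate(b, 50_000)
    print(f"b = {b}: classes = {sizes.size}, largest = {sizes.max()}, mean size = {sizes.mean():.1f}")
```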

  1. Distributed Consensus of Stochastic Delayed Multi-agent Systems Under Asynchronous Switching.

    PubMed

    Wu, Xiaotai; Tang, Yang; Cao, Jinde; Zhang, Wenbing

    2016-08-01

    In this paper, the distributed exponential consensus of stochastic delayed multi-agent systems with nonlinear dynamics is investigated under asynchronous switching. The asynchronous switching considered here accounts for the time needed to identify the active modes of the multi-agent system. Only after confirmation of the mode switching is received can the matched controller be applied, which means that the switching time of the matched controller in each node usually lags behind that of the system switching. In order to handle the coexistence of switched signals and stochastic disturbances, a comparison principle of stochastic switched delayed systems is first proved. By means of this extended comparison principle, several easily verified conditions for the existence of an asynchronously switched distributed controller are derived such that stochastic delayed multi-agent systems with asynchronous switching and nonlinear dynamics can achieve global exponential consensus. Two examples are given to illustrate the effectiveness of the proposed method.

  2. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.

  3. Exponential Family Functional data analysis via a low-rank model.

    PubMed

    Li, Gen; Huang, Jianhua Z; Shen, Haipeng

    2018-05-08

    In many applications, non-Gaussian data such as binary or count are observed over a continuous domain and there exists a smooth underlying structure for describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution, and the matrix of the canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only the standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application of the UK mortality study, where data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern.

  4. QMRA for Drinking Water: 1. Revisiting the Mathematical Structure of Single-Hit Dose-Response Models.

    PubMed

    Nilsen, Vegard; Wyller, John

    2016-01-01

    Dose-response models are essential to quantitative microbial risk assessment (QMRA), providing a link between levels of human exposure to pathogens and the probability of negative health outcomes. In drinking water studies, the class of semi-mechanistic models known as single-hit models, such as the exponential and the exact beta-Poisson, has seen widespread use. In this work, an attempt is made to carefully develop the general mathematical single-hit framework while explicitly accounting for variation in (1) host susceptibility and (2) pathogen infectivity. This allows a precise interpretation of the so-called single-hit probability and precise identification of a set of statistical independence assumptions that are sufficient to arrive at single-hit models. Further analysis of the model framework is facilitated by formulating the single-hit models compactly using probability generating and moment generating functions. Among the more practically relevant conclusions drawn are: (1) for any dose distribution, variation in host susceptibility always reduces the single-hit risk compared to a constant host susceptibility (assuming equal mean susceptibilities), (2) the model-consistent representation of complete host immunity is formally demonstrated to be a simple scaling of the response, (3) the model-consistent expression for the total risk from repeated exposures deviates (gives lower risk) from the conventional expression used in applications, and (4) a model-consistent expression for the mean per-exposure dose that produces the correct total risk from repeated exposures is developed.
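    The two single-hit forms named in the abstract are easy to state explicitly; the sketch below evaluates the exponential model and the widely used approximate beta-Poisson form for a few mean doses, with illustrative (not fitted) parameter values.

```python
import numpy as np

def p_exponential(dose, r):
    """Exponential model: Poisson-distributed dose, constant per-organism infection prob. r."""
    return 1.0 - np.exp(-r * dose)

def p_beta_poisson_approx(dose, alpha, beta):
    """Approximate beta-Poisson model (valid when beta >> 1 and beta >> alpha)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

doses = np.logspace(-1, 4, 6)                       # mean dose (organisms per exposure)
print("dose        P_exp(r=1e-3)   P_aBP(a=0.25, b=50)")
for d in doses:
    print(f"{d:9.1f}   {p_exponential(d, 1e-3):12.4f}   {p_beta_poisson_approx(d, 0.25, 50.0):12.4f}")
```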

  5. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  6. Pore‐Scale Hydrodynamics in a Progressively Bioclogged Three‐Dimensional Porous Medium: 3‐D Particle Tracking Experiments and Stochastic Transport Modeling

    PubMed Central

    Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.

    2018-01-01

    Abstract Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3‐D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean‐squared displacements, are found to be non‐Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered. PMID:29780184

  7. Pore-Scale Hydrodynamics in a Progressively Bioclogged Three-Dimensional Porous Medium: 3-D Particle Tracking Experiments and Stochastic Transport Modeling

    NASA Astrophysics Data System (ADS)

    Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.

    2018-03-01

    Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.
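    A minimal sketch, not the authors' analysis pipeline, of fitting a gamma distribution to a sample of Lagrangian velocity magnitudes and reading off its shape and scale parameters; the synthetic data below merely stand in for measured particle-tracking velocities.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Synthetic stand-in for measured Lagrangian velocity magnitudes (m/s).
        velocities = rng.gamma(shape=1.8, scale=2.0e-4, size=5000)

        # Fit a gamma distribution with the location fixed at zero.
        shape, loc, scale = stats.gamma.fit(velocities, floc=0.0)
        print(f"shape k = {shape:.2f}, scale theta = {scale:.2e} m/s")
        print(f"mean = {shape * scale:.2e}, variance = {shape * scale**2:.2e}")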

  8. Evolution and mass extinctions as lognormal stochastic processes

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2014-10-01

    In a series of recent papers and in a book, this author put forward a mathematical model capable of embracing the search for extra-terrestrial intelligence (SETI), Darwinian Evolution and Human History into a single, unified statistical picture, concisely called Evo-SETI. The relevant mathematical tools are: (1) Geometric Brownian motion (GBM), the stochastic process representing evolution as the stochastic increase of the number of species living on Earth over the last 3.5 billion years. This GBM is well known in the mathematics of finance (Black-Scholes models). Its main features are that its probability density function (pdf) is a lognormal pdf, and its mean value is either an increasing or, more rarely, a decreasing exponential function of time. (2) The probability distributions known as b-lognormals, i.e. lognormals starting at a certain positive instant b>0 rather than at the origin. These b-lognormals were then forced by us to have their peak value located on the exponential mean-value curve of the GBM (Peak-Locus theorem). In the framework of Darwinian Evolution, the resulting mathematical construction was shown to be what evolutionary biologists call Cladistics. (3) The (Shannon) entropy of such b-lognormals is then seen to represent the `degree of progress' reached by each living organism or by each big set of living organisms, like historic human civilizations. Having understood this fact, human history may then be cast into the language of b-lognormals that are more and more organized in time (i.e. having smaller and smaller entropy, or smaller and smaller `chaos'), and have their peaks on the increasing GBM exponential. This exponential is thus the `trend of progress' in human history. (4) All these results also match with SETI in that the statistical Drake equation (the generalization of the ordinary Drake equation to encompass statistics) leads to the lognormal distribution as the probability distribution for the number of extra-terrestrial civilizations existing in the Galaxy (as a consequence of the central limit theorem of statistics). (5) But the most striking new result is that the well-known `Molecular Clock of Evolution', namely the `constant rate of Evolution at the molecular level' as shown by Kimura's Neutral Theory of Molecular Evolution, identifies with the growth rate of the entropy of our Evo-SETI model, because they both grew linearly in time since the origin of life. (6) Furthermore, we apply our Evo-SETI model to lognormal stochastic processes other than GBMs. For instance, we provide two models for the mass extinctions that occurred in the past: (a) one based on GBMs and (b) the other based on a parabolic mean value capable of covering both the extinction and the subsequent recovery of life forms. (7) Finally, we show that the Markov & Korotayev (2007, 2008) model for Darwinian Evolution identifies with an Evo-SETI model for which the mean value of the underlying lognormal stochastic process is a cubic function of time. In conclusion: we have provided a new mathematical model capable of embracing molecular evolution, SETI and entropy into a simple set of statistical equations based upon b-lognormals and lognormal stochastic processes with arbitrary mean, of which the GBMs are the particular case of exponential growth.
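    An illustrative sketch (with arbitrary parameters, not the Evo-SETI values) of the first tool listed above: simulating geometric Brownian motion and checking that the simulated values at a fixed time are lognormally distributed with an exponentially growing mean.

        import numpy as np
        from scipy.stats import skew

        rng = np.random.default_rng(1)
        mu, sigma, n0 = 0.05, 0.20, 100.0      # drift, volatility, initial value (placeholders)
        t, n_steps, n_paths = 10.0, 500, 5000
        dt = t / n_steps

        # Exact GBM update: N(t+dt) = N(t) * exp((mu - sigma^2/2) dt + sigma dW)
        log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
        n_t = n0 * np.exp(log_increments.sum(axis=1))

        print("sample mean       :", round(float(n_t.mean()), 2))
        print("exponential mean  :", round(n0 * np.exp(mu * t), 2))      # theoretical mean value
        print("skewness of log N :", round(float(skew(np.log(n_t))), 3))  # ~0 if lognormal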

  9. Cross diffusion and exponential space dependent heat source impacts in radiated three-dimensional (3D) flow of Casson fluid by heated surface

    NASA Astrophysics Data System (ADS)

    Zaigham Zia, Q. M.; Ullah, Ikram; Waqas, M.; Alsaedi, A.; Hayat, T.

    2018-03-01

    This research elaborates on Soret-Dufour characteristics in mixed convective radiated Casson liquid flow past an exponentially heated surface. Novel features of an exponential space-dependent heat source are introduced. Appropriate variables are implemented to convert the partial differential framework into sets of ordinary differential expressions. A homotopic scheme is employed to construct analytic solutions. The behavior of various embedding variables on the velocity, temperature, and concentration distributions is plotted graphically and analyzed in detail. In addition, skin friction coefficients and heat and mass transfer rates are computed and interpreted. The results signify the pronounced characteristics of temperature corresponding to the convective and radiation variables. Concentration exhibits opposite responses to the Soret and Dufour variables.

  10. Global exponential stability and lag synchronization for delayed memristive fuzzy Cohen-Grossberg BAM neural networks with impulses.

    PubMed

    Yang, Wengui; Yu, Wenwu; Cao, Jinde; Alsaadi, Fuad E; Hayat, Tasawar

    2018-02-01

    This paper investigates the stability and lag synchronization for memristor-based fuzzy Cohen-Grossberg bidirectional associative memory (BAM) neural networks with mixed delays (asynchronous time delays and continuously distributed delays) and impulses. By applying the inequality analysis technique, homeomorphism theory and some suitable Lyapunov-Krasovskii functionals, some new sufficient conditions for the uniqueness and global exponential stability of equilibrium point are established. Furthermore, we obtain several sufficient criteria concerning globally exponential lag synchronization for the proposed system based on the framework of Filippov solution, differential inclusion theory and control theory. In addition, some examples with numerical simulations are given to illustrate the feasibility and validity of obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times

    NASA Astrophysics Data System (ADS)

    Fa, Kwok Sau

    2012-09-01

    In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power law and stretched exponential functions for the waiting time probability distribution function; this function can fit well the waiting time distribution for BUND futures traded at LIFFE in 1997.

  12. Strain, curvature, and twist measurements in digital holographic interferometry using pseudo-Wigner-Ville distribution based method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2009-09-15

    Measurement of the strain, curvature, and twist of a deformed object plays an important role in deformation analysis. Strain depends on the first-order displacement derivative, whereas curvature and twist are determined by second-order displacement derivatives. This paper proposes a pseudo-Wigner-Ville distribution based method for measurement of strain, curvature, and twist in digital holographic interferometry, where the object deformation or displacement is encoded as the interference phase. In the proposed method, the phase derivative is estimated by peak detection of the pseudo-Wigner-Ville distribution evaluated along each row/column of the reconstructed interference field. A complex exponential signal with unit amplitude and the phase derivative estimate as the argument is then generated, and the pseudo-Wigner-Ville distribution along each row/column of this signal is evaluated. The curvature is estimated by using a peak tracking strategy for the new distribution. For estimation of twist, the pseudo-Wigner-Ville distribution is evaluated along each column/row (i.e., in the alternate direction with respect to the previous one) for the generated complex exponential signal, and the corresponding peak detection gives the twist estimate.

  13. Two-state Markov-chain Poisson nature of individual cellphone call statistics

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Xie, Wen-Jie; Li, Ming-Xia; Zhou, Wei-Xing; Sornette, Didier

    2016-07-01

    Unfolding the burst patterns in human activities and social interactions is a very important issue especially for understanding the spreading of disease and information and the formation of groups and organizations. Here, we conduct an in-depth study of the temporal patterns of cellphone conversation activities of 73 339 anonymous cellphone users, whose inter-call durations are Weibull distributed. We find that the individual call events exhibit a pattern of bursts, that high activity periods are alternated with low activity periods. In both periods, the number of calls are exponentially distributed for individuals, but power-law distributed for the population. Together with the exponential distributions of inter-call durations within bursts and of the intervals between consecutive bursts, we demonstrate that the individual call activities are driven by two independent Poisson processes, which can be combined within a minimal model in terms of a two-state first-order Markov chain, giving significant fits for nearly half of the individuals. By measuring directly the distributions of call rates across the population, which exhibit power-law tails, we purport the existence of power-law distributions, via the ‘superposition of distributions’ mechanism. Our findings shed light on the origins of bursty patterns in other human activities.
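    A minimal simulation sketch (not the authors' fitting procedure) of the two-state Markov-modulated Poisson picture described above: an individual alternates between a high-activity and a low-activity state with exponentially distributed sojourn times, and places calls as a Poisson process whose rate depends on the current state. All rates are placeholders.

        import numpy as np

        rng = np.random.default_rng(2)
        rate_call = {"high": 2.0, "low": 0.1}      # calls per hour in each state
        rate_leave = {"high": 0.5, "low": 0.05}    # state-switching rates (1/hour)

        t, t_end, state = 0.0, 10_000.0, "low"
        call_times = []
        while t < t_end:
            sojourn = rng.exponential(1.0 / rate_leave[state])
            # Homogeneous Poisson calls during this sojourn: draw the count, then
            # place the calls uniformly within the sojourn interval.
            n_calls = rng.poisson(rate_call[state] * sojourn)
            call_times.extend(np.sort(t + sojourn * rng.random(n_calls)))
            t += sojourn
            state = "low" if state == "high" else "high"

        inter_call = np.diff(call_times)
        print("mean inter-call time (h):", round(float(inter_call.mean()), 3))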

  14. Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas

    2017-04-01

    Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
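    A small simulation sketch of the simplest model described above (fully mixed, not the graph-based version analyzed in the paper): at each step a randomly chosen agent gives one dollar to another randomly chosen agent, and the long-run distribution of money is close to exponential.

        import numpy as np

        rng = np.random.default_rng(3)
        n_agents, mean_money, n_steps = 1000, 10, 1_000_000
        money = np.full(n_agents, mean_money)

        pairs = rng.integers(n_agents, size=(n_steps, 2))
        for giver, receiver in pairs:
            if money[giver] > 0:          # an agent with no money cannot give
                money[giver] -= 1
                money[receiver] += 1

        # Boltzmann-Gibbs-like result: P(m) is approximately exp(-m / mean_money) / mean_money
        counts = np.bincount(money, minlength=15)
        print(counts[:15] / n_agents)     # empirical P(m) for m = 0..14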

  15. Heterogeneous Link Weight Promotes the Cooperation in Spatial Prisoner's Dilemma

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Qin; Xia, Cheng-Yi; Sun, Shi-Wen; Wang, Li; Wang, Huai-Bin; Wang, Juan

    Spatial structure has often been identified as a prominent mechanism that substantially promotes the cooperation level in the prisoner's dilemma game. In this paper we introduce a weighting mechanism into the spatial prisoner's dilemma game to explore cooperative behavior on the square lattice. Three types of weight distribution (exponential, power-law, and uniform) are considered, and the weight is assigned to the links between players. Through large-scale numerical simulations we find that, compared with the traditional spatial game, this mechanism can greatly enhance the frequency of cooperators. For most ranges of b, we find that the power-law distribution enables the highest promotion of cooperation and the uniform one leads to the lowest enhancement, whereas the exponential one often lies between them. This substantial improvement in cooperation can be attributed to the fact that the distributed link weights yield inhomogeneous interaction strengths among individuals, which can facilitate the formation of cooperative clusters that resist the defectors' invasion. In addition, the impact of the amplitude of the undulation of the weight distribution and of the noise strength on cooperation is also investigated for the three kinds of weight distribution. This research can aid in the further understanding of evolutionary cooperation in the biological and social sciences.

  16. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
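    The original program is in BASIC and is not reproduced here; as an illustrative analogue only, the sketch below draws the same seven kinds of variates with NumPy (the Pascal, or negative binomial, variate counts failures before a fixed number of successes). All distribution parameters are placeholders.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 5  # number of variates per distribution

        print("uniform    :", rng.uniform(0.0, 1.0, n))
        print("exponential:", rng.exponential(scale=2.0, size=n))
        print("normal     :", rng.normal(loc=0.0, scale=1.0, size=n))
        print("binomial   :", rng.binomial(n=10, p=0.3, size=n))
        print("Poisson    :", rng.poisson(lam=4.0, size=n))
        print("Pascal     :", rng.negative_binomial(n=3, p=0.3, size=n))
        print("triangular :", rng.triangular(left=0.0, mode=2.0, right=5.0, size=n))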

  17. TracerLPM (Version 1): An Excel® workbook for interpreting groundwater age distributions from environmental tracer data

    USGS Publications Warehouse

    Jurgens, Bryant C.; Böhlke, J.K.; Eberts, Sandra M.

    2012-01-01

    TracerLPM is an interactive Excel® (2007 or later) workbook program for evaluating groundwater age distributions from environmental tracer data by using lumped parameter models (LPMs). Lumped parameter models are mathematical models of transport based on simplified aquifer geometry and flow configurations that account for effects of hydrodynamic dispersion or mixing within the aquifer, well bore, or discharge area. Five primary LPMs are included in the workbook: piston-flow model (PFM), exponential mixing model (EMM), exponential piston-flow model (EPM), partial exponential model (PEM), and dispersion model (DM). Binary mixing models (BMM) can be created by combining primary LPMs in various combinations. Travel time through the unsaturated zone can be included as an additional parameter. TracerLPM also allows users to enter age distributions determined from other methods, such as particle tracking results from numerical groundwater-flow models or from other LPMs not included in this program. Tracers of both young groundwater (anthropogenic atmospheric gases and isotopic substances indicating post-1940s recharge) and much older groundwater (carbon-14 and helium-4) can be interpreted simultaneously so that estimates of the groundwater age distribution for samples with a wide range of ages can be constrained. TracerLPM is organized to permit a comprehensive interpretive approach consisting of hydrogeologic conceptualization, visual examination of data and models, and best-fit parameter estimation. Groundwater age distributions can be evaluated by comparing measured and modeled tracer concentrations in two ways: (1) multiple tracers analyzed simultaneously can be evaluated against each other for concordance with modeled concentrations (tracer-tracer application) or (2) tracer time-series data can be evaluated for concordance with modeled trends (tracer-time application). Groundwater-age estimates can also be obtained for samples with a single tracer measurement at one point in time; however, prior knowledge of an appropriate LPM is required because the mean age is often non-unique. LPM output concentrations depend on model parameters and sample date. All of the LPMs have a parameter for mean age. The EPM, PEM, and DM have an additional parameter that characterizes the degree of age mixing in the sample. BMMs have a parameter for the fraction of the first component in the mixture. An LPM, together with its parameter values, provides a description of the age distribution or the fractional contribution of water for every age of recharge contained within a sample. For the PFM, the age distribution is a unit pulse at one distinct age. For the other LPMs, the age distribution can be much broader and span decades, centuries, millennia, or more. For a sample with a mixture of groundwater ages, the reported interpretation of tracer data includes the LPM name, the mean age, and the values of any other independent model parameters. TracerLPM also can be used for simulating the responses of wells, springs, streams, or other groundwater discharge receptors to nonpoint-source contaminants that are introduced in recharge, such as nitrate. This is done by combining an LPM or user-defined age distribution with information on contaminant loading at the water table. 
Information on historic contaminant loading can be used to help evaluate a model's ability to match real world conditions and understand observed contaminant trends, while information on future contaminant loading scenarios can be used to forecast potential contaminant trends.
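    A minimal numerical sketch, not TracerLPM itself, of the core idea behind the exponential mixing model (EMM) mentioned above: the modeled tracer concentration in discharge is the tracer input history convolved with an exponential age distribution, optionally weighted by radioactive decay. The input history and parameter values below are made up for illustration.

        import numpy as np

        def emm_concentration(c_in, dt, mean_age, decay_const=0.0):
            """Convolve a tracer input history c_in (oldest value first, uniform
            spacing dt) with an exponential age distribution g(tau) = exp(-tau/T)/T.
            Ages older than the input record are ignored in this sketch."""
            n = len(c_in)
            tau = np.arange(n) * dt                     # age of each input slice
            g = np.exp(-tau / mean_age) / mean_age      # exponential age pdf
            weights = g * np.exp(-decay_const * tau) * dt
            # The sample date is the last entry; age tau pairs with c_in reversed.
            return float(np.sum(weights * c_in[::-1]))

        years = np.arange(1950, 2013)
        # Fabricated atmospheric input curve, for illustration only.
        c_in = np.interp(years, [1950, 1965, 1975, 2012], [0.0, 60.0, 180.0, 160.0])
        print(emm_concentration(c_in, dt=1.0, mean_age=25.0))   # modeled concentration at the 2012 sample date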

  18. Application of a deconvolution method for identifying burst amplitudes and arrival times in Alcator C-Mod far SOL plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, Audun; Garcia, Odd Erik; Kube, Ralph; Labombard, Brian; Terry, Jim

    2017-10-01

    In the far scrape-off layer (SOL), radial motion of filamentary structures leads to excess transport of particles and heat. Amplitudes and arrival times of these filaments have previously been studied by conditional averaging in single-point measurements from Langmuir Probes and Gas Puff Imaging (GPI). Conditional averaging can be problematic: the cutoff for large amplitudes is mostly chosen by convention; the conditional windows used may influence the arrival time distribution; and the amplitudes cannot be separated from a background. Previous work has shown that SOL fluctuations are well described by a stochastic model consisting of a super-position of pulses with fixed shape and randomly distributed amplitudes and arrival times. The model can be formulated as a pulse shape convolved with a train of delta pulses. By choosing a pulse shape consistent with the power spectrum of the fluctuation time series, Richardson-Lucy deconvolution can be used to recover the underlying amplitudes and arrival times of the delta pulses. We apply this technique to both L and H-mode GPI data from the Alcator C-Mod tokamak. The pulse arrival times are shown to be uncorrelated and uniformly distributed, consistent with a Poisson process, and the amplitude distribution has an exponential tail.
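    A schematic sketch (with a synthetic signal, not GPI data) of Richardson-Lucy deconvolution as described above: given a known pulse shape, the iteration recovers a train of delta-like pulses whose positions and weights correspond to arrival times and amplitudes.

        import numpy as np

        def richardson_lucy(signal, kernel, n_iter=300, eps=1e-12):
            """Richardson-Lucy deconvolution of a nonnegative 1-D signal."""
            estimate = np.full_like(signal, signal.mean())
            kernel_flipped = kernel[::-1]
            for _ in range(n_iter):
                blurred = np.convolve(estimate, kernel, mode="same")
                estimate = estimate * np.convolve(signal / (blurred + eps), kernel_flipped, mode="same")
            return estimate

        # Synthetic example: two-sided exponential pulse shape and a few delta pulses.
        kernel = np.exp(-np.abs(np.arange(-30, 31)) / 5.0)
        kernel /= kernel.sum()
        truth = np.zeros(200)
        truth[[40, 90, 95, 160]] = [3.0, 1.0, 2.0, 1.5]      # amplitudes at the arrival times
        signal = np.convolve(truth, kernel, mode="same")
        recovered = richardson_lucy(signal, kernel)
        print(sorted(np.argsort(recovered)[-4:]))            # should be close to [40, 90, 95, 160]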

  19. Effect of reaction-step-size noise on the switching dynamics of stochastic populations

    NASA Astrophysics Data System (ADS)

    Be'er, Shay; Heller-Algazi, Metar; Assaf, Michael

    2016-05-01

    In genetic circuits, when the messenger RNA lifetime is short compared to the cell cycle, proteins are produced in geometrically distributed bursts, which greatly affects the cellular switching dynamics between different metastable phenotypic states. Motivated by this scenario, we study a general problem of switching or escape in stochastic populations, where influx of particles occurs in groups or bursts, sampled from an arbitrary distribution. The fact that the step size of the influx reaction is a priori unknown and, in general, may fluctuate in time with a given correlation time and statistics, introduces an additional nondemographic reaction-step-size noise into the system. Employing the probability-generating function technique in conjunction with Hamiltonian formulation, we are able to map the problem in the leading order onto solving a stationary Hamilton-Jacobi equation. We show that compared to the "usual case" of single-step influx, bursty influx exponentially decreases the population's mean escape time from its long-lived metastable state. In particular, close to bifurcation we find a simple analytical expression for the mean escape time which solely depends on the mean and variance of the burst-size distribution. Our results are demonstrated on several realistic distributions and compare well with numerical Monte Carlo simulations.

  20. Coupling DAEM and CFD for simulating biomass fast pyrolysis in fluidized beds

    DOE PAGES

    Xiong, Qingang; Zhang, Jingchao; Wiggins, Gavin; ...

    2015-12-03

    We report results from computational simulations of an experimental, lab-scale bubbling bed biomass pyrolysis reactor that include a distributed activation energy model (DAEM) for the kinetics. In this study, we utilized multiphase computational fluid dynamics (CFD) to account for the turbulent hydrodynamics, and this was combined with the DAEM kinetics in a multi-component, multi-step reaction network. Our results indicate that it is possible to numerically integrate the coupled CFD–DAEM system without significantly increasing computational overhead. It is also clear, however, that reactor operating conditions, reaction kinetics, and multiphase flow dynamics all have major impacts on the pyrolysis products exiting the reactor. We find that, with the same pre-exponential factors and mean activation energies, inclusion of distributed activation energies in the kinetics can shift the predicted average value of the exit vapor-phase tar flux and its statistical distribution, compared to single-valued activation-energy kinetics. Perhaps the most interesting observed trend is that increasing the diversity of the DAEM activation energies appears to increase the mean tar yield, all else being equal. As a result, these findings imply that accurate resolution of the reaction activation energy distributions will be important for optimizing biomass pyrolysis processes.
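    A bare-bones numerical sketch of the distributed activation energy idea referred to above (isothermal, single component, Gaussian distribution of activation energies); it is not the coupled CFD-DAEM model, and all numbers are placeholders.

        import numpy as np

        R = 8.314              # J/(mol K)
        k0 = 1.0e13            # pre-exponential factor (1/s), placeholder
        e_mean, e_std = 2.0e5, 1.5e4   # mean and spread of activation energy (J/mol)
        temp = 773.0           # temperature (K), placeholder

        def unconverted_fraction(t, n_quad=400):
            """1 - X(t) = integral over E of f(E) * exp(-k0 t exp(-E/(R T))) dE."""
            e = np.linspace(e_mean - 5 * e_std, e_mean + 5 * e_std, n_quad)
            f_e = np.exp(-0.5 * ((e - e_mean) / e_std) ** 2) / (e_std * np.sqrt(2 * np.pi))
            survival = np.exp(-k0 * t * np.exp(-e / (R * temp)))
            return np.trapz(f_e * survival, e)

        for t in (0.1, 1.0, 10.0):
            print(f"t = {t:5.1f} s  unconverted fraction = {unconverted_fraction(t):.3f}")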

  1. System Lifetimes, The Memoryless Property, Euler's Constant, and Pi

    ERIC Educational Resources Information Center

    Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon

    2013-01-01

    A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…

  2. Effects of communication burstiness on consensus formation and tipping points in social dynamics

    NASA Astrophysics Data System (ADS)

    Doyle, C.; Szymanski, B. K.; Korniss, G.

    2017-06-01

    Current models for opinion dynamics typically utilize a Poisson process for speaker selection, making the waiting time between events exponentially distributed. Human interaction tends to be bursty though, having higher probabilities of either extremely short waiting times or long periods of silence. To quantify the burstiness effects on the dynamics of social models, we place in competition two groups exhibiting different speakers' waiting-time distributions. These competitions are implemented in the binary naming game and show that the relevant aspect of the waiting-time distribution is the density of the head rather than that of the tail. We show that even with identical mean waiting times, a group with a higher density of short waiting times is favored in competition over the other group. This effect remains in the presence of nodes holding a single opinion that never changes, as the fraction of such committed individuals necessary for achieving consensus decreases dramatically when they have a higher head density than the holders of the competing opinion. Finally, to quantify differences in burstiness, we introduce the expected number of small-time activations and use it to characterize the early-time regime of the system.

  3. Comparison of the open-close kinetics of the cloned inward rectifier K+ channel IRK1 and its point mutant (Q140E) in the pore region.

    PubMed

    Guo, L; Kubo, Y

    1998-01-01

    To test whether a single amino-acid residue at the center of the pore region can dictate the differences in open-close kinetics at steady state at hyperpolarized potentials among members of the inward K+ channel family, the Q140E mutant of the inward rectifier K+ channel (IRK1) was made and its gating properties were compared with those of IRK1 wild type (Wt) in Xenopus oocytes. Distinct differences were observed only at the single channel level. The open time constant of the mutant, tau(o)(Q140E), at -80 mV was over ten-fold shorter than that of Wt, tau(o)(Wt); in Wt, the closed time distribution was fitted with a sum of two exponentials (c-slow and c-fast), whereas it could be fitted with three exponentials (c-slow, c-fast, and an additional c-extrafast) in Q140E. However, the time constant of burst duration of the mutant, tau(b)(Q140E), was close to tau(o)(Wt), and both showed a similarly strong voltage dependence and a high sensitivity to extracellular pH in the absence of extracellular Mg2+, indicating that tau(b)(Q140E) is closely related to tau(o)(Wt). These results demonstrate that Q140E shortened the channel openings by acquiring an extra-fast closing state. From the analysis of the effects of cations on both Wt and Q140E, it is suggested that the transition from the open state to this extra-fast closing state is not due to block by H+ or Mg2+ but possibly by extracellular K+.

  4. Parameter estimation and order selection for an empirical model of VO2 on-kinetics.

    PubMed

    Alata, O; Bernard, O

    2007-04-27

    In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypothesis about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we evaluate, on simulated data, the performance of simulated annealing for estimating model parameters and of information criteria for selecting the order. The simulated data are generated with both single-exponential and double-exponential models and corrupted by additive white Gaussian noise. The performance is reported at various signal-to-noise ratios (SNR). For parameter estimation, results show that the confidence of the estimated parameters improves with increasing SNR of the fitted response. For model selection, results show that information criteria are appropriate statistical criteria for selecting the number of exponentials.
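    An illustrative sketch (not the authors' simulated-annealing procedure) of the model-selection step: fit mono- and bi-exponential on-kinetics models to a noisy synthetic response with least squares and compare them with a least-squares form of AIC. All parameter values are placeholders.

        import numpy as np
        from scipy.optimize import curve_fit

        def mono(t, base, a1, td1, tau1):
            # Offset plus one delayed exponential (zero before the delay td1).
            return base + a1 * (1 - np.exp(-(t - td1) / tau1)) * (t >= td1)

        def bi(t, base, a1, td1, tau1, a2, td2, tau2):
            # Offset plus two delayed exponentials.
            return mono(t, base, a1, td1, tau1) + a2 * (1 - np.exp(-(t - td2) / tau2)) * (t >= td2)

        def aic(y, y_hat, n_params):
            rss = np.sum((y - y_hat) ** 2)
            return y.size * np.log(rss / y.size) + 2 * n_params

        rng = np.random.default_rng(6)
        t = np.arange(0, 360, 1.0)                                   # seconds
        truth = bi(t, 0.5, 2.0, 15.0, 25.0, 0.8, 120.0, 90.0)        # double-exponential ground truth
        y = truth + rng.normal(0.0, 0.08, t.size)

        p1, _ = curve_fit(mono, t, y, p0=[0.5, 2.5, 10.0, 30.0], maxfev=20000)
        p2, _ = curve_fit(bi, t, y, p0=[0.5, 2.0, 10.0, 30.0, 0.5, 100.0, 80.0], maxfev=20000)
        print("AIC mono:", round(aic(y, mono(t, *p1), 4), 1), " AIC bi:", round(aic(y, bi(t, *p2), 7), 1))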

  5. OSSOS. II. A SHARP TRANSITION IN THE ABSOLUTE MAGNITUDE DISTRIBUTION OF THE KUIPER BELT’S SCATTERING POPULATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shankman, C.; Kavelaars, JJ.; Bannister, M. T.

    We measure the absolute magnitude (H) distribution, dN(H) ∝ 10^(αH), of the scattering Trans-Neptunian Objects (TNOs) as a proxy for their size-frequency distribution. We show that the H-distribution of the scattering TNOs is not consistent with a single-slope distribution, but must transition around Hg ∼ 9 to either a knee with a shallow slope or to a divot, which is a differential drop followed by a second exponential distribution. Our analysis is based on a sample of 22 scattering TNOs drawn from three different TNO surveys (the Canada–France Ecliptic Plane Survey, Alexandersen et al., and the Outer Solar System Origins Survey, all of which provide well-characterized detection thresholds), combined with a cosmogonic model for the formation of the scattering TNO population. Our measured absolute magnitude distribution result is independent of the choice of cosmogonic model. Based on our analysis, we estimate that the number of scattering TNOs is (2.4–8.3) × 10^5 for Hr < 12. A divot H-distribution is seen in a variety of formation scenarios and may explain several puzzles in Kuiper Belt science. We find that a divot H-distribution simultaneously explains the observed scattering TNO, Neptune Trojan, Plutino, and Centaur H-distributions while also predicting a large enough scattering TNO population to act as the sole supply of the Jupiter-Family Comets.

  6. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    PubMed

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.

  7. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    PubMed Central

    Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    Background Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern. PMID:27761346

  8. CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was developed in 1988.
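    A compact sketch of an alternative to the rescaling trick described above: computing the cumulative Poisson probability in log space, which avoids the overflow/underflow issues without the extra exponential factor. This is an analogue for illustration, not the CUMPOIS code.

        import numpy as np
        from scipy.special import gammaln, logsumexp

        def cumulative_poisson(n, lam):
            """P(X <= n) for X ~ Poisson(lam), accumulated in log space."""
            i = np.arange(n + 1)
            log_terms = -lam + i * np.log(lam) - gammaln(i + 1)
            return float(np.exp(logsumexp(log_terms)))

        print(cumulative_poisson(10, 4.0))      # small case, ~0.997
        print(cumulative_poisson(900, 1000.0))  # large lambda and n, no overflow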

  9. Calculating Formulas of Coefficient and Mean Neutron Exposure in the Exponential Expression of Neutron Exposure Distribution

    NASA Astrophysics Data System (ADS)

    Zhang, F. H.; Zhou, G. D.; Ma, K.; Ma, W. J.; Cui, W. Y.; Zhang, B.

    2015-11-01

    Present studies have shown that, in the main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the distributions of neutron exposures in the nucleosynthesis regions can all be expressed, over the effective range of values, by an exponential function ρ_AGB(τ) = (C/τ0) exp(-τ/τ0). However, the specific expressions for the proportionality coefficient C and the mean neutron exposure τ0 in this formula for different models are not completely determined in the related literature. By dissecting the basic method for solving the exponential distribution of neutron exposures, and systematically combing through the solution procedure for the exposure distribution in different stellar models, general calculating formulas, together with their auxiliary equations, for C and τ0 are derived. Given the discrete distribution of neutron exposures P_k, i.e. the mass fraction of the material that has been exposed to neutrons k times (k = 0, 1, 2, ...) when the final distribution is reached, relative to the material of the He intershell, one obtains C = -P_1/ln R and τ0 = -Δτ/ln R. Here, R is the probability that material experiences neutron irradiation twice in succession in the He intershell. For the convective nucleosynthesis model (including the Ulrich model and the 13C-pocket convective burning model), R is simply the overlap factor r, namely the mass fraction of the material that undergoes two successive thermal pulses in the He intershell. For the 13C-pocket radiative burning model, R = Σ_{k≥1} P_k. This set of formulas gives the correspondence between C or τ0 and the model parameters. The results of this study effectively solve the problem of analytically calculating the distribution of neutron exposures in the low-mass AGB star s-process nucleosynthesis model with 13C-pocket radiative burning.
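    A tiny numerical sketch of the closing formulas above, with made-up inputs: given a discrete exposure distribution P_k, the exposure gained per pulse Δτ, and the repeat-irradiation probability R, the coefficient and mean exposure follow as C = -P_1/ln R and τ0 = -Δτ/ln R.

        import numpy as np

        # Placeholder inputs, for illustration only.
        p = np.array([0.40, 0.24, 0.144, 0.086, 0.052])  # P_0, P_1, P_2, ... (truncated)
        delta_tau = 0.15                                  # exposure gained per pulse (mbarn^-1)

        r = p[1:].sum()                 # radiative 13C-pocket case: R = sum of P_k for k >= 1
        c = -p[1] / np.log(r)           # C    = -P_1 / ln R
        tau0 = -delta_tau / np.log(r)   # tau0 = -delta_tau / ln R
        print(f"C = {c:.3f}, tau0 = {tau0:.3f} mbarn^-1")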

  10. Pharmacokinetics of lidocaine and bupivacaine following subarachnoid administration in surgical patients: simultaneous investigation of absorption and disposition kinetics using stable isotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burm, A.G.; Van Kleef, J.W.; Vermeulen, N.P.

    1988-10-01

    The pharmacokinetics of lidocaine and bupivacaine following subarachnoid administration were studied in 12 surgical patients using a stable isotope method. After subarachnoid administration of the agent to be evaluated, a deuterium-labelled analogue was administered intravenously. Blood samples were collected for 24 h. Plasma concentrations of the unlabelled and the deuterium-labelled local anesthetics were determined using a combination of capillary gas chromatography and mass fragmentography. Bi-exponential functions were fitted to the plasma concentration-time data of the deuterium-labelled local anesthetics. The progression of the absorption was evaluated using deconvolution. Mono- and bi-exponential functions were then fitted to the fraction absorbed versus time data. The distribution and elimination half-lives of the deuterium-labelled analogues were 25 +/- 13 min (mean +/- SD) and 121 +/- 31 min for lidocaine and 19 +/- 10 min and 131 +/- 33 min for bupivacaine. The volumes of the central compartment and steady-state volumes of distribution were: lidocaine 57 +/- 10 l and 105 +/- 25 l, bupivacaine 25 +/- 6 l and 63 +/- 22 l. Total plasma clearance values averaged 0.97 +/- 0.21 l/min for lidocaine and 0.56 +/- 0.14 l/min for bupivacaine. The absorption of lidocaine could be described by a single first order absorption process, characterized by a half-life of 71 +/- 17 min in five out of six patients. The absorption of bupivacaine could be described adequately assuming two parallel first order absorption processes in all six patients. The half-lives, characterizing the fast and slow absorption processes of bupivacaine, were 50 +/- 27 min and 408 +/- 275 min, respectively. The fractions of the dose, absorbed in the fast and slow processes, were 0.35 +/- 0.17 and 0.61 +/- 0.16, respectively.

  11. Crowding Induces Complex Ergodic Diffusion and Dynamic Elongation of Large DNA Molecules

    PubMed Central

    Chapman, Cole D.; Gorczyca, Stephanie; Robertson-Anderson, Rae M.

    2015-01-01

    Despite the ubiquity of molecular crowding in living cells, the effects of crowding on the dynamics of genome-sized DNA are poorly understood. Here, we track single, fluorescent-labeled large DNA molecules (11, 115 kbp) diffusing in dextran solutions that mimic intracellular crowding conditions (0–40%), and determine the effects of crowding on both DNA mobility and conformation. Both DNAs exhibit ergodic Brownian motion and comparable mobility reduction in all conditions; however, crowder size (10 vs. 500 kDa) plays a critical role in the underlying diffusive mechanisms and dependence on crowder concentration. Surprisingly, in 10-kDa dextran, crowder influence saturates at ∼20% with an ∼5× drop in DNA diffusion, in stark contrast to exponentially retarded mobility, coupled to weak anomalous subdiffusion, with increasing concentration of 500-kDa dextran. Both DNAs elongate into lower-entropy states (compared to random coil conformations) when crowded, with elongation states that are gamma distributed and fluctuate in time. However, the broadness of the distribution of states and the time-dependence and length scale of elongation length fluctuations depend on both DNA and crowder size with concentration having surprisingly little impact. Results collectively show that mobility reduction and coil elongation of large crowded DNAs are due to a complex interplay between entropic effects and crowder mobility. Although elongation and initial mobility retardation are driven by depletion interactions, subdiffusive dynamics, and the drastic exponential slowing of DNA, up to ∼300×, arise from the reduced mobility of larger crowders. Our results elucidate the highly important and widely debated effects of cellular crowding on genome-sized DNA. PMID:25762333

  12. Inference and analysis of xenon outflow curves under multi-pulse injection in two-dimensional chromatography.

    PubMed

    Shu-Jiang, Liu; Zhan-Ying, Chen; Yin-Zhong, Chang; Shi-Lian, Wang; Qi, Li; Yuan-Qing, Fan

    2013-10-11

    Multidimensional gas chromatography is widely applied to atmospheric xenon monitoring for the Comprehensive Nuclear-Test-Ban Treaty (CTBT). To improve the capability for xenon sampling from the atmosphere, sampling techniques have been investigated in detail. The sampling techniques are designed on the basis of xenon outflow curves, which are influenced by many factors, and the injection condition is one of the key factors that can influence the xenon outflow curves. In this paper, the xenon outflow curve for single-pulse injection in two-dimensional gas chromatography has been measured and fitted with an exponentially modified Gaussian distribution. An inference formula for the xenon outflow curve under six-pulse injection is derived, and this inferred formula is tested against the fitted formula for the xenon outflow curve. As a result, the curves for both the one-pulse and six-pulse injections obey the exponentially modified Gaussian distribution when the activated carbon column temperature is 26°C and the carrier gas flow rate is 35.6 mL min(-1). The retention time of the xenon peak for one-pulse injection is 215 min, and the peak width is 138 min. For the six-pulse injection, however, the retention time is delayed to 255 min, and the peak width broadens to 222 min. According to the inferred formula for the xenon outflow curve under six-pulse injection, the inferred retention time is 243 min, with a relative deviation of 4.7%, and the inferred peak width is 225 min, with a relative deviation of 1.3%. Copyright © 2013 Elsevier B.V. All rights reserved.
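    A small sketch, using synthetic data rather than the measured xenon outflow curves, of fitting an exponentially modified Gaussian shape to a chromatographic peak; scipy's exponnorm parameterization (K = tau/sigma) is used here as a convenient stand-in for the EMG form, and all values are placeholders.

        import numpy as np
        from scipy.stats import exponnorm
        from scipy.optimize import curve_fit

        def emg_peak(t, area, k, mu, sigma):
            """Exponentially modified Gaussian peak with total area 'area'."""
            return area * exponnorm.pdf(t, k, loc=mu, scale=sigma)

        rng = np.random.default_rng(7)
        t = np.arange(0.0, 600.0, 1.0)                               # minutes
        truth = emg_peak(t, area=100.0, k=3.0, mu=180.0, sigma=25.0)
        y = truth + rng.normal(0.0, 0.005, t.size)

        popt, _ = curve_fit(emg_peak, t, y, p0=[80.0, 2.0, 170.0, 20.0])
        print("fitted area, K, mu, sigma:", np.round(popt, 2))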

  13. Nearly suppressed photoluminescence blinking of small-sized, blue-green-orange-red emitting single CdSe-based core/gradient alloy shell/shell quantum dots: correlation between truncation time and photoluminescence quantum yield.

    PubMed

    Roy, Debjit; Mandal, Saptarshi; De, Chayan K; Kumar, Kaushalendra; Mandal, Prasun K

    2018-04-18

    CdSe-based core/gradient alloy shell/shell semiconductor quantum dots (CGASS QDs) have been shown to be optically quite superior compared to core-shell QDs. However, very little is known about CGASS QDs at the single particle level. Photoluminescence blinking dynamics of four differently emitting (blue (λem = 510), green (λem = 532), orange (λem = 591), and red (λem = 619)) single CGASS QDs having average sizes <∼7 nm have been probed in our home-built total internal reflection fluorescence (TIRF) microscope. All four samples possess an average ON-fraction of 0.70-0.85, which hints towards nearly suppressed PL blinking in these gradiently alloyed systems. Suppression of blinking has been so far achieved with QDs having sizes greater than 10 nm and mostly emitting in the red region (λem > 600 nm). In this manuscript, we report nearly suppressed PL blinking behaviour of CGASS QDs with average sizes <∼7 nm and emitting in the entire range of the visible spectrum, i.e. from blue to green to orange to red. The probability density distribution of both ON- and OFF-event durations for all of these CGASS QDs could be fitted well with a modified inverse truncated power law with an additional exponential model equation. It has been found that unlike most of the literature reports, the power law exponent for OFF-event durations is greater than the power law exponent for ON-event durations for all four samples. This suggests that relatively large ON-event durations are interrupted by comparatively small OFF-event durations. This in turn is indicative of a suppressed non-radiative Auger recombination process for these CGASS systems. However, in these four different samples the ON-event truncation time varies inversely with the OFF-event truncation time, which hints that both the ON- and OFF-event truncation processes are dictated by some common factor. We have employed 2D joint probability distribution analysis to probe the correlation between the event durations and found that residual memory exists in both the ON- and OFF-event durations. Positively correlated successive ON-ON and OFF-OFF event durations and negatively correlated (anti-correlated) ON-OFF event durations perhaps suggest the involvement of more than one type of trapping process within the blinking framework. The timescale corresponding to the additional exponential term has been assigned to hole trapping for ON-event duration statistics. Similarly, for OFF-event duration statistics, this component suggests hole detrapping. We found that the average duration of the exponential process for the ON-event durations is an order of magnitude higher than that of the OFF-event durations. This indicates that the holes are trapped for a significantly long time. When electron trapping is followed by such a hole trapping, long ON-event durations result. We have observed long ON-event durations, as high as 50 s. The competing charge tunnelling model has been used to account for the observed blinking behaviour in these CGASS QDs. Quite interestingly, the PLQY of all of these differently emitting QDs (an ensemble level property) could be correlated with the truncation time (a property at the single particle level). A respective concomitant increase-decrease of ON-OFF event truncation times with increasing PLQY is also indicative of a varying degree of suppression of the Auger recombination processes in these four different CGASS QDs.

  14. On the origin of stretched exponential (Kohlrausch) relaxation kinetics in the room temperature luminescence decay of colloidal quantum dots.

    PubMed

    Bodunov, E N; Antonov, Yu A; Simões Gamboa, A L

    2017-03-21

    The non-exponential room temperature luminescence decay of colloidal quantum dots is often well described by a stretched exponential function. However, the physical meaning of the parameters of the function is not clear in the majority of cases reported in the literature. In this work, the room temperature stretched exponential luminescence decay of colloidal quantum dots is investigated theoretically in an attempt to identify the underlying physical mechanisms associated with the parameters of the function. Three classes of non-radiative transition processes between the excited and ground states of colloidal quantum dots are discussed: long-range resonance energy transfer, multiphonon relaxation, and contact quenching without diffusion. It is shown that multiphonon relaxation cannot explain a stretched exponential functional form of the luminescence decay while such dynamics of relaxation can be understood in terms of long-range resonance energy transfer to acceptors (molecules, quantum dots, or anharmonic molecular vibrations) in the environment of the quantum dots acting as energy-donors or by contact quenching by acceptors (surface traps or molecules) distributed statistically on the surface of the quantum dots. These non-radiative transition processes are assigned to different ranges of the stretching parameter β.
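    A minimal sketch of fitting the stretched exponential (Kohlrausch) decay discussed above to a synthetic luminescence trace; the parameter values are placeholders, not values from the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def kohlrausch(t, i0, tau, beta):
            """Stretched exponential decay I(t) = I0 * exp(-(t/tau)^beta)."""
            return i0 * np.exp(-(t / tau) ** beta)

        rng = np.random.default_rng(8)
        t = np.linspace(0.1, 200.0, 400)                     # ns
        y = kohlrausch(t, 1.0, 30.0, 0.7) + rng.normal(0, 0.01, t.size)

        popt, _ = curve_fit(kohlrausch, t, y, p0=[1.0, 20.0, 0.8])
        print("I0, tau (ns), beta:", np.round(popt, 3))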

  15. Application of a Short Intracellular pH Method to Flow Cytometry for Determining Saccharomyces cerevisiae Vitality ▿

    PubMed Central

    Weigert, Claudia; Steffler, Fabian; Kurz, Tomas; Shellhammer, Thomas H.; Methner, Frank-Jürgen

    2009-01-01

    The measurement of yeast's intracellular pH (ICP) is a proven method for determining yeast vitality. Vitality describes the condition or health of viable cells as opposed to viability, which defines living versus dead cells. In contrast to fluorescence photometric measurements, which show only average ICP values of a population, flow cytometry allows the presentation of an ICP distribution. By examining six repeated propagations with three separate growth phases (lag, exponential, and stationary), the ICP method previously established for photometry was transferred successfully to flow cytometry by using the pH-dependent fluorescent probe 5,6-carboxyfluorescein. The correlation between the two methods was good (r2 = 0.898, n = 18). With both methods it is possible to track the course of growth phases. Although photometry did not yield significant differences between the exponential and stationary phases (P = 0.433), ICP via flow cytometry did (P = 0.012). Yeast in an exponential phase has a unimodal ICP distribution, reflective of a homogeneous population; however, yeast in a stationary phase displays a broader ICP distribution, and subpopulations could be defined by using the flow cytometry method. In conclusion, flow cytometry yielded specific evidence of the heterogeneity in vitality of a yeast population as measured via ICP. In contrast to photometry, flow cytometry increases information about the yeast population's vitality via a short measurement, which is suitable for routine analysis. PMID:19581482

  16. Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time

    NASA Astrophysics Data System (ADS)

    Himeoka, Yusuke; Kaneko, Kunihiko

    2017-04-01

    The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws for describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, for which quantitative laws or theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase that consist of autocatalytic chemical components, including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation follows the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of time of cell starvation is memorized in the slow accumulation of molecules. Moreover, the distribution of lag times among cells is skewed with a long tail. If the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.

  17. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    NASA Astrophysics Data System (ADS)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.

  18. Accumulated distribution of material gain at dislocation crystal growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakin, V. I., E-mail: rakin@geo.komisc.ru

    2016-05-15

    A model for slowing down the tangential growth rate of an elementary step at dislocation crystal growth is proposed based on the exponential law of impurity particle distribution over adsorption energy. It is established that the statistical distribution of material gain on structurally equivalent faces obeys the Erlang law. The Erlang distribution is proposed to be used to calculate the occurrence rates of morphological combinatorial types of polyhedra, presenting real simple crystallographic forms.

  19. Evolution and History in a new "Mathematical SETI" model

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2014-01-01

    In a recent paper (Maccone, 2011 [15]) and in a recent book (Maccone, 2012 [17]), this author proposed a new mathematical model capable of merging SETI and Darwinian Evolution into a single mathematical scheme. This model is based on exponentials and lognormal probability distributions, called "b-lognormals" if they start at any positive time b ("birth") larger than zero. Indeed: Darwinian evolution theory may be regarded as a part of SETI theory in that the factor fl in the Drake equation represents the fraction of planets suitable for life on which life actually arose, as it happened on Earth. In 2008 (Maccone, 2008 [9]) this author first provided a statistical generalization of the Drake equation where the number N of communicating ET civilizations in the Galaxy was shown to follow the lognormal probability distribution. This fact is a consequence of the Central Limit Theorem (CLT) of Statistics, stating that the product of a number of independent random variables whose probability densities are unknown and independent of each other approaches the lognormal distribution as the number of factors is increased at will, i.e. approaches infinity. Also, in Maccone (2011 [15]), it was shown that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of b-lognormal distributions constrained between the time axis and the exponential growth curve. This was a brand-new result. And one more new and far-reaching idea was to define Darwinian Evolution as a particular realization of a stochastic process called Geometric Brownian Motion (GBM) having the above exponential as its own mean value curve. The b-lognormals may also be interpreted as the lifespan of any living being, be it a cell, an animal, a plant, or a human, or even as the historic lifetime of any civilization. In Maccone (2012 [17, Chapters 6, 7, 8 and 11]), as well as in the present paper, we give important exact equations yielding the b-lognormal when its birth time, senility-time (descending inflexion point) and death time (where the tangent at senility intercepts the time axis) are known. These also are brand-new results. In particular, the σ=1 b-lognormals are shown to be related to the golden ratio, so famous in the arts and in architecture, and these special b-lognormals we call "golden b-lognormals". Applying this new mathematical apparatus to Human History leads to the discovery of the exponential trend of progress between Ancient Greece and the current USA Empire as the envelope of the b-lognormals of all Western Civilizations over a period of 2500 years. We then invoke Shannon's Information Theory. The entropy of the obtained b-lognormals turns out to be the index of "development level" reached by each historic civilization. As a consequence, we get a numerical estimate of the entropy difference (i.e. the difference in the evolution levels) between any two civilizations. In particular, this was the case when Spaniards first met with Aztecs in 1519, and we find the relevant entropy difference between Spaniards and Aztecs to be 3.84 bits/individual over a period of about 50 centuries of technological difference. In a similar calculation, the entropy difference between the first living organism on Earth (RNA?) and Humans turns out to equal 25.57 bits/individual over a period of 3.5 billion years of Darwinian Evolution. Finally, we extrapolate our exponentials into the future, which is of course arbitrary, but is the best Humans can do before they get in touch with any alien civilization. The results are appalling: the entropy difference between Humans and aliens 1 million years more advanced is of the order of 1000 bits/individual, while 10,000 bits/individual would be required for any Civilization wishing to colonize the whole Galaxy (Fermi Paradox). In conclusion, we have derived a mathematical model capable of estimating how much more advanced than humans an alien civilization will be when SETI succeeds.
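
    For reference, the standard lognormal density shifted to a birth time b (the usual form behind the "b-lognormal" terminology; μ and σ are the ordinary lognormal parameters, and the mode follows from the lognormal formula):

```latex
% b-lognormal density starting at birth time b (standard shifted-lognormal form)
f(t;\mu,\sigma,b) \;=\;
  \frac{1}{(t-b)\,\sigma\sqrt{2\pi}}
  \exp\!\left(-\frac{\bigl[\ln(t-b)-\mu\bigr]^{2}}{2\sigma^{2}}\right),
  \qquad t > b,
\qquad
t_{\mathrm{peak}} \;=\; b + e^{\mu-\sigma^{2}} .
```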

  20. Verification of the exponential model of body temperature decrease after death in pigs.

    PubMed

    Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyñski, Janusz; Penkowski, Michal

    2005-09-01

    The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four hour automatic temperature recordings were performed in four body sites starting 1.25 h after pig killing in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 degrees C). The animals had been randomly selected under a regular manufacturing process. The temperature decrease time plots drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum and muscle tissue were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The improvement of the precision of time of death estimation by the reconstruction of an individual curve on the basis of two dead body temperature measurements taken 1 h apart or taken continuously for a longer time (about 4 h), has also been proved incorrect. It was demonstrated that the reported increase of precision of time of death estimation due to use of a multiexponential model, with individual exponential terms to account for the cooling rate of the specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites at times shortly after death. A single-exponential model applied to the eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death. For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.
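
    A minimal sketch of the single-exponential (Newtonian) cooling model discussed above, T(t) = T_amb + (T0 - T_amb) exp(-k t), fitted to a few temperature readings and then inverted to estimate the post-mortem interval. The temperatures, ambient value, and cooling constant below are illustrative assumptions, not data from the study.

```python
# Hypothetical sketch: fit the single-exponential cooling model and invert it
# to estimate time since death. All numbers below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

T_AMB = 21.0   # ambient temperature, deg C (assumed)
T0 = 37.0      # assumed body temperature at death, deg C

def cooling(t, k):
    return T_AMB + (T0 - T_AMB) * np.exp(-k * t)

# Illustrative eyeball-temperature readings (hours after death, deg C)
t_obs = np.array([1.25, 2.0, 3.0, 4.0, 6.0])
T_obs = np.array([33.9, 32.1, 29.9, 28.2, 25.6])

(k,), _ = curve_fit(cooling, t_obs, T_obs, p0=[0.2])

def time_since_death(T_measured):
    # Invert the single-exponential model for one temperature measurement
    return -np.log((T_measured - T_AMB) / (T0 - T_AMB)) / k

print(f"k = {k:.3f} 1/h; estimated interval for 27 deg C: {time_since_death(27.0):.1f} h")
```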

  1. Single Session Web-Based Counselling: A Thematic Analysis of Content from the Perspective of the Client

    ERIC Educational Resources Information Center

    Rodda, S. N.; Lubman, D. I.; Cheetham, A.; Dowling, N. A.; Jackson, A. C.

    2015-01-01

    Despite the exponential growth of non-appointment-based web counselling, there is limited information on what happens in a single session intervention. This exploratory study, involving a thematic analysis of 85 counselling transcripts of people seeking help for problem gambling, aimed to describe the presentation and content of online…

  2. Nocturnal Dynamics of Sleep-Wake Transitions in Patients With Narcolepsy.

    PubMed

    Zhang, Xiaozhe; Kantelhardt, Jan W; Dong, Xiao Song; Krefting, Dagmar; Li, Jing; Yan, Han; Pillmann, Frank; Fietze, Ingo; Penzel, Thomas; Zhao, Long; Han, Fang

    2017-02-01

    We investigate how characteristics of sleep-wake dynamics in humans are modified by narcolepsy, a clinical condition that is supposed to destabilize sleep-wake regulation. Subjects with and without cataplexy are considered separately. Differences in sleep scoring habits as a possible confounder have been examined. Four groups of subjects are considered: narcolepsy patients from China with (n = 88) and without (n = 15) cataplexy, healthy controls from China (n = 110) and from Europe (n = 187, 2 nights each). After sleep-stage scoring and calculation of sleep characteristic parameters, the distributions of wake-episode durations and sleep-episode durations are determined for each group and fitted by power laws (exponent α) and by exponentials (decay time τ). We find that wake duration distributions are consistent with power laws for healthy subjects (China: α = 0.88, Europe: α = 1.02). Wake durations in all groups of narcolepsy patients, however, follow the exponential law (τ = 6.2-8.1 min). All sleep duration distributions are best fitted by exponentials on long time scales (τ = 34-82 min). We conclude that narcolepsy mainly alters the control of wake-episode durations but not sleep-episode durations, irrespective of cataplexy. Observed distributions of shortest wake and sleep durations suggest that differences in scoring habits regarding the scoring of short-term sleep stages may notably influence the fitting parameters but do not affect the main conclusion.
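
    A minimal sketch of the kind of comparison described above: maximum-likelihood fits of an exponential law (decay time τ) and a power law (exponent α) to episode durations above a lower cutoff, compared via AIC. The cutoff and the synthetic data are illustrative assumptions.

```python
# Hypothetical sketch: compare exponential and power-law fits of wake-episode
# durations via maximum likelihood and AIC. d_min and the data are illustrative.
import numpy as np

def fit_and_compare(durations, d_min=0.5):
    d = durations[durations >= d_min]
    n = len(d)
    # Shifted exponential: p(d) = (1/tau) * exp(-(d - d_min)/tau); MLE tau = mean excess
    tau = np.mean(d - d_min)
    ll_exp = -n * np.log(tau) - np.sum(d - d_min) / tau
    # Power law with cutoff: p(d) = ((alpha-1)/d_min) * (d/d_min)**(-alpha); Hill-type MLE
    alpha = 1.0 + n / np.sum(np.log(d / d_min))
    ll_pow = n * np.log((alpha - 1.0) / d_min) - alpha * np.sum(np.log(d / d_min))
    aic_exp, aic_pow = 2 - 2 * ll_exp, 2 - 2 * ll_pow
    return tau, alpha, aic_exp, aic_pow

# Synthetic wake durations (minutes) standing in for scored sleep data
rng = np.random.default_rng(2)
wake = 0.5 + rng.exponential(7.0, 1000)
print(fit_and_compare(wake))
```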

  3. Improved Reweighting of Accelerated Molecular Dynamics Simulations for Free Energy Calculation.

    PubMed

    Miao, Yinglong; Sinko, William; Pierce, Levi; Bucher, Denis; Walker, Ross C; McCammon, J Andrew

    2014-07-08

    Accelerated molecular dynamics (aMD) simulations greatly improve the efficiency of conventional molecular dynamics (cMD) for sampling biomolecular conformations, but they require proper reweighting for free energy calculation. In this work, we systematically compare the accuracy of different reweighting algorithms including the exponential average, Maclaurin series, and cumulant expansion on three model systems: alanine dipeptide, chignolin, and Trp-cage. Exponential average reweighting can recover the original free energy profiles easily only when the distribution of the boost potential is narrow (e.g., the range ≤20 kBT) as found in dihedral-boost aMD simulation of alanine dipeptide. In dual-boost aMD simulations of the studied systems, exponential average generally leads to high energetic fluctuations, largely due to the fact that the Boltzmann reweighting factors are dominated by a very few high boost potential frames. In comparison, reweighting based on Maclaurin series expansion (equivalent to cumulant expansion on the first order) greatly suppresses the energetic noise but often gives incorrect energy minimum positions and significant errors at the energy barriers (∼2-3 kBT). Finally, reweighting using cumulant expansion to the second order is able to recover the most accurate free energy profiles within statistical errors of ∼kBT, particularly when the distribution of the boost potential exhibits low anharmonicity (i.e., near-Gaussian distribution), and should be of wide applicability. A toolkit of Python scripts for aMD reweighting "PyReweighting" is distributed free of charge at http://mccammon.ucsd.edu/computing/amdReweighting/.
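
    A minimal sketch contrasting the two per-bin reweighting factors discussed above: the exponential average of the boost potential ΔV versus its cumulant expansion truncated at second order. This is an illustrative implementation, not the PyReweighting toolkit itself; ΔV is assumed to be supplied in units of kBT and the synthetic data are placeholders.

```python
# Hypothetical sketch: reweight a 1D histogram from an aMD run using either the
# per-bin exponential average of the boost potential dV or its second-order
# cumulant expansion. dV is assumed to be in units of kB*T.
import numpy as np

def reweight_profile(coord, dV, bins=50, method="cumulant2"):
    edges = np.linspace(coord.min(), coord.max(), bins + 1)
    idx = np.clip(np.digitize(coord, edges) - 1, 0, bins - 1)
    weights = np.zeros(bins)
    for j in range(bins):
        dv = dV[idx == j]
        if dv.size == 0:
            continue
        if method == "exp":
            # exponential average <exp(dV)> per bin (noisy for broad dV)
            log_factor = np.log(np.mean(np.exp(dv)))
        else:
            # cumulant expansion to 2nd order: <dV> + var(dV)/2
            log_factor = np.mean(dv) + 0.5 * np.var(dv)
        weights[j] = dv.size * np.exp(log_factor)
    p = weights / weights.sum()
    with np.errstate(divide="ignore"):
        free_energy = -np.log(p)                      # in units of kB*T
    finite = np.isfinite(free_energy)
    free_energy[finite] -= free_energy[finite].min()
    return edges, free_energy

# Illustrative usage with synthetic data
rng = np.random.default_rng(7)
coord = rng.normal(0.0, 1.0, 50000)                   # toy reaction coordinate
dV = rng.gamma(2.0, 1.5, 50000)                       # toy boost potential (kB*T)
edges, fe = reweight_profile(coord, dV, method="cumulant2")
```

    The "exp" branch reproduces the behaviour criticized in the abstract: a few frames with large ΔV dominate the weights, inflating the noise; the second-order cumulant branch trades that noise for a small bias that stays within roughly kBT when ΔV is near-Gaussian.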

  4. Improved Reweighting of Accelerated Molecular Dynamics Simulations for Free Energy Calculation

    PubMed Central

    2015-01-01

    Accelerated molecular dynamics (aMD) simulations greatly improve the efficiency of conventional molecular dynamics (cMD) for sampling biomolecular conformations, but they require proper reweighting for free energy calculation. In this work, we systematically compare the accuracy of different reweighting algorithms including the exponential average, Maclaurin series, and cumulant expansion on three model systems: alanine dipeptide, chignolin, and Trp-cage. Exponential average reweighting can recover the original free energy profiles easily only when the distribution of the boost potential is narrow (e.g., the range ≤20kBT) as found in dihedral-boost aMD simulation of alanine dipeptide. In dual-boost aMD simulations of the studied systems, exponential average generally leads to high energetic fluctuations, largely due to the fact that the Boltzmann reweighting factors are dominated by a very few high boost potential frames. In comparison, reweighting based on Maclaurin series expansion (equivalent to cumulant expansion on the first order) greatly suppresses the energetic noise but often gives incorrect energy minimum positions and significant errors at the energy barriers (∼2–3kBT). Finally, reweighting using cumulant expansion to the second order is able to recover the most accurate free energy profiles within statistical errors of ∼kBT, particularly when the distribution of the boost potential exhibits low anharmonicity (i.e., near-Gaussian distribution), and should be of wide applicability. A toolkit of Python scripts for aMD reweighting “PyReweighting” is distributed free of charge at http://mccammon.ucsd.edu/computing/amdReweighting/. PMID:25061441

  5. Quantifying patterns of research interest evolution

    NASA Astrophysics Data System (ADS)

    Jia, Tao; Wang, Dashun; Szymanski, Boleslaw

    Changing and shifting research interest is an integral part of a scientific career. Despite extensive investigations of various factors that influence a scientist's choice of research topics, quantitative assessments of mechanisms that give rise to macroscopic patterns characterizing research interest evolution of individual scientists remain limited. Here we perform a large-scale analysis of extensive publication records, finding that research interest change follows a reproducible pattern characterized by an exponential distribution. We identify three fundamental features responsible for the observed exponential distribution, which arise from a subtle interplay between exploitation and exploration in research interest evolution. We develop a random walk based model, which adequately reproduces our empirical observations. Our study presents one of the first quantitative analyses of macroscopic patterns governing research interest change, documenting a high degree of regularity underlying scientific research and individual careers.

  6. Exponential synchronization of neural networks with discrete and distributed delays under time-varying sampling.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2012-09-01

    This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristic of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, and thus the master systems synchronize with the slave systems. The desired sampled-data controller can be obtained by solving a set of linear matrix inequalities (LMIs), which depend upon the maximum sampling interval and the decay rate. The obtained conditions not only are less conservative but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.

  7. Generalized optimal design for two-arm, randomized phase II clinical trials with endpoints from the exponential dispersion family.

    PubMed

    Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S

    2016-11-01

    For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample sizes subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family distributions. The proposed optimal design minimizes the total sample sizes needed to provide estimates of population means of both arms and their difference with pre-specified precision. Its applications on data from specific distribution families are discussed under multiple design considerations.

  8. Cell Division and Evolution of Biological Tissues

    NASA Astrophysics Data System (ADS)

    Rivier, Nicolas; Arcenegui-Siemens, Xavier; Schliecker, Gudrun

    A tissue is a geometrical, space-filling, random cellular network; it remains in this steady state while individual cells divide. Cell division (fragmentation) is a local, elementary topological transformation which establishes statistical equilibrium of the structure. Statistical equilibrium is characterized by observable relations (Lewis, Aboav) between cell shapes, sizes and those of their neighbours, obtained through maximum entropy and topological correlation extending to nearest neighbours only, i.e. maximal randomness. For a two-dimensional tissue (epithelium), the distribution of cell shapes and that of mother and daughter cells can be obtained from elementary geometrical and physical arguments, except for an exponential factor favouring division of larger cells, and exponential and combinatorial factors encouraging a most symmetric division. The resulting distributions are very narrow, and stationarity severely restricts the range of an adjustable structural parameter

  9. Exponential Thurston maps and limits of quadratic differentials

    NASA Astrophysics Data System (ADS)

    Hubbard, John; Schleicher, Dierk; Shishikura, Mitsuhiro

    2009-01-01

    We give a topological characterization of postsingularly finite topological exponential maps, i.e., universal covers g: ℂ → ℂ ∖ {0} such that 0 has a finite orbit. Such a map either is Thurston equivalent to a unique holomorphic exponential map λ e^z or it has a topological obstruction called a degenerate Levy cycle. This is the first analog of Thurston's topological characterization theorem of rational maps, as published by Douady and Hubbard, for the case of infinite degree. One main tool is a theorem about the distribution of mass of an integrable quadratic differential with a given number of poles, providing an almost compact space of models for the entire mass of quadratic differentials. This theorem is given for arbitrary Riemann surfaces of finite type in a uniform way.

  10. Application of the sine-Poisson equation in solar magnetostatics

    NASA Technical Reports Server (NTRS)

    Webb, G. M.; Zank, G. P.

    1990-01-01

    Solutions of the sine-Poisson equation are used to construct a class of isothermal magnetostatic atmospheres, with one ignorable coordinate corresponding to a uniform gravitational field in a plane geometry. The distributed current in the model (j) is directed along the x-axis, where x is the horizontal ignorable coordinate; (j) varies as the sine of the magnetostatic potential and falls off exponentially with distance vertical to the base with an e-folding distance equal to the gravitational scale height. Solutions for the magnetostatic potential A corresponding to the one-soliton, two-soliton, and breather solutions of the sine-Gordon equation are studied. Depending on the values of the free parameters in the soliton solutions, horizontally periodic magnetostatic structures are obtained possessing a single X-type neutral point, multiple neutral X-points, or no X-points at all.

  11. Coherent Forward Broadening in Cold Atom Clouds

    NASA Astrophysics Data System (ADS)

    Sutherland, R. T.; Robicheaux, Francis

    2016-05-01

    It is shown that homogeneous line-broadening in a diffuse cold atom cloud is proportional to the resonant optical depth of the cloud. Further, it is demonstrated how the strong directionality of the coherent interactions causes the cloud's spectra to depend strongly on its shape, even when the cloud is held at constant densities. These two numerical observations can be predicted analytically by extending the single photon wavefunction model. Lastly, elongating a cloud along the line of laser propagation causes the excitation probability distribution to deviate from the exponential decay predicted by the Beer-Lambert law to the extent where the atoms in the back of the cloud are more excited than the atoms in the front. These calculations are conducted at low densities relevant to recent experiments. This work was supported by the National Science Foundation under Grant No. 1404419-PHY.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diwaker, E-mail: diwakerphysics@gmail.com; Chakraborty, Aniruddha

    The Smoluchowski equation with a time-dependent sink term is solved exactly. In this method, knowing the probability distribution P(0, s) at the origin allows deriving the probability distribution P(x, s) at all positions. Exact solutions of the Smoluchowski equation are also provided in different cases where the sink term has linear, constant, inverse, and exponential variation in time.

  13. Universal model for collective access patterns in the Internet traffic dynamics: A superstatistical approach

    NASA Astrophysics Data System (ADS)

    Tamazian, A.; Nguyen, V. D.; Markelov, O. A.; Bogachev, M. I.

    2016-07-01

    We suggest a universal phenomenological description for the collective access patterns in the Internet traffic dynamics both at local and wide area network levels that takes into account erratic fluctuations imposed by cooperative user behaviour. Our description is based on the superstatistical approach and leads to the q-exponential inter-session time and session size distributions that are also in perfect agreement with empirical observations. The validity of the proposed description is confirmed explicitly by the analysis of complete 10-day traffic traces from the WIDE backbone link and from the local campus area network downlink from the Internet Service Provider. Remarkably, the same functional forms have been observed in the historic access patterns from single WWW servers. The suggested approach effectively accounts for the complex interplay of both “calm” and “bursty” user access patterns within a single-model setting. It also provides average sojourn time estimates with reasonable accuracy, as indicated by the queuing system performance simulation, this way largely overcoming the failure of Poisson modelling of the Internet traffic dynamics.
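
    As an illustration of the q-exponential form mentioned above, the sketch below fits a Tsallis q-exponential to the empirical complementary CDF of inter-session times. The fitting target, parameter names, and synthetic data are assumptions for demonstration, not the paper's procedure.

```python
# Hypothetical sketch: fit a Tsallis q-exponential, e_q(-t/lam), to the
# empirical complementary CDF of inter-session times. Illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def q_exponential(t, q, lam):
    # e_q(-t/lam) = [1 + (q-1)*t/lam]**(-1/(q-1)); reduces to exp(-t/lam) as q -> 1
    return (1.0 + (q - 1.0) * t / lam) ** (-1.0 / (q - 1.0))

def fit_intersession_times(times):
    t_sorted = np.sort(times)
    ccdf = 1.0 - np.arange(1, len(t_sorted) + 1) / len(t_sorted)
    # drop the last point (ccdf = 0) to keep the fit well behaved
    popt, _ = curve_fit(q_exponential, t_sorted[:-1], ccdf[:-1],
                        p0=[1.2, np.mean(times)],
                        bounds=([1.0001, 1e-6], [3.0, np.inf]))
    return popt  # (q, lam)

# Synthetic heavy-tailed inter-session times (seconds), illustrative only
rng = np.random.default_rng(3)
times = rng.pareto(2.5, 5000) * 10.0 + 0.1
q, lam = fit_intersession_times(times)
print(f"q = {q:.2f}, lambda = {lam:.1f} s")
```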

  14. A heuristic method for consumable resource allocation in multi-class dynamic PERT networks

    NASA Astrophysics Data System (ADS)

    Yaghoubi, Saeed; Noori, Siamak; Mazdeh, Mohammad Mahdavi

    2013-06-01

    This investigation presents a heuristic method for the consumable resource allocation problem in multi-class dynamic Project Evaluation and Review Technique (PERT) networks, where new projects from different classes (types) arrive at the system according to independent Poisson processes with different arrival rates. Each activity of a project is performed at a dedicated service station located at a node of the network, with exponentially distributed service time according to its class. Each project arrives at the first service station and continues its routing according to the precedence network of its class. Such a system can be represented as a queueing network in which the queue discipline is first come, first served. On the basis of the presented method, the multi-class system is decomposed into several single-class dynamic PERT networks, where each class is considered separately as a minisystem. In modeling each single-class dynamic PERT network, we use a Markov process and a multi-objective model investigated by Azaron and Tavakkoli-Moghaddam in 2007. Then, after obtaining the resources allocated to service stations in every minisystem, the final resources allocated to activities are calculated by the proposed method.

  15. Mechanical slowing-down of cytoplasmic diffusion allows in vivo counting of proteins in individual cells

    PubMed Central

    Okumus, Burak; Landgraf, Dirk; Lai, Ghee Chuan; Bakhsi, Somenath; Arias-Castro, Juan Carlos; Yildiz, Sadik; Huh, Dann; Fernandez-Lopez, Raul; Peterson, Celeste N.; Toprak, Erdal; El Karoui, Meriem; Paulsson, Johan

    2016-01-01

    Many key regulatory proteins in bacteria are present in too low numbers to be detected with conventional methods, which poses a particular challenge for single-cell analyses because such proteins can contribute greatly to phenotypic heterogeneity. Here we develop a microfluidics-based platform that enables single-molecule counting of low-abundance proteins by mechanically slowing-down their diffusion within the cytoplasm of live Escherichia coli (E. coli) cells. Our technique also allows for automated microscopy at high throughput with minimal perturbation to native physiology, as well as viable enrichment/retrieval. We illustrate the method by analysing the control of the master regulator of the E. coli stress response, RpoS, by its adapter protein, SprE (RssB). Quantification of SprE numbers shows that though SprE is necessary for RpoS degradation, it is expressed at levels as low as 3–4 molecules per average cell cycle, and fluctuations in SprE are approximately Poisson distributed during exponential phase with no sign of bursting. PMID:27189321

  16. Decision Support System for hydrological extremes

    NASA Astrophysics Data System (ADS)

    Bobée, Bernard; El Adlouni, Salaheddine

    2014-05-01

    The study of the tail behaviour of extreme event distributions is important in several applied statistical fields such as hydrology, finance, and telecommunications. For example, in hydrology it is important to estimate extreme quantiles adequately in order to build and manage safe and effective hydraulic structures (dams, for example). Two main classes of distributions are used in hydrological frequency analysis: the class D of sub-exponential distributions (Gamma (G2), Gumbel, Halphen type A (HA), Halphen type B (HB)…) and the class C of regularly varying distributions (Fréchet, Log-Pearson, Halphen type IB…) with a heavier tail. A Decision Support System (DSS) based on the characterization of the right tail, corresponding to a low probability of exceedance p (high return period T = 1/p, in hydrology), has been developed. The DSS allows discriminating between classes C and D, and in its latest version a new prior step is added in order to test lognormality. Indeed, the right tail of the Lognormal distribution (LN) is between the tails of distributions of classes C and D; studies have indicated difficulty in discriminating between LN and distributions of classes C and D. Other tools are useful to discriminate between distributions of the same class D (HA, HB and G2; see other communication). Some numerical illustrations show that the DSS allows discriminating between lognormal, regularly varying, and sub-exponential distributions, and leads to coherent conclusions. Key words: Regularly varying distributions, subexponential distributions, Decision Support System, Heavy tailed distribution, Extreme value theory

  17. Transition from Exponential to Power Law Income Distributions in a Chaotic Market

    NASA Astrophysics Data System (ADS)

    Pellicer-Lostao, Carmen; Lopez-Ruiz, Ricardo

    The economy demands new models able to understand and predict the evolution of markets. In this respect, Econophysics offers models of markets as complex systems that try to comprehend macroscopic, system-wide states of the economy from the interaction of many agents at the micro-level. One of these models is the gas-like model for trading markets. This tries to predict money distributions in closed economies and, quite simply, obtains the ones observed in real economies. However, it has difficulty explaining the power-law distribution observed for individuals with high incomes. In this work, nonlinear dynamics is introduced in the gas-like model in an effort to overcome these flaws. A particular chaotic dynamics is used to break the pairing symmetry of agents (i, j) ⇔ (j, i). The results demonstrate that a "chaotic gas-like model" can reproduce the exponential and power-law distributions observed in real economies. Moreover, it controls the transition between them. This may give some insight into the micro-level causes that originate unfair distributions of money in a global society. Ultimately, the chaotic model makes obvious the inherent instability of asymmetric scenarios, where sinks of wealth appear and doom the market to extreme inequality.
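
    A minimal sketch of the basic gas-like trading model underlying this work: two randomly chosen agents pool their money and split it by a random fraction, which with symmetric pairing relaxes to an exponential (Boltzmann-Gibbs) money distribution. The chaotic, symmetry-breaking pairing of the paper is not reproduced here; all parameters are illustrative.

```python
# Hypothetical sketch of the basic gas-like trading model. With symmetric
# random pairing the money distribution relaxes to an exponential law; the
# chaotic pairing described in the abstract is not implemented here.
import numpy as np

def gas_like_market(n_agents=1000, n_steps=200000, m0=100.0, seed=4):
    rng = np.random.default_rng(seed)
    money = np.full(n_agents, m0)
    for _ in range(n_steps):
        i, j = rng.integers(0, n_agents, size=2)
        if i == j:
            continue
        eps = rng.random()                      # random split fraction
        total = money[i] + money[j]
        money[i], money[j] = eps * total, (1.0 - eps) * total
    return money

money = gas_like_market()
# The histogram of `money` is approximately exponential with mean m0
print(money.mean(), np.median(money))
```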

  18. Resource acquisition, distribution and end-use efficiencies and the growth of industrial society

    NASA Astrophysics Data System (ADS)

    Jarvis, A. J.; Jarvis, S. J.; Hewitt, C. N.

    2015-10-01

    A key feature of the growth of industrial society is the acquisition of increasing quantities of resources from the environment and their distribution for end-use. With respect to energy, the growth of industrial society appears to have been near-exponential for the last 160 years. We provide evidence that indicates that the global distribution of resources that underpins this growth may be facilitated by the continual development and expansion of near-optimal directed networks (roads, railways, flight paths, pipelines, cables etc.). However, despite this continual striving for optimisation, the distribution efficiencies of these networks must decline over time as they expand due to path lengths becoming longer and more tortuous. Therefore, to maintain long-term exponential growth the physical limits placed on the distribution networks appear to be counteracted by innovations deployed elsewhere in the system, namely at the points of acquisition and end-use of resources. We postulate that the maintenance of the growth of industrial society, as measured by global energy use, at the observed rate of ~ 2.4 % yr-1 stems from an implicit desire to optimise patterns of energy use over human working lifetimes.

  19. A mathematical model for the occurrence of historical events

    NASA Astrophysics Data System (ADS)

    Ohnishi, Teruaki

    2017-12-01

    A mathematical model was proposed for the frequency distribution of historical inter-event time τ. A basic ingredient was constructed by assuming that the significance of a newly occurring historical event depends on the magnitude of the preceding event, that its significance decreases through oblivion during successive events, and that events occur according to an independent Poisson process. The frequency distribution of τ was derived by integrating the basic ingredient with respect to all social fields and to all stakeholders. The functional form of the distribution turns out to be of an exponential type, a power-law type, or an exponential-with-a-tail type, depending on the values of the constants appearing in the ingredient. The validity of this model was studied by applying it to the two cases of Modern China and the Northern Ireland Troubles, where the τ-distribution varies depending on the different countries interacting with China and on the different stages of the history of the Troubles, respectively. This indicates that history is composed of many components with such different types of τ-distribution, a situation similar to that of other general human activities.

  20. Phosphotyrosine-mediated LAT assembly on membranes drives kinetic bifurcation in recruitment dynamics of the Ras activator SOS

    DOE PAGES

    Huang, William Y. C.; Yan, Qingrong; Lin, Wan-Chen; ...

    2016-07-01

    The assembly of cell surface receptors with downstream signaling molecules is a commonly occurring theme in multiple signaling systems. However, little is known about how these assemblies modulate reaction kinetics and the ultimate propagation of signals. Here, we reconstitute phosphotyrosine-mediated assembly of extended linker for the activation of T cells (LAT):growth factor receptor-bound protein 2 (Grb2):Son of Sevenless (SOS) networks, derived from the T-cell receptor signaling system, on supported membranes. Single-molecule dwell time distributions reveal two, well-differentiated kinetic species for both Grb2 and SOS on the LAT assemblies. The majority fraction of membrane-recruited Grb2 and SOS both exhibit fast kinetics and single exponential dwell time distributions, with average dwell times of hundreds of milliseconds. The minor fraction exhibits much slower kinetics, extending the dwell times to tens of seconds. Considering this result in the context of the multistep process by which the Ras GEF (guanine nucleotide exchange factor) activity of SOS is activated indicates that kinetic stabilization from the LAT assembly may be important. This kinetic proofreading effect would additionally serve as a stochastic noise filter by reducing the relative probability of spontaneous SOS activation in the absence of receptor triggering. In conclusion, the generality of receptor-mediated assembly suggests that such effects may play a role in multiple receptor proximal signaling processes.

  1. Phosphotyrosine-mediated LAT assembly on membranes drives kinetic bifurcation in recruitment dynamics of the Ras activator SOS

    PubMed Central

    Huang, William Y. C.; Yan, Qingrong; Lin, Wan-Chen; Chung, Jean K.; Hansen, Scott D.; Christensen, Sune M.; Tu, Hsiung-Lin; Kuriyan, John; Groves, Jay T.

    2016-01-01

    The assembly of cell surface receptors with downstream signaling molecules is a commonly occurring theme in multiple signaling systems. However, little is known about how these assemblies modulate reaction kinetics and the ultimate propagation of signals. Here, we reconstitute phosphotyrosine-mediated assembly of extended linker for the activation of T cells (LAT):growth factor receptor-bound protein 2 (Grb2):Son of Sevenless (SOS) networks, derived from the T-cell receptor signaling system, on supported membranes. Single-molecule dwell time distributions reveal two, well-differentiated kinetic species for both Grb2 and SOS on the LAT assemblies. The majority fraction of membrane-recruited Grb2 and SOS both exhibit fast kinetics and single exponential dwell time distributions, with average dwell times of hundreds of milliseconds. The minor fraction exhibits much slower kinetics, extending the dwell times to tens of seconds. Considering this result in the context of the multistep process by which the Ras GEF (guanine nucleotide exchange factor) activity of SOS is activated indicates that kinetic stabilization from the LAT assembly may be important. This kinetic proofreading effect would additionally serve as a stochastic noise filter by reducing the relative probability of spontaneous SOS activation in the absence of receptor triggering. The generality of receptor-mediated assembly suggests that such effects may play a role in multiple receptor proximal signaling processes. PMID:27370798

  2. Phosphotyrosine-mediated LAT assembly on membranes drives kinetic bifurcation in recruitment dynamics of the Ras activator SOS.

    PubMed

    Huang, William Y C; Yan, Qingrong; Lin, Wan-Chen; Chung, Jean K; Hansen, Scott D; Christensen, Sune M; Tu, Hsiung-Lin; Kuriyan, John; Groves, Jay T

    2016-07-19

    The assembly of cell surface receptors with downstream signaling molecules is a commonly occurring theme in multiple signaling systems. However, little is known about how these assemblies modulate reaction kinetics and the ultimate propagation of signals. Here, we reconstitute phosphotyrosine-mediated assembly of extended linker for the activation of T cells (LAT):growth factor receptor-bound protein 2 (Grb2):Son of Sevenless (SOS) networks, derived from the T-cell receptor signaling system, on supported membranes. Single-molecule dwell time distributions reveal two, well-differentiated kinetic species for both Grb2 and SOS on the LAT assemblies. The majority fraction of membrane-recruited Grb2 and SOS both exhibit fast kinetics and single exponential dwell time distributions, with average dwell times of hundreds of milliseconds. The minor fraction exhibits much slower kinetics, extending the dwell times to tens of seconds. Considering this result in the context of the multistep process by which the Ras GEF (guanine nucleotide exchange factor) activity of SOS is activated indicates that kinetic stabilization from the LAT assembly may be important. This kinetic proofreading effect would additionally serve as a stochastic noise filter by reducing the relative probability of spontaneous SOS activation in the absence of receptor triggering. The generality of receptor-mediated assembly suggests that such effects may play a role in multiple receptor proximal signaling processes.

  3. Exact simulation of integrate-and-fire models with exponential currents.

    PubMed

    Brette, Romain

    2007-10-01

    Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
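
    A minimal sketch of the core idea behind such exact event-driven simulation: with an exponential synaptic current and a membrane time constant that is an integer multiple of the synaptic time constant, the threshold-crossing condition becomes a polynomial in z = exp(-t/τm), which a standard root finder solves exactly. The parameter values are illustrative, and this is only the spike-time step, not the paper's full algorithm.

```python
# Hypothetical sketch: next threshold crossing of an integrate-and-fire neuron
# with an exponential synaptic current, found by polynomial root finding via
# the substitution z = exp(-t/tau_m). Illustrative parameters only.
import numpy as np

def next_spike_time(v0, j0, theta=1.0, tau_m=20.0, tau_s=5.0):
    """Return the next threshold-crossing time (ms), or None if no crossing.

    Membrane: dV/dt = -V/tau_m + j0 * exp(-t/tau_s), with tau_m = n * tau_s, n >= 2.
    """
    n = int(round(tau_m / tau_s))
    assert n >= 2 and abs(tau_m - n * tau_s) < 1e-12, "tau_m must be an integer multiple (>= 2) of tau_s"
    a = j0 / (1.0 / tau_m - 1.0 / tau_s)     # amplitude of the exp(-t/tau_s) term
    # V(t) = a*z**n + (v0 - a)*z with z = exp(-t/tau_m); solve V(t) = theta
    coeffs = np.zeros(n + 1)
    coeffs[0] = a                            # z**n term
    coeffs[-2] = v0 - a                      # z**1 term
    coeffs[-1] = -theta                      # constant term
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    valid = real[(real > 0.0) & (real < 1.0)]
    if valid.size == 0:
        return None
    return -tau_m * np.log(valid.max())      # largest z in (0,1) = earliest crossing

print(next_spike_time(v0=0.2, j0=0.5))
```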

  4. Rock size-frequency distributions on Mars and implications for Mars Exploration Rover landing safety and operations

    NASA Astrophysics Data System (ADS)

    Golombek, M. P.; Haldemann, A. F. C.; Forsberg-Taylor, N. K.; DiMaggio, E. N.; Schroeder, R. D.; Jakosky, B. M.; Mellon, M. T.; Matijevic, J. R.

    2003-10-01

    The cumulative fractional area covered by rocks versus diameter measured at the Pathfinder site was predicted by a rock distribution model that follows simple exponential functions that approach the total measured rock abundance (19%), with a steep decrease in rocks with increasing diameter. The distribution of rocks >1.5 m diameter visible in rare boulder fields also follows this steep decrease with increasing diameter. The effective thermal inertia of rock populations calculated from a simple empirical model of the effective inertia of rocks versus diameter shows that most natural rock populations have cumulative effective thermal inertias of 1700-2100 J m-2 s-0.5 K-1 and are consistent with the model rock distributions applied to total rock abundance estimates. The Mars Exploration Rover (MER) airbags have been successfully tested against extreme rock distributions with a higher percentage of potentially hazardous triangular buried rocks than observed at the Pathfinder and Viking landing sites. The probability of the lander impacting a >1 m diameter rock in the first 2 bounces is <3% and <5% for the Meridiani and Gusev landing sites, respectively, and is <0.14% and <0.03% for rocks >1.5 m and >2 m diameter, respectively. Finally, the model rock size-frequency distributions indicate that rocks >0.1 m and >0.3 m in diameter, large enough to place contact sensor instruments against and abrade, respectively, should be plentiful within a single sol's drive at the Meridiani and Gusev landing sites.
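
    A minimal sketch of the exponential rock-size model referenced above, with the cumulative fractional area covered by rocks of diameter ≥ D written as F_k(D) = k exp(-q(k) D), where k is the total rock abundance. The q(k) coefficients below are illustrative placeholders, not values taken from this paper.

```python
# Hypothetical sketch of the exponential rock size-frequency model,
#   F_k(D) = k * exp(-q(k) * D),
# where k is the total rock abundance and D is in metres. The coefficients in
# q(k) are placeholders for illustration only.
import numpy as np

def q_of_k(k, a=1.79, b=0.152):
    # Assumed form of the diameter decay coefficient; a and b are placeholders
    return a + b / k

def cumulative_area_fraction(D, k=0.19):
    """Fractional area covered by rocks with diameter >= D."""
    return k * np.exp(-q_of_k(k) * D)

# Example: area fraction covered by potentially hazardous rocks (>= 1 m)
for k in (0.05, 0.10, 0.19):
    print(f"k = {k:.2f}: F(>=1 m) = {cumulative_area_fraction(1.0, k):.4f}")
```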

  5. [Observation of animal behavior by revolving activity cage method: A new automatic measuring and recording system of motor activity of a mouse by means of revolving activity cage is presented (author's transl)].

    PubMed

    Nakamura, K

    1978-09-01

    With this system, several parameters can be recorded continuously over several months without external stimuli. The time per revolution is counted and punched into the paper tape as binary coded numbers, and the number of revolutions and the frequency of "passage" in a given time are printed out on rolled paper by a digital recorder. "Passage" is defined as one revolving trial without a pause longer than a fixed time (criterion time) and is used as a behavioral unit of "stop and go". The raw data on the paper tape are processed and analyzed with a general-purpose computer. It was confirmed that when a mouse became well accustomed to the revolving activity cage, the time per revolution followed an exponential probability distribution, while the length of a passage (i.e. the number of revolutions per revolving trial) followed a geometric probability distribution. The revolving activity of mice treated with a single subcutaneous injection of methamphetamine was examined using these parameters.

  6. An Ensemble System Based on Hybrid EGARCH-ANN with Different Distributional Assumptions to Predict S&P 500 Intraday Volatility

    NASA Astrophysics Data System (ADS)

    Lahmiri, S.; Boukadoum, M.

    2015-10-01

    Accurate forecasting of stock market volatility is an important issue in portfolio risk management. In this paper, an ensemble system for stock market volatility is presented. It is composed of three different models that hybridize the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) process and the artificial neural network trained with the backpropagation algorithm (BPNN) to forecast stock market volatility under normal, Student's t, and generalized error distribution (GED) assumptions separately. The goal is to design an ensemble system where each single hybrid model is capable of capturing normality, excess skewness, or excess kurtosis in the data, so as to achieve complementarity. The performance of each EGARCH-BPNN and the ensemble system is evaluated by the closeness of the volatility forecasts to realized volatility. Based on the mean absolute error and the mean squared error, the experimental results show that the proposed ensemble model, which captures normality, skewness, and kurtosis in the data, is more accurate than the individual EGARCH-BPNN models in forecasting S&P 500 intraday volatility on one- and five-minute time horizons.

  7. All-Optical Photoacoustic Sensors for Steel Rebar Corrosion Monitoring

    PubMed Central

    Du, Cong; Owusu Twumasi, Jones; Tang, Qixiang; Guo, Xu; Zhou, Jingcheng; Yu, Tzuyang; Wang, Xingwei

    2018-01-01

    This article presents an application of an active all-optical photoacoustic sensing system with four elements for steel rebar corrosion monitoring. The sensor utilized a photoacoustic mechanism of gold nanocomposites to generate 8 MHz broadband ultrasound pulses in 0.4 mm compact space. A nanosecond 532 nm pulsed laser and 400 μm multimode fiber were employed to incite an ultrasound reaction. The fiber Bragg gratings were used as distributed ultrasound detectors. Accelerated corrosion testing was applied to four sections of a single steel rebar with four different corrosion degrees. Our results demonstrated that the mass loss of steel rebar displayed an exponential growth with ultrasound frequency shifts. The sensitivity of the sensing system was such that 0.175 MHz central frequency reduction corresponded to 0.02 g mass loss of steel rebar corrosion. It was proved that the all-optical photoacoustic sensing system can actively evaluate the corrosion of steel rebar via ultrasound spectrum. This multipoint all-optical photoacoustic method is promising for embedment into a concrete structure for distributed corrosion monitoring. PMID:29702554

  8. Determine Neuronal Tuning Curves by Exploring Optimum Firing Rate Distribution for Information Efficiency

    PubMed Central

    Han, Fang; Wang, Zhijie; Fan, Hong

    2017-01-01

    This paper proposed a new method to determine the neuronal tuning curves for maximum information efficiency by computing the optimum firing rate distribution. Firstly, we proposed a general definition for the information efficiency, which is relevant to mutual information and neuronal energy consumption. The energy consumption is composed of two parts: neuronal basic energy consumption and neuronal spike emission energy consumption. A parameter to model the relative importance of energy consumption is introduced in the definition of the information efficiency. Then, we designed a combination of exponential functions to describe the optimum firing rate distribution based on the analysis of the dependency of the mutual information and the energy consumption on the shape of the functions of the firing rate distributions. Furthermore, we developed a rapid algorithm to search the parameter values of the optimum firing rate distribution function. Finally, we found with the rapid algorithm that a combination of two different exponential functions with two free parameters can describe the optimum firing rate distribution accurately. We also found that if the energy consumption is relatively unimportant (important) compared to the mutual information or the neuronal basic energy consumption is relatively large (small), the curve of the optimum firing rate distribution will be relatively flat (steep), and the corresponding optimum tuning curve exhibits a form of sigmoid if the stimuli distribution is normal. PMID:28270760

  9. Single impacts of keV fullerene ions on free standing graphene: Emission of ions and electrons from confined volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verkhoturov, Stanislav V.; Geng, Sheng; Schweikert, Emile A., E-mail: schweikert@chem.tamu.edu

    We present the first data from individual C60 impacting one to four layer graphene at 25 and 50 keV. Negative secondary ions and electrons emitted in transmission were recorded separately from each impact. The yields for Cn− clusters are above 10% for n ≤ 4; they oscillate with electron affinities and decrease exponentially with n. The result can be explained with the aid of MD simulation as a post-collision process where sufficient vibrational energy is accumulated around the rim of the impact hole for sputtering of carbon clusters. The ionization probability can be estimated by comparing experimental yields of Cn− with those of Cn0 from MD simulation, where it increases exponentially with n. The ionization probability can be approximated with ejecta from a thermally excited (3700 K) rim damped by cluster fragmentation and electron detachment. The experimental electron probability distributions are Poisson-like. On average, three electrons of thermal energies are emitted per impact. The thermal excitation model invoked for Cn− emission can also explain the emission of electrons. The interaction of C60 with graphene is fundamentally different from impacts on 3D targets. A key characteristic is the high degree of ionization of the ejecta.

  10. Heterogeneous distribution of antigens on human platelets demonstrated by fluorescence flow cytometry.

    PubMed

    Dunstan, R A; Simpson, M B

    1985-12-01

    We have used fluorescence flow cytometry to analyse cell-to-cell variability in the density of platelet ABH, Ii, Lewis, P, P1A1, Baka and HLA class I antigens. Human IgG and IgM antibodies were used in a two-stage assay with goat FITC-conjugated antihuman IgG (H&L) antibody as the label, followed by single cell analysis of 10 000 platelets per sample using a 256-channel fluorescence flow cytometer (Becton-Dickinson FACS Analyser). Computer analysis of fluorescence intensity histograms for mean and peak channel and coefficient of variation shows that the degree of heterogeneity in platelet antigen density varies with each particular blood group. The broad fluorescence distribution curves with oligosaccharide antigens (CVs: A = 53, B = 40, I = 44, Lea = 40, P = 40) indicate that these antigens possess a greater variability in the number of sites per cell compared to the more homogeneous distribution of P1A1, Baka and HLA (CVs: P1A1 = 24, HLA = 30). These findings may partly account for the mechanism by which transfusion of ABO-incompatible platelets results in a biphasic survival curve, with a period of early rapid removal of those platelets with a high density of antigen sites, followed by a relatively normal survival curve for those platelets that possess only a few or no antigen sites. In contrast, P1A1 and HLA sites are less variable in number from one platelet to another in a given donor, and immune-mediated removal would be more likely to approximate a single exponential curve.

  11. Statistical Characteristics of the Gaussian-Noise Spikes Exceeding the Specified Threshold as Applied to Discharges in a Thundercloud

    NASA Astrophysics Data System (ADS)

    Klimenko, V. V.

    2017-12-01

    We obtain expressions for the probabilities of the normal-noise spikes with the Gaussian correlation function and for the probability density of the inter-spike intervals. As distinct from the delta-correlated noise, in which the intervals are distributed by the exponential law, the probability of the subsequent spike depends on the previous spike and the interval-distribution law deviates from the exponential one for a finite noise-correlation time (frequency-bandwidth restriction). This deviation is the most pronounced for a low detection threshold. Similarity of the behaviors of the distributions of the inter-discharge intervals in a thundercloud and the noise spikes for the varying repetition rate of the discharges/spikes, which is determined by the ratio of the detection threshold to the root-mean-square value of noise, is observed. The results of this work can be useful for the quantitative description of the statistical characteristics of the noise spikes and studying the role of fluctuations for the discharge emergence in a thundercloud.
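
    A minimal sketch of the effect described above: band-limited (correlated) Gaussian noise is generated, threshold up-crossings are detected as "spikes", and the inter-spike intervals are compared with the exponential (memoryless) expectation. The Gaussian smoothing kernel and threshold are illustrative stand-ins for the paper's Gaussian correlation function and detection level.

```python
# Hypothetical sketch: correlated Gaussian noise, threshold up-crossings, and
# the departure of inter-spike intervals from the exponential law.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(5)
n = 2_000_000
white = rng.standard_normal(n)
noise = gaussian_filter1d(white, sigma=20)      # correlated (band-limited) noise
noise /= noise.std()                            # unit RMS

threshold = 1.5                                 # in units of the RMS value
above = noise > threshold
upcrossings = np.flatnonzero(~above[:-1] & above[1:])
intervals = np.diff(upcrossings)

mean_iv = intervals.mean()
# For an exponential interval law the coefficient of variation equals 1;
# finite correlation time pushes it away from 1, most visibly at low thresholds.
cv = intervals.std() / mean_iv
print(f"mean interval = {mean_iv:.1f} samples, CV = {cv:.2f}")
```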

  12. Base stock system for patient vs impatient customers with varying demand distribution

    NASA Astrophysics Data System (ADS)

    Fathima, Dowlath; Uduman, P. Sheik

    2013-09-01

    An optimal base-stock inventory policy for patient and impatient customers using finite-horizon models is examined. The base-stock system for patient and impatient customers is a distinct type of inventory policy. In Model I, the base stock for the patient-customer case is evaluated using the truncated exponential distribution. Model II studies base-stock inventory policies for impatient customers. A study of these systems reveals that customers either wait until the arrival of the next order or leave the system, which leads to lost sales. In both models, the demand during the period [0, t] is taken to be a random variable. In this paper, the truncated exponential distribution satisfies the base-stock policy for the patient customer as a continuous model. Previously, the base stock for impatient customers led to a discrete case; here we model this condition as a continuous case. We justify this approach both mathematically and numerically.
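
    A minimal sketch of one way a base-stock level can be read off a truncated exponential demand distribution: choose the level as the demand quantile meeting a target service probability. The service level, rate, and truncation point are illustrative assumptions, not the paper's parameters or its exact optimization.

```python
# Hypothetical sketch: base-stock level as a quantile of an exponential demand
# distribution truncated to [0, b]. All numbers below are illustrative only.
import math

def truncated_exp_quantile(p, rate, b):
    """Inverse CDF of an exponential(rate) distribution truncated to [0, b]."""
    if not 0.0 <= p < 1.0:
        raise ValueError("p must lie in [0, 1)")
    scale = 1.0 - math.exp(-rate * b)        # normalizing mass on [0, b]
    return -math.log(1.0 - p * scale) / rate

# Base stock S covering demand over the review period with 95% probability
rate, b = 0.1, 60.0          # mean demand 1/rate = 10 units, truncated at 60
S = truncated_exp_quantile(0.95, rate, b)
print(f"base-stock level S = {S:.1f} units")
```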

  13. The topology of large Open Connectome networks for the human brain.

    PubMed

    Gastner, Michael T; Ódor, Géza

    2016-06-07

    The structural human connectome (i.e. the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to nodes and edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff. We also calculate the topological (graph) dimension D and the small-world coefficient σ of these networks. While σ suggests a small-world topology, we found that D < 4 showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.
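
    A minimal sketch of fitting a three-parameter stretched-exponential (generalized Weibull) form to an empirical degree distribution via its complementary CDF. The survival-function parameterization S(k) = exp(-((k - μ)/λ)^β) is one common choice and an assumption here, as are the synthetic degrees; it is not necessarily the paper's exact model-selection procedure.

```python
# Hypothetical sketch: fit a three-parameter generalized Weibull (stretched
# exponential) survival function to an empirical degree distribution.
import numpy as np
from scipy.optimize import curve_fit

def weibull_survival(k, beta, lam, mu):
    return np.exp(-np.clip((k - mu) / lam, 0.0, None) ** beta)

def fit_degree_distribution(degrees):
    k_sorted = np.sort(degrees)
    ccdf = 1.0 - np.arange(1, len(k_sorted) + 1) / len(k_sorted)
    popt, _ = curve_fit(weibull_survival, k_sorted[:-1], ccdf[:-1],
                        p0=[0.7, np.mean(degrees), float(k_sorted.min())],
                        bounds=([0.05, 1e-6, 0.0], [5.0, np.inf, np.inf]))
    return popt  # (beta, lam, mu)

# Synthetic degrees standing in for a connectome degree sequence
rng = np.random.default_rng(6)
degrees = 1 + np.round(rng.weibull(0.6, 20000) * 30).astype(int)
beta, lam, mu = fit_degree_distribution(degrees)
print(f"beta = {beta:.2f}, lambda = {lam:.1f}, mu = {mu:.1f}")
```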

  14. The topology of large Open Connectome networks for the human brain

    NASA Astrophysics Data System (ADS)

    Gastner, Michael T.; Ódor, Géza

    2016-06-01

    The structural human connectome (i.e. the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to nodes and edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff. We also calculate the topological (graph) dimension D and the small-world coefficient σ of these networks. While σ suggests a small-world topology, we found that D < 4 showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.

  15. The Superstatistical Nature and Interoccurrence Time of Atmospheric Mercury Concentration Fluctuations

    NASA Astrophysics Data System (ADS)

    Carbone, F.; Bruno, A. G.; Naccarato, A.; De Simone, F.; Gencarelli, C. N.; Sprovieri, F.; Hedgecock, I. M.; Landis, M. S.; Skov, H.; Pfaffhuber, K. A.; Read, K. A.; Martin, L.; Angot, H.; Dommergue, A.; Magand, O.; Pirrone, N.

    2018-01-01

    The probability density function (PDF) of the time intervals between subsequent extreme events in atmospheric Hg0 concentration data series from different latitudes has been investigated. The Hg0 dynamics exhibit long-term memory in their autocorrelation function. Above a fixed threshold Q in the data, the PDFs of the interoccurrence times of the Hg0 data are well described by a Tsallis q-exponential function. This PDF behavior has been explained in the framework of superstatistics, where the competition between multiple mesoscopic processes affects the macroscopic dynamics. An extensive parameter μ, encompassing all possible fluctuations related to mesoscopic phenomena, has been identified. It follows a χ2 distribution, indicative of the superstatistical nature of the overall process. Shuffling the data series destroys the long-term memory, the distributions become independent of Q, and the PDFs collapse onto the same exponential distribution. The possible central role of atmospheric turbulence in extreme events in the Hg0 data is highlighted.
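
    For reference, a brief sketch of the Tsallis q-exponential waiting-time density mentioned above, with the ordinary exponential recovered in the limit q → 1; the parameter values are arbitrary illustrations, not those estimated in the study.

      import numpy as np

      def q_exp_pdf(tau, q, lam):
          """Tsallis q-exponential waiting-time density, valid for 1 <= q < 2 and tau >= 0."""
          if np.isclose(q, 1.0):
              return lam * np.exp(-lam * tau)          # ordinary exponential limit
          # e_q(-lam*tau) = [1 + (q - 1) * lam * tau]^(-1/(q - 1)); power-law tail for q > 1
          return (2.0 - q) * lam * (1.0 + (q - 1.0) * lam * tau) ** (-1.0 / (q - 1.0))

      tau = np.linspace(0.0, 50.0, 6)
      print(q_exp_pdf(tau, q=1.4, lam=0.5))   # heavy-tailed interoccurrence times
      print(q_exp_pdf(tau, q=1.0, lam=0.5))   # reduces to the ordinary exponential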

  16. Periodicity and global exponential stability of generalized Cohen-Grossberg neural networks with discontinuous activations and mixed delays.

    PubMed

    Wang, Dongshu; Huang, Lihong

    2014-03-01

    In this paper, we investigate the periodic dynamical behaviors of a class of general Cohen-Grossberg neural networks with discontinuous right-hand sides and time-varying and distributed delays. By means of retarded differential inclusion theory and the fixed point theorem for multi-valued maps, the existence of periodic solutions for the neural networks is obtained. We then derive some sufficient conditions for the global exponential stability and convergence of the neural networks, using nonsmooth analysis theory with a generalized Lyapunov approach. Our results remain valid without assuming boundedness (or a growth condition) or monotonicity of the discontinuous neuron activation functions. Moreover, our results extend previous works on neural networks with discrete time-varying and distributed delays, not only those with continuous or even Lipschitz continuous activations but also those with discontinuous activations. We give some numerical examples to show the applicability and effectiveness of our main results. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Probability Distributions for Random Quantum Operations

    NASA Astrophysics Data System (ADS)

    Schultz, Kevin

    Motivated by uncertainty quantification and inference of quantum information systems, in this work we draw connections between the notions of random quantum states and operations in quantum information with probability distributions commonly encountered in the field of orientation statistics. This approach identifies natural sample spaces and probability distributions upon these spaces that can be used in the analysis, simulation, and inference of quantum information systems. The theory of exponential families on Stiefel manifolds provides the appropriate generalization to the classical case. Furthermore, this viewpoint motivates a number of additional questions into the convex geometry of quantum operations relative to both the differential geometry of Stiefel manifolds as well as the information geometry of exponential families defined upon them. In particular, we draw on results from convex geometry to characterize which quantum operations can be represented as the average of a random quantum operation. This project was supported by the Intelligence Advanced Research Projects Activity via Department of Interior National Business Center Contract Number 2012-12050800010.

  18. GISAXS modelling of helium-induced nano-bubble formation in tungsten and comparison with TEM

    NASA Astrophysics Data System (ADS)

    Thompson, Matt; Sakamoto, Ryuichi; Bernard, Elodie; Kirby, Nigel; Kluth, Patrick; Riley, Daniel; Corr, Cormac

    2016-05-01

    Grazing-incidence small angle x-ray scattering (GISAXS) is a powerful non-destructive technique for the measurement of nano-bubble formation in tungsten under helium plasma exposure. Here, we present a comparative study between transmission electron microscopy (TEM) and GISAXS measurements of nano-bubble formation in tungsten exposed to helium plasma in the Large Helical Device (LHD) fusion experiment. Both techniques are in excellent agreement, suggesting that nano-bubbles range from spheroidal to ellipsoidal and display exponential diameter distributions with mean diameters μ=0.68 ± 0.04 nm and μ=0.6 ± 0.1 nm as measured by TEM and GISAXS, respectively. Depth distributions were also computed and were likewise exponential, with mean depths of 8.4 ± 0.5 nm and 9.1 ± 0.4 nm for TEM and GISAXS, respectively. In the GISAXS modelling, spheroidal particles were fitted with an aspect ratio ε=0.7 ± 0.1. The GISAXS model used is described in detail.

  19. Changes in speed distribution: Applying aggregated safety effect models to individual vehicle speeds.

    PubMed

    Vadeby, Anna; Forsman, Åsa

    2017-06-01

    This study investigated the effect of applying two aggregated models (the Power model and the Exponential model) to individual vehicle speeds instead of mean speeds. This is of particular interest when the measure introduced affects different parts of the speed distribution differently. The aim was to examine how the estimated overall risk was affected when assuming the models are valid on an individual vehicle level. Speed data from two applications of speed measurements were used in the study: an evaluation of movable speed cameras and a national evaluation of new speed limits in Sweden. The results showed that when applied on individual vehicle speed level compared with aggregated level, there was essentially no difference between these for the Power model in the case of injury accidents. However, for fatalities the difference was greater, especially for roads with new cameras where those driving fastest reduced their speed the most. For the case with new speed limits, the individual approach estimated a somewhat smaller effect, reflecting that changes in the 15th percentile (P15) were somewhat larger than changes in P85 in this case. For the Exponential model there was also a clear, although small, difference between applying the model to mean speed changes and individual vehicle speed changes when speed cameras were used. This applied both for injury accidents and fatalities. There were also larger effects for the Exponential model than for the Power model, especially for injury accidents. In conclusion, applying the Power or Exponential model to individual vehicle speeds is an alternative that provides reasonable results in relation to the original Power and Exponential models, but more research is needed to clarify the shape of the individual risk curve. It is not surprising that the impact on severe traffic crashes was larger in situations where those driving fastest reduced their speed the most. Further investigations on use of the Power and/or the Exponential model at individual vehicle level would require more data on the individual level from a range of international studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
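
    A hedged sketch of the comparison described above: the Power model scales crash risk by the ratio of speeds raised to an exponent, and the Exponential model by the exponential of the speed change; applying them to the change in mean speed versus averaging over individual vehicle speed changes can give different estimates when the speed distribution changes shape. The exponent and coefficient below are illustrative values only, and the speed samples are synthetic.

      import numpy as np

      rng = np.random.default_rng(1)
      # Synthetic before/after speed distributions (km/h); the "after" case slows
      # the fastest drivers the most, mimicking a speed-camera effect.
      before = rng.normal(90.0, 10.0, size=100000)
      after = np.where(before > 95.0, before - 6.0, before - 1.0)

      P_EXP = 4.0     # illustrative Power-model exponent (severe outcomes)
      BETA = 0.08     # illustrative Exponential-model coefficient per km/h

      # Aggregated: apply the models to the change in mean speed.
      power_agg = (after.mean() / before.mean()) ** P_EXP
      expo_agg = np.exp(BETA * (after.mean() - before.mean()))

      # Individual: average the per-vehicle relative risks.
      power_ind = np.mean((after / before) ** P_EXP)
      expo_ind = np.mean(np.exp(BETA * (after - before)))

      print(f"Power model:       aggregated {power_agg:.3f}  individual {power_ind:.3f}")
      print(f"Exponential model: aggregated {expo_agg:.3f}  individual {expo_ind:.3f}")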

  20. Momentum distributions for the quantum delta-kicked rotor with decoherence

    PubMed

    Vant; Ball; Christensen

    2000-05-01

    We report on the momentum distribution line shapes for the quantum delta-kicked rotor in the presence of environment induced decoherence. Experimental and numerical results are presented. In the experiment ultracold cesium atoms are subjected to a pulsed standing wave of near resonant light. Spontaneous scattering of photons destroys dynamical localization. For the scattering rates used in our experiment the momentum distribution shapes remain essentially exponential.

  1. Reliability Overhaul Model

    DTIC Science & Technology

    1989-08-01

    Random variables for the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -λ ln U. ... Random variables from the conditional Weibull distribution are likewise generated using the inverse transform method. ... generated using a standard normal transformation and the inverse transform method. ... APPENDIX: DISTRIBUTIONS SUPPORTED BY THE MODEL. (1) Generate Y ...
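
    The snippet above refers to inverse-transform sampling of conditional lifetime distributions. A minimal sketch of the standard technique follows; the parameterization (mean for the exponential, shape/scale for the Weibull, current age T) is assumed here and need not match the report's notation.

      import numpy as np

      rng = np.random.default_rng(0)

      def conditional_exponential(T, mean, u):
          # Memoryless: the remaining life has the same exponential distribution.
          return -mean * np.log(u)

      def conditional_weibull(T, shape, scale, u):
          # Invert S(x)/S(T) = u for the total life x, then return the remaining life x - T.
          x = scale * ((T / scale) ** shape - np.log(u)) ** (1.0 / shape)
          return x - T

      u = rng.uniform(size=5)
      print(conditional_exponential(T=100.0, mean=250.0, u=u))
      print(conditional_weibull(T=100.0, shape=2.0, scale=300.0, u=u))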

  2. Topics in the Sequential Design of Experiments

    DTIC Science & Technology

    1992-03-01

    Approved for public release. ... Subject terms: Design of Experiments, Renewal Theory, Sequential Testing, Limit Theory. ... "...distributions for one parameter exponential families," by Michael Woodroofe, Statistica Sinica, 2 (1991), 91-112. [6] "A non linear renewal theory for a functional of ...

  3. Statistical properties of effective drought index (EDI) for Seoul, Busan, Daegu, Mokpo in South Korea

    NASA Astrophysics Data System (ADS)

    Park, Jong-Hyeok; Kim, Ki-Beom; Chang, Heon-Young

    2014-08-01

    Time series of drought indices have so far been considered mostly in terms of the temporal and spatial distributions of a drought index. Here we investigate the statistical properties of the daily Effective Drought Index (EDI) itself for Seoul, Busan, Daegu, and Mokpo over the 100-year period from 1913 to 2012. We have found that in both dry and wet seasons the distribution of EDI follows a Gaussian function. In the dry season the Gaussian is characteristically broader than in the wet season. The total number of drought days during the analyzed period is related both to the mean value and, more importantly, to the standard deviation. We have also found that the distribution of the number of occasions on which the EDI values of several consecutive days all fall below a threshold follows an exponential distribution. The slope of the best fit becomes steeper not only as the critical EDI value becomes more negative but also as the number of consecutive days increases. The slope of the exponential distribution also becomes steeper as the number of cities in which EDI is simultaneously below the critical value increases. Finally, we conclude by pointing out implications of our findings.

  4. Survival distributions impact the power of randomized placebo-phase design and parallel groups randomized clinical trials.

    PubMed

    Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M

    2011-03-01

    The study evaluated the power of the randomized placebo-phase design (RPPD)-a new design of randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, at some point, all subjects receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power were different under different response time to treatment distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution. The RPPD requires more subjects to achieve a similar power to the parallel groups design. Copyright © 2011 Elsevier Inc. All rights reserved.
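
    A minimal sketch (assumptions: scipy/numpy, the medians quoted above, and an arbitrary Weibull shape and lognormal sigma, which the abstract does not specify) of how response times with a fixed median can be drawn from the three distributions used in such a simulation; a power study would then compare the two arms over repeated simulated trials.

      import numpy as np

      rng = np.random.default_rng(0)
      MEDIAN_PLACEBO, MEDIAN_DRUG = 355.0, 42.0   # days, from the abstract

      def sample_by_median(median, dist, n, shape=1.5, sigma=0.8):
          """Draw n response times whose median equals `median`."""
          if dist == "exponential":
              scale = median / np.log(2.0)                   # median = scale * ln 2
              return rng.exponential(scale, n)
          if dist == "weibull":
              scale = median / np.log(2.0) ** (1.0 / shape)  # median = scale * (ln 2)^(1/k)
              return scale * rng.weibull(shape, n)
          if dist == "lognormal":
              return rng.lognormal(np.log(median), sigma, n) # median = exp(mu)
          raise ValueError(dist)

      for arm, med in (("placebo", MEDIAN_PLACEBO), ("drug", MEDIAN_DRUG)):
          for dist in ("exponential", "weibull", "lognormal"):
              x = sample_by_median(med, dist, 100000)
              print(f"{arm:7s} {dist:11s} empirical median ≈ {np.median(x):6.1f} days")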

  5. The Mass Distribution of Stellar-mass Black Holes

    NASA Astrophysics Data System (ADS)

    Farr, Will M.; Sravan, Niharika; Cantrell, Andrew; Kreidberg, Laura; Bailyn, Charles D.; Mandel, Ilya; Kalogera, Vicky

    2011-11-01

    We perform a Bayesian analysis of the mass distribution of stellar-mass black holes using the observed masses of 15 low-mass X-ray binary systems undergoing Roche lobe overflow and 5 high-mass, wind-fed X-ray binary systems. Using Markov Chain Monte Carlo calculations, we model the mass distribution both parametrically—as a power law, exponential, Gaussian, combination of two Gaussians, or log-normal distribution—and non-parametrically—as histograms with varying numbers of bins. We provide confidence bounds on the shape of the mass distribution in the context of each model and compare the models with each other by calculating their relative Bayesian evidence as supported by the measurements, taking into account the number of degrees of freedom of each model. The mass distribution of the low-mass systems is best fit by a power law, while the distribution of the combined sample is best fit by the exponential model. This difference indicates that the low-mass subsample is not consistent with being drawn from the distribution of the combined population. We examine the existence of a "gap" between the most massive neutron stars and the least massive black holes by considering the value, M_1%, of the 1% quantile from each black hole mass distribution as the lower bound of black hole masses. Our analysis generates posterior distributions for M_1%; the best model (the power law) fitted to the low-mass systems has a distribution of lower bounds with M_1% > 4.3 M_sun with 90% confidence, while the best model (the exponential) fitted to all 20 systems has M_1% > 4.5 M_sun with 90% confidence. We conclude that our sample of black hole masses provides strong evidence of a gap between the maximum neutron star mass and the lower bound on black hole masses. Our results on the low-mass sample are in qualitative agreement with those of Ozel et al., although our broad model selection analysis more reliably reveals the best-fit quantitative description of the underlying mass distribution. The results on the combined sample of low- and high-mass systems are in qualitative agreement with Fryer & Kalogera, although the presence of a mass gap remains theoretically unexplained.

  6. Ascending-ramp biphasic waveform has a lower defibrillation threshold and releases less troponin I than a truncated exponential biphasic waveform.

    PubMed

    Huang, Jian; Walcott, Gregory P; Ruse, Richard B; Bohanan, Scott J; Killingsworth, Cheryl R; Ideker, Raymond E

    2012-09-11

    We tested the hypothesis that the shape of the shock waveform affects not only the defibrillation threshold but also the amount of cardiac damage. Defibrillation thresholds were determined for 11 waveforms (3 ascending-ramp waveforms, 3 descending-ramp waveforms, 3 rectilinear first-phase biphasic waveforms, a Gurvich waveform, and a truncated exponential biphasic waveform) in 6 pigs with electrodes in the right ventricular apex and superior vena cava. The ascending, descending, and rectilinear waveforms had 4-, 8-, and 16-millisecond first phases and a 3.5-millisecond rectilinear second phase that was half the voltage of the first phase. The exponential biphasic waveform had a 60% first-phase and a 50% second-phase tilt. In a second study, we attempted to defibrillate after 10 seconds of ventricular fibrillation with a single ≈30-J shock (6 pigs were successfully defibrillated with 8-millisecond ascending, 8-millisecond rectilinear, and truncated exponential biphasic waveforms). Troponin I blood levels were determined before and 2 to 10 hours after the shock. The lowest-energy defibrillation threshold was for the 8-millisecond ascending ramp (14.6±7.3 J [mean±SD]), which was significantly less than for the truncated exponential (19.6±6.3 J). Six hours after shock, troponin I was significantly less for the ascending-ramp waveform (0.80±0.54 ng/mL) than for the truncated exponential (1.92±0.47 ng/mL) or the rectilinear waveform (1.17±0.45 ng/mL). The ascending ramp has a significantly lower defibrillation threshold and at ≈30 J causes 58% less troponin I release than the truncated exponential biphasic shock. Therefore, the shock waveform affects both the defibrillation threshold and the amount of cardiac damage.

  7. The effect of convective boundary condition on MHD mixed convection boundary layer flow over an exponentially stretching vertical sheet

    NASA Astrophysics Data System (ADS)

    Isa, Siti Suzilliana Putri Mohamed; Arifin, Norihan Md.; Nazar, Roslinda; Bachok, Norfifah; Ali, Fadzilah Md

    2017-12-01

    A theoretical study that describes the magnetohydrodynamic mixed convection boundary layer flow with heat transfer over an exponentially stretching sheet with an exponential temperature distribution has been presented herein. This study is conducted in the presence of convective heat exchange at the surface and its surroundings. The system is controlled by viscous dissipation and internal heat generation effects. The governing nonlinear partial differential equations are converted into ordinary differential equations by a similarity transformation. The converted equations are then solved numerically using the shooting method. The results related to skin friction coefficient, local Nusselt number, velocity and temperature profiles are presented for several sets of values of the parameters. The effects of the governing parameters on the features of the flow and heat transfer are examined in detail in this study.

  8. Recurrence time statistics for finite size intervals

    NASA Astrophysics Data System (ADS)

    Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.

    2004-12-01

    We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times, it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to Poissonian statistics when the width of the interval goes to zero. However, we caution that special attention to the size of the interval is required in order to guarantee that the short-time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.

  9. Fast self contained exponential random deviate algorithm

    NASA Astrophysics Data System (ADS)

    Fernández, Julio F.

    1997-03-01

    An algorithm that generates random numbers with an exponential distribution and is about ten times faster than other well-known algorithms has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the setup that is often used in statistical physics in order to obtain the Gibbs distribution. N numbers, stored in N registers, change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.
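
    A rough sketch of the statistical-physics analogy described: registers repeatedly exchange their contents in random pairs while conserving the total, and the register values relax to an exponential (Gibbs-like) distribution. This illustrates the idea only; it is not the published algorithm, and unlike the paper's self-contained version it still consumes uniform deviates for the random splits.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 100000                       # must be even
      registers = np.ones(N)           # total "energy" N, mean 1 per register

      for _ in range(50):              # sweeps of random pairwise exchanges
          perm = rng.permutation(N)
          a, b = perm[::2], perm[1::2]             # disjoint random pairs
          total = registers[a] + registers[b]
          split = rng.uniform(size=N // 2)
          registers[a] = split * total             # redistribute each pair's sum
          registers[b] = (1.0 - split) * total     # uniformly at random

      # The register values relax toward an exponential distribution with mean 1.
      print("mean      %.3f  (expect 1.0)" % registers.mean())
      print("P(x > 1)  %.3f  (expect exp(-1) = 0.368)" % (registers > 1.0).mean())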

  10. Rainbow net analysis of VAXcluster system availability

    NASA Technical Reports Server (NTRS)

    Johnson, Allen M., Jr.; Schoenfelder, Michael A.

    1991-01-01

    A system modeling technique, Rainbow Nets, is used to evaluate the availability and mean-time-to-interrupt of the VAXcluster. These results are compared to the exact analytic results showing that reasonable accuracy is achieved through simulation. The complexity of the Rainbow Net does not increase as the number of processors increases, but remains constant, unlike a Markov model which expands exponentially. The constancy is achieved by using tokens with identity attributes (items) that can have additional attributes associated with them (features) which can exist in multiple states. The time to perform the simulation increases, but this is a polynomial increase rather than exponential. There is no restriction on distributions used for transition firing times, allowing real situations to be modeled more accurately by choosing the distribution which best fits the system performance and eliminating the need for simplifying assumptions.

  11. A closer look at the effect of preliminary goodness-of-fit testing for normality for the one-sample t-test.

    PubMed

    Rochon, Justine; Kieser, Meinhard

    2011-11-01

    Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2) ) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
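
    A compact simulation in the spirit of the study above (assumed here: a Shapiro-Wilk pretest at the 5% level as the normality screen and scipy's t-test; the paper's exact pretest and settings may differ): estimate the conditional Type I error rate of the one-sample t-test for exponential samples that pass the normality screen.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n, alpha, n_sim = 20, 0.05, 20000
      true_mean = 1.0                       # exponential with mean 1

      passed, rejected = 0, 0
      for _ in range(n_sim):
          x = rng.exponential(true_mean, n)
          if stats.shapiro(x).pvalue > alpha:        # sample passes the normality pretest
              passed += 1
              if stats.ttest_1samp(x, true_mean).pvalue < alpha:
                  rejected += 1

      print(f"samples passing pretest:  {passed / n_sim:.3f}")
      print(f"conditional Type I error: {rejected / max(passed, 1):.3f} (nominal {alpha})")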

  12. Edge Extraction by an Exponential Function Considering X-ray Transmission Characteristics

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Youp Synn, Sang; Cho, Sung Man; Jong Joo, Won

    2011-04-01

    3-D radiographic methodology has come into the spotlight for quality inspection of mass products and in-service inspection of aging products. To locate a target object in 3-D space, its characteristic contours such as edge length, edge angle, and vertices are very important. Even for a product with simple geometry, it is very difficult to obtain clear shape contours from a single radiographic image. The image contains scattering noise at the edges and ambiguity coming from X-ray absorption within the body. This article suggests a concise method to extract whole edges from a single X-ray image. At the edge of the object, the X-ray intensity decays exponentially as the beam penetrates the object. Exploiting this decay property, edges are extracted using least-squares fitting controlled by the coefficient of determination.

  13. A Semi-Analytical Extraction Method for Interface and Bulk Density of States in Metal Oxide Thin-Film Transistors

    PubMed Central

    Chen, Weifeng; Wu, Weijing; Zhou, Lei; Xu, Miao; Wang, Lei; Peng, Junbiao

    2018-01-01

    A semi-analytical extraction method of interface and bulk density of states (DOS) is proposed by using the low-frequency capacitance–voltage characteristics and current–voltage characteristics of indium zinc oxide thin-film transistors (IZO TFTs). In this work, an exponential potential distribution along the depth direction of the active layer is assumed and confirmed by numerical solution of Poisson’s equation followed by device simulation. The interface DOS is obtained as a superposition of constant deep states and exponential tail states. Moreover, it is shown that the bulk DOS may be represented by the superposition of exponential deep states and exponential tail states. The extracted values of bulk DOS and interface DOS are further verified by comparing the measured transfer and output characteristics of IZO TFTs with the simulation results by a 2D device simulator ATLAS (Silvaco). As a result, the proposed extraction method may be useful for diagnosing and characterising metal oxide TFTs since it is fast to extract interface and bulk density of states (DOS) simultaneously. PMID:29534492

  14. A UNIVERSAL NEUTRAL GAS PROFILE FOR NEARBY DISK GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bigiel, F.; Blitz, L., E-mail: bigiel@uni-heidelberg.de

    2012-09-10

    Based on sensitive CO measurements from HERACLES and H I data from THINGS, we show that the azimuthally averaged radial distribution of the neutral gas surface density (Σ_HI + Σ_H2) in 33 nearby spiral galaxies exhibits a well-constrained universal exponential distribution beyond 0.2 × r_25 (inside of which the scatter is large) with less than a factor of two scatter out to two optical radii r_25. Scaling the radius to r_25 and the total gas surface density to the surface density at the transition radius, i.e., where Σ_HI and Σ_H2 are equal, as well as removing galaxies that are interacting with their environment, yields a tightly constrained exponential fit with average scale length 0.61 ± 0.06 r_25. In this case, the scatter reduces to less than 40% across the optical disks (and remains below a factor of two at larger radii). We show that the tight exponential distribution of neutral gas implies that the total neutral gas mass of nearby disk galaxies depends primarily on the size of the stellar disk (influenced to some degree by the great variability of Σ_H2 inside 0.2 × r_25). The derived prescription predicts the total gas mass in our sub-sample of 17 non-interacting disk galaxies to within a factor of two. Given the short timescale over which star formation depletes the H_2 content of these galaxies and the large range of r_25 in our sample, there appears to be some mechanism leading to these largely self-similar radial gas distributions in nearby disk galaxies.

  15. The Microwave Properties of Simulated Melting Precipitation Particles: Sensitivity to Initial Melting

    NASA Technical Reports Server (NTRS)

    Johnson, B. T.; Olson, W. S.; Skofronick-Jackson, G.

    2016-01-01

    A simplified approach is presented for assessing the microwave response to the initial melting of realistically shaped ice particles. This paper is divided into two parts: (1) a description of the Single Particle Melting Model (SPMM), a heuristic melting simulation for ice-phase precipitation particles of any shape or size (SPMM is applied to two simulated aggregate snow particles, simulating melting up to 0.15 melt fraction by mass), and (2) the computation of the single-particle microwave scattering and extinction properties of these hydrometeors, using the discrete dipole approximation (via DDSCAT), at the following selected frequencies: 13.4, 35.6, and 94.0 GHz for radar applications and 89.0, 165.0, and 183.31 GHz for radiometer applications. These selected frequencies are consistent with current microwave remote-sensing platforms, such as CloudSat and the Global Precipitation Measurement (GPM) mission. Comparisons with calculations using variable-density spheres indicate significant deviations in scattering and extinction properties throughout the initial range of melting (liquid volume fractions less than 0.15). Integration of the single-particle properties over an exponential particle size distribution provides additional insight into idealized radar reflectivity and passive microwave brightness temperature sensitivity to variations in size/mass, shape, melt fraction, and particle orientation.

  16. Electrostatic screening in classical Coulomb fluids: exponential or power-law decay or both? An investigation into the effect of dispersion interactions

    NASA Astrophysics Data System (ADS)

    Kjellander, Roland

    2006-04-01

    It is shown that the nature of the non-electrostatic part of the pair interaction potential in classical Coulomb fluids can have a profound influence on the screening behaviour. Two cases are compared: (i) when the non-electrostatic part equals an arbitrary finite-ranged interaction and (ii) when a dispersion r^-6 interaction potential is included. A formal analysis is done in exact statistical mechanics, including an investigation of the bridge function. It is found that the Coulombic r^-1 and the dispersion r^-6 potentials are coupled in a very intricate manner as regards the screening behaviour. The classical one-component plasma (OCP) is a particularly clear example due to its simplicity and is investigated in detail. When the dispersion r^-6 potential is turned on, the screened electrostatic potential from a particle goes from a monotonic exponential decay, exp(-κr)/r, to a power-law decay, r^-8, for large r. The pair distribution function acquires, at the same time, an r^-10 decay for large r instead of the exponential one. There still remain exponentially decaying contributions to both functions, but these contributions turn oscillatory when the r^-6 interaction is switched on. When the Coulomb interaction is turned off but the dispersion r^-6 pair potential is kept, the decay of the pair distribution function for large r goes over from the r^-10 to an r^-6 behaviour, which is the normal one for fluids of electroneutral particles with dispersion interactions. Differences and similarities compared to binary electrolytes are pointed out.

  17. Firing patterns in the adaptive exponential integrate-and-fire model.

    PubMed

    Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram

    2008-11-01

    For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to experimental recordings of cortical neurons under step-current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
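
    A minimal sketch of the two-equation adaptive exponential integrate-and-fire (AdEx) model with simple Euler integration; the parameter values are typical published defaults for a tonically spiking regime, assumed here for illustration rather than taken from this paper's fits.

      import numpy as np

      # AdEx parameters (typical published values, assumed for illustration)
      C, g_L, E_L = 281e-12, 30e-9, -70.6e-3          # capacitance, leak conductance, rest
      V_T, Delta_T = -50.4e-3, 2e-3                   # threshold, slope factor
      tau_w, a, b = 144e-3, 4e-9, 0.0805e-9           # adaptation time constant, coupling, spike increment
      V_reset, V_peak = -70.6e-3, 0e-3                # reset voltage and numerical spike cutoff

      dt, T, I = 1e-4, 0.5, 0.8e-9                    # time step (s), duration (s), step current (A)
      V, w = E_L, 0.0
      spike_times = []

      for step in range(int(T / dt)):
          dV = (-g_L * (V - E_L) + g_L * Delta_T * np.exp((V - V_T) / Delta_T) - w + I) / C
          dw = (a * (V - E_L) - w) / tau_w
          V += dt * dV
          w += dt * dw
          if V >= V_peak:                             # spike: reset voltage, increment adaptation
              spike_times.append(step * dt)
              V = V_reset
              w += b

      print(f"{len(spike_times)} spikes in {T * 1000:.0f} ms; first few (ms):",
            [round(t * 1e3, 1) for t in spike_times[:5]])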

  18. Adult Age Differences and the Role of Cognitive Resources in Perceptual–Motor Skill Acquisition: Application of a Multilevel Negative Exponential Model

    PubMed Central

    Kennedy, Kristen M.; Rodrigue, Karen M.; Lindenberger, Ulman; Raz, Naftali

    2010-01-01

    The effects of advanced age and cognitive resources on the course of skill acquisition are unclear, and discrepancies among studies may reflect limitations of data analytic approaches. We applied a multilevel negative exponential model to skill acquisition data from 80 trials (four 20-trial blocks) of a pursuit rotor task administered to healthy adults (19–80 years old). The analyses conducted at the single-trial level indicated that the negative exponential function described performance well. Learning parameters correlated with measures of task-relevant cognitive resources on all blocks except the last and with age on all blocks after the second. Thus, age differences in motor skill acquisition may evolve in 2 phases: In the first, age differences are collinear with individual differences in task-relevant cognitive resources; in the second, age differences orthogonal to these resources emerge. PMID:20047985

  19. One parameter family of master equations for logistic growth and BCM theory

    NASA Astrophysics Data System (ADS)

    De Oliveira, L. R.; Castellani, C.; Turchetti, G.

    2015-02-01

    We propose a one-parameter family of master equations, for the evolution of a population, having the logistic equation as mean field limit. The parameter α determines the relative weight of linear versus nonlinear terms in the population number n ⩽ N entering the loss term. By varying α from 0 to 1 the equilibrium distribution changes from maximum growth to almost extinction. The former is a Gaussian centered at n = N; the latter is a power law peaked at n = 1. A bimodal distribution is observed in the transition region. When N grows and tends to ∞, keeping the value of α fixed, the distribution tends to a Gaussian centered at n = N whose limit is a delta function corresponding to the stable equilibrium of the mean field equation. The choice of the master equation in this family depends on the equilibrium distribution for finite values of N. The presence of an absorbing state at n = 0 does not change this picture, since the mean extinction time grows exponentially fast with N. As a consequence, for α close to zero extinction is not observed, whereas when α approaches 1 relaxation to a power law is observed before extinction occurs. We extend this approach to a well-known model of synaptic plasticity, the so-called BCM theory, in the case of a single neuron with one or two synapses.

  20. A hybrid MD-kMC algorithm for folding proteins in explicit solvent.

    PubMed

    Peter, Emanuel Karl; Shea, Joan-Emma

    2014-04-14

    We present a novel hybrid MD-kMC algorithm that is capable of efficiently folding proteins in explicit solvent. We apply this algorithm to the folding of a small protein, Trp-Cage. Different kMC move sets that capture different possible rate limiting steps are implemented. The first uses secondary structure formation as a relevant rate event (a combination of dihedral rotations and hydrogen-bonding formation and breakage). The second uses tertiary structure formation events through formation of contacts via translational moves. Both methods fold the protein, but via different mechanisms and with different folding kinetics. The first method leads to folding via a structured helical state, with kinetics fit by a single exponential. The second method leads to folding via a collapsed loop, with kinetics poorly fit by single or double exponentials. In both cases, folding times are faster than experimentally reported values. The secondary and tertiary move sets are integrated in a third MD-kMC implementation, which now leads to folding of the protein via both pathways, with single- and double-exponential fits to the rates, and to folding rates in good agreement with experimental values. The competition between secondary and tertiary structure leads to a longer search for the helix-rich intermediate in the case of the first pathway, and to the emergence of a kinetically trapped long-lived molten-globule collapsed state in the case of the second pathway. The algorithm presented not only captures experimentally observed folding intermediates and kinetics, but yields insights into the relative roles of local and global interactions in determining folding mechanisms and rates.

  1. Transition from the Unipolar Region to the Sector Zone: Voyager 2, 2013 and 2014

    NASA Astrophysics Data System (ADS)

    Burlaga, L. F.; Ness, N. F.; Richardson, J. D.

    2017-05-01

    We discuss magnetic field and plasma observations of the heliosheath made by Voyager 2 (V2) during 2013 and 2014 near solar maximum. A transition from a unipolar region to a sector zone was observed in the azimuthal angle λ between ~2012.45 and 2013.82. The distribution of λ was strongly singly peaked at 270° in the unipolar region and double peaked in the sector zone. The δ-distribution was strongly peaked in the unipolar region and very broad in the sector zone. The distribution of daily averages of the magnetic field strength B was Gaussian in the unipolar region and lognormal in the sector zone. The correlation function of B was exponential with an e-folding time of ~5 days in both regions. The distribution of hourly increments of B was a Tsallis distribution with nonextensivity parameter q = 1.7 ± 0.04 in the unipolar region and q = 1.44 ± 0.12 in the sector zone. The CR-B relationship qualitatively describes the 2013 observations, but not the 2014 observations. A 40 km s^-1 increase in the bulk speed associated with an increase in B near 2013.5 might have been produced by the merging of streams. A "D sheet" (a broad depression in B containing a current sheet) moved past V2 from days 320 to 345 of 2013. The R- and N-components of the plasma velocity changed across the current sheet.

  2. Postselection technique for quantum channels with applications to quantum cryptography.

    PubMed

    Christandl, Matthias; König, Robert; Renner, Renato

    2009-01-16

    We propose a general method for studying properties of quantum channels acting on an n-partite system, whose action is invariant under permutations of the subsystems. Our main result is that, in order to prove that a certain property holds for an arbitrary input, it is sufficient to consider the case where the input is a particular de Finetti-type state, i.e., a state which consists of n identical and independent copies of an (unknown) state on a single subsystem. Our technique can be applied to the analysis of information-theoretic problems. For example, in quantum cryptography, we get a simple proof for the fact that security of a discrete-variable quantum key distribution protocol against collective attacks implies security of the protocol against the most general attacks. The resulting security bounds are tighter than previously known bounds obtained with help of the exponential de Finetti theorem.

  3. Stochastic Model of Vesicular Sorting in Cellular Organelles

    NASA Astrophysics Data System (ADS)

    Vagne, Quentin; Sens, Pierre

    2018-02-01

    The proper sorting of membrane components by regulated exchange between cellular organelles is crucial to intracellular organization. This process relies on the budding and fusion of transport vesicles, and should be strongly influenced by stochastic fluctuations, considering the relatively small size of many organelles. We identify the perfect sorting of two membrane components initially mixed in a single compartment as a first passage process, and we show that the mean sorting time exhibits two distinct regimes as a function of the ratio of vesicle fusion to budding rates. Low ratio values lead to fast sorting but result in a broad size distribution of sorted compartments dominated by small entities. High ratio values result in two well-defined sorted compartments but sorting is exponentially slow. Our results suggest an optimal balance between vesicle budding and fusion for the rapid and efficient sorting of membrane components and highlight the importance of stochastic effects for the steady-state organization of intracellular compartments.

  4. Accurate description of charge transport in organic field effect transistors using an experimentally extracted density of states

    NASA Astrophysics Data System (ADS)

    Roelofs, W. S. C.; Mathijssen, S. G. J.; Janssen, R. A. J.; de Leeuw, D. M.; Kemerink, M.

    2012-02-01

    The width and shape of the density of states (DOS) are key parameters to describe the charge transport of organic semiconductors. Here we extract the DOS using scanning Kelvin probe microscopy on a self-assembled monolayer field effect transistor (SAMFET). The semiconductor is only a single monolayer which has allowed extraction of the DOS over a wide energy range, pushing the methodology to its fundamental limit. The measured DOS consists of an exponential distribution of deep states with additional localized states on top. The charge transport has been calculated in a generic variable range-hopping model that allows any DOS as input. We show that with the experimentally extracted DOS an excellent agreement between measured and calculated transfer curves is obtained. This shows that detailed knowledge of the density of states is a prerequisite to consistently describe the transfer characteristics of organic field effect transistors.

  5. Cardiac sodium channel Markov model with temperature dependence and recovery from inactivation.

    PubMed Central

    Irvine, L A; Jafri, M S; Winslow, R L

    1999-01-01

    A Markov model of the cardiac sodium channel is presented. The model is similar to the CA1 hippocampal neuron sodium channel model developed by Kuo and Bean (1994. Neuron. 12:819-829) with the following modifications: 1) an additional open state is added; 2) open-inactivated transitions are made voltage-dependent; and 3) channel rate constants are exponential functions of enthalpy, entropy, and voltage and have explicit temperature dependence. Model parameters are determined using a simulated annealing algorithm to minimize the error between model responses and various experimental data sets. The model reproduces a wide range of experimental data including ionic currents, gating currents, tail currents, steady-state inactivation, recovery from inactivation, and open time distributions over a temperature range of 10 degrees C to 25 degrees C. The model also predicts measures of single channel activity such as first latency, probability of a null sweep, and probability of reopening. PMID:10096885

  6. The ATP hydrolysis and phosphate release steps control the time course of force development in rabbit skeletal muscle.

    PubMed

    Sleep, John; Irving, Malcolm; Burton, Kevin

    2005-03-15

    The time course of isometric force development following photolytic release of ATP in the presence of Ca(2+) was characterized in single skinned fibres from rabbit psoas muscle. Pre-photolysis force was minimized using apyrase to remove contaminating ATP and ADP. After the initial force rise induced by ATP release, a rapid shortening ramp terminated by a step stretch to the original length was imposed, and the time course of the subsequent force redevelopment was again characterized. Force development after ATP release was accurately described by a lag phase followed by one or two exponential components. At 20 degrees C, the lag was 5.6 +/- 0.4 ms (s.e.m., n = 11), and the force rise was well fitted by a single exponential with rate constant 71 +/- 4 s(-1). Force redevelopment after shortening-restretch began from about half the plateau force level, and its single-exponential rate constant was 68 +/- 3 s(-1), very similar to that following ATP release. When fibres were activated by the addition of Ca(2+) in ATP-containing solution, force developed more slowly, and the rate constant for force redevelopment following shortening-restretch reached a maximum value of 38 +/- 4 s(-1) (n = 6) after about 6 s of activation. This lower value may be associated with progressive sarcomere disorder at elevated temperature. Force development following ATP release was much slower at 5 degrees C than at 20 degrees C. The rate constant of a single-exponential fit to the force rise was 4.3 +/- 0.4 s(-1) (n = 22), and this was again similar to that after shortening-restretch in the same activation at this temperature, 3.8 +/- 0.2 s(-1). We conclude that force development after ATP release and shortening-restretch are controlled by the same steps in the actin-myosin ATPase cycle. The present results and much previous work on mechanical-chemical coupling in muscle can be explained by a kinetic scheme in which force is generated by a rapid conformational change bracketed by two biochemical steps with similar rate constants -- ATP hydrolysis and the release of inorganic phosphate -- both of which combine to control the rate of force development.

  7. How extreme was the October 2015 flood in the Carolinas? An assessment of flood frequency analysis and distribution tails

    NASA Astrophysics Data System (ADS)

    Phillips, R. C.; Samadi, S. Z.; Meadows, M. E.

    2018-07-01

    This paper examines the frequency, distribution tails, and peak-over-threshold (POT) of extreme floods through an analysis that centers on the October 2015 flooding in North Carolina (NC) and South Carolina (SC), United States (US). The most striking features of the October 2015 flooding were a short time to peak (Tp) and a multi-hour continuous flood peak, which caused intense and widespread damage to human lives, property, and infrastructure. The 2015 flooding was produced by a sequence of intense rainfall events which originated from category 4 hurricane Joaquin over a period of four days. Here, the probability distribution and distribution parameters (i.e., location, scale, and shape) of floods were investigated by comparing the upper part of the empirical distributions of the annual maximum flood (AMF) and POT series with light- to heavy-tailed theoretical distributions: Fréchet, Pareto, Gumbel, Weibull, Beta, and Exponential. Specifically, four sets of U.S. Geological Survey (USGS) gauging data from the central Carolinas with record lengths of approximately 65-125 years were used. The analysis suggests that heavier-tailed distributions are in better agreement with the POT data, and to some extent the AMF data, than the more commonly used exponential (light-tailed) probability distributions. Further, the threshold selection and record length affect the heaviness of the tail and fluctuations of the parent distributions. The shape parameter and its evolution over the period of record play a critical and poorly understood role in determining the scaling of flood response to intense rainfall.

  8. Reactor Statics Module, RS-9: Multigroup Diffusion Program Using an Exponential Acceleration Technique.

    ERIC Educational Resources Information Center

    Macek, Victor C.

    The nine Reactor Statics Modules are designed to introduce students to the use of numerical methods and digital computers for calculation of neutron flux distributions in space and energy which are needed to calculate criticality, power distribution, and fuel burnup for both slow neutron and fast neutron fission reactors. The last module, RS-9,…

  9. Distinguishing Response Conflict and Task Conflict in the Stroop Task: Evidence from Ex-Gaussian Distribution Analysis

    ERIC Educational Resources Information Center

    Steinhauser, Marco; Hubner, Ronald

    2009-01-01

    It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were…
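
    A brief sketch of the decomposition described above (assumed: synthetic response times and scipy's exponnorm parameterization, where the shape K equals tau/sigma): simulate ex-Gaussian response times as a normal plus an independent exponential component and recover the Gaussian (mu, sigma) and exponential (tau) parameters.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      mu, sigma, tau = 450.0, 60.0, 150.0          # ms; illustrative values only

      # Ex-Gaussian RTs: Gaussian component plus an independent exponential tail.
      rts = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

      # scipy's exponnorm uses shape K = tau / sigma, loc = mu, scale = sigma.
      K, loc, scale = stats.exponnorm.fit(rts)
      print(f"mu ≈ {loc:.1f}  sigma ≈ {scale:.1f}  tau ≈ {K * scale:.1f}")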

  10. K-S Test for Goodness of Fit and Waiting Times for Fatal Plane Accidents

    ERIC Educational Resources Information Center

    Gwanyama, Philip Wagala

    2005-01-01

    The Kolmogorov-Smirnov (K-S) test for goodness of fit was developed by Kolmogorov in 1933 [1] and Smirnov in 1939 [2]. Its procedures are suitable for testing the goodness of fit of a data set for most probability distributions regardless of sample size [3-5]. These procedures, modified for the exponential distribution by Lilliefors [5] and…
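
    A small sketch of the point behind the Lilliefors modification (assumptions: scipy, an exponential rate estimated from the data, and a Monte Carlo calibration of the statistic): when the parameter is estimated from the same sample, the standard K-S p-value is conservative, so the null distribution of the statistic is re-derived by simulation.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      x = rng.exponential(2.0, 50)                     # sample with unknown mean

      # Naive K-S test, plugging in the estimated mean (p-value is conservative).
      d_obs, p_naive = stats.kstest(x, "expon", args=(0, x.mean()))

      # Lilliefors-style correction: re-derive the null distribution of D by simulation.
      d_null = []
      for _ in range(5000):
          sim = rng.exponential(1.0, x.size)
          d_null.append(stats.kstest(sim, "expon", args=(0, sim.mean())).statistic)
      p_lilliefors = np.mean(np.array(d_null) >= d_obs)

      print(f"D = {d_obs:.3f}, naive p = {p_naive:.3f}, Lilliefors-corrected p = {p_lilliefors:.3f}")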

  11. Individual and group dynamics in purchasing activity

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Guo, Jin-Li; Fan, Chao; Liu, Xue-Jiao

    2013-01-01

    As a major part of the daily operation of an enterprise, purchasing frequency is in constant change. Recent approaches to human dynamics can provide new insights into the economic behavior of companies in the supply chain. This paper captures the attributes of the creation times of purchase orders to an individual vendor, as well as to all vendors, and further investigates whether they exhibit some kind of dynamics by applying logarithmic binning to the construction of distribution plots. It is found that the former displays a power-law distribution with approximate exponent 2.0, while the latter is fitted by a mixture distribution with both power-law and exponential characteristics. Thus, the inter-order interval distribution exhibits two distinct characteristics from the perspectives of individual dynamics and group dynamics. This mixing feature can be attributed to fitting deviations: they are negligible for individual dynamics, but the deviations of different vendors accumulate and lead to an exponential factor for group dynamics. To better describe the mechanism generating the heterogeneity of the purchase-order assignment process from the company under study to all its vendors, a model driven by the product life cycle is introduced; the analytical distribution and the simulation result are then obtained, and both are in good agreement with the empirical data.

  12. In situ observations of snow particle size distributions over a cold frontal rainband within an extratropical cyclone

    NASA Astrophysics Data System (ADS)

    Yang, Jiefan; Lei, Hengchi

    2016-02-01

    Cloud microphysical properties of a mixed-phase cloud generated by a typical extratropical cyclone in the Tongliao area, Inner Mongolia, on 3 May 2014 are analyzed primarily using in situ flight observation data. This study focuses mainly on ice crystal concentration, supercooled cloud water content, and vertical distributions of the fit parameters of snow particle size distributions (PSDs). The results showed several differences in the microphysical properties obtained during two penetrations. During the penetration of precipitating cloud, the maximum ice particle concentration, liquid water content, and ice water content were increased by a factor of 2-3 compared with their counterparts obtained during the penetration of non-precipitating cloud. The heavily rimed and irregular ice crystals obtained by the 2D imaging probe, as well as the vertical distributions of fitting parameters within the precipitating cloud, show that the ice particles grow while falling via riming and aggregation, whereas the lightly rimed and pristine ice particles and the fitting parameters within the non-precipitating cloud indicate that sublimation dominates. During the two cloud penetrations, the PSDs were generally better represented by gamma distributions than by the exponential form in terms of the coefficient of determination (R^2).

  13. Visibility graph analysis on quarterly macroeconomic series of China based on complex network theory

    NASA Astrophysics Data System (ADS)

    Wang, Na; Li, Dong; Wang, Qiwen

    2012-12-01

    The visibility graph approach and complex network theory provide new insight into time series analysis. The inheritance of properties of the original time series by the visibility graph is further explored in this paper. We found that the degree distributions of visibility graphs extracted from pseudo Brownian motion series obtained by the frequency-domain algorithm exhibit exponential behavior, in which the exponential exponent is a binomial function of the Hurst index inherited from the time series. Our simulations showed that the quantitative relations between the Hurst indexes and the exponents of the degree distribution function differ for different series, and that the visibility graph inherits some important features of the original time series. Further, we convert several quarterly macroeconomic series, including the growth rates of value-added of the three industry series and the growth rates of the Gross Domestic Product series of China, to graphs by the visibility algorithm and explore the topological properties of the networks associated with the four macroeconomic series, namely the degree distribution and correlations, the clustering coefficient, the average path length, and community structure. Based on complex network analysis, we find that the degree distributions of the networks associated with the growth rates of value-added of the three industry series are almost exponential, and that the degree distributions of the networks associated with the growth rates of the GDP series are scale free. We also discuss the assortativity and disassortativity of the four associated networks as they relate to the evolutionary process of the original macroeconomic series. All the constructed networks have "small-world" features. The community structures of the associated networks suggest dynamic changes in the original macroeconomic series. We also detect relationships among government policy changes, the community structures of the associated networks, and macroeconomic dynamics. We find that government policies in China strongly influence the dynamics of GDP and the adjustment of the three industries. The work in this paper provides a new way to understand the dynamics of economic development.
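
    A minimal sketch, not the authors' code, of the natural visibility algorithm used above: two time points are linked when the straight line between them stays above every intermediate observation, and the degree sequence of the resulting graph can then be binned logarithmically.

      import numpy as np

      def visibility_degrees(y):
          """Degree of each node in the natural visibility graph of series y (O(n^2) pair check)."""
          n = len(y)
          degree = np.zeros(n, dtype=int)
          for a in range(n):
              for b in range(a + 1, n):
                  t = np.arange(a + 1, b)
                  # visibility criterion: y_c < y_b + (y_a - y_b) * (b - c) / (b - a) for all a < c < b
                  if np.all(y[t] < y[b] + (y[a] - y[b]) * (b - t) / (b - a)):
                      degree[a] += 1
                      degree[b] += 1
          return degree

      rng = np.random.default_rng(0)
      series = np.cumsum(rng.standard_normal(500))     # stand-in for a Brownian-motion-like series
      deg = visibility_degrees(series)
      print("mean degree %.2f, max degree %d" % (deg.mean(), deg.max()))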

  14. The Lunar Rock Size Frequency Distribution from Diviner Infrared Measurements

    NASA Astrophysics Data System (ADS)

    Elder, C. M.; Hayne, P. O.; Piqueux, S.; Bandfield, J.; Williams, J. P.; Ghent, R. R.; Paige, D. A.

    2016-12-01

    Knowledge of the rock size frequency distribution on a planetary body is important for understanding its geologic history and for selecting landing sites. The rock size frequency distribution can be estimated by counting rocks in high resolution images, but most bodies in the solar system have limited areas with adequate coverage. We propose an alternative method to derive and map rock size frequency distributions using multispectral thermal infrared data acquired at multiple times during the night. We demonstrate this new technique for the Moon using data from the Lunar Reconnaissance Orbiter (LRO) Diviner radiometer in conjunction with three dimensional thermal modeling, leveraging the differential cooling rates of different rock sizes. We assume an exponential rock size frequency distribution, which has been shown to yield a good fit to rock populations in various locations on the Moon, Mars, and Earth [2, 3] and solve for the best radiance fits as a function of local time and wavelength. This method presents several advantages: 1) unlike other thermally derived rock abundance techniques, it is sensitive to rocks smaller than the diurnal skin depth; 2) it does not result in apparent decrease in rock abundance at night; and 3) it can be validated using images taken at the lunar surface. This method yields both the fraction of the surface covered in rocks of all sizes and the exponential factor, which defines the rate of drop-off in the exponential function at large rock sizes. We will present maps of both these parameters for the Moon, and provide a geological interpretation. In particular, this method reveals rocks in the lunar highlands that are smaller than previous thermal methods could detect. [1] Bandfield J. L. et al. (2011) JGR, 116, E00H02. [2] Golombek and Rapp (1997) JGR, 102, E2, 4117-4129. [3] Cintala, M.J. and K.M. McBride (1995) NASA Technical Memorandum 104804.

  15. Hazard function analysis for flood planning under nonstationarity

    NASA Astrophysics Data System (ADS)

    Read, Laura K.; Vogel, Richard M.

    2016-05-01

    The field of hazard function analysis (HFA) involves a probabilistic assessment of the "time to failure" or "return period," T, of an event of interest. HFA is used in epidemiology, manufacturing, medicine, actuarial statistics, reliability engineering, economics, and elsewhere. For a stationary process, the probability distribution function (pdf) of the return period always follows an exponential distribution; the same is not true for nonstationary processes. When the process of interest, X, exhibits nonstationary behavior, HFA can provide a complementary approach to risk analysis with analytical tools particularly useful for hydrological applications. After a general introduction to HFA, we describe a new mathematical linkage between the magnitude of the flood event, X, and its return period, T, for nonstationary processes. We derive the probabilistic properties of T for a nonstationary one-parameter exponential model of X, and then use both Monte-Carlo simulation and HFA to generalize the behavior of T when X arises from a nonstationary two-parameter lognormal distribution. For this case, our findings suggest that a two-parameter Weibull distribution provides a reasonable approximation for the pdf of T. We document how HFA can provide an alternative approach to characterize the probabilistic properties of both nonstationary flood series and the resulting pdf of T.
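
    A minimal sketch of the contrast described above (the trend form and parameter values are arbitrary illustrations): simulate the waiting time T to the first exceedance of a fixed threshold when the annual exceedance probability is constant (stationary, approximately exponential/geometric T) versus slowly increasing (nonstationary).

      import numpy as np

      rng = np.random.default_rng(0)
      n_sim, horizon = 20000, 2000      # number of simulated records, max years tracked
      p0, trend = 0.01, 0.0002          # initial annual exceedance probability and its growth

      def waiting_times(p_of_t):
          t = np.full(n_sim, horizon)
          alive = np.ones(n_sim, dtype=bool)
          for year in range(1, horizon + 1):
              hit = alive & (rng.uniform(size=n_sim) < p_of_t(year))
              t[hit] = year
              alive &= ~hit
          return t

      T_stat = waiting_times(lambda year: p0)                              # stationary hazard
      T_nonstat = waiting_times(lambda year: min(p0 + trend * year, 1.0))  # increasing hazard

      print(f"stationary:    mean T = {T_stat.mean():6.1f}, std/mean = {T_stat.std() / T_stat.mean():.2f} (about 1 for exponential)")
      print(f"nonstationary: mean T = {T_nonstat.mean():6.1f}, std/mean = {T_nonstat.std() / T_nonstat.mean():.2f}")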

  16. A note on free and forced Rossby wave solutions: The case of a straight coast and a channel

    NASA Astrophysics Data System (ADS)

    Graef, Federico

    2017-03-01

    The free Rossby wave (RW) solutions in an ocean with a straight coast when the offshore wavenumbers of the incident (l1) and reflected (l2) waves are equal or complex are discussed. If l1 = l2 the energy streams along the coast and a uniformly valid solution cannot be found; if l1,2 are complex the solution is the sum of an exponentially decaying and an exponentially growing (away from the coast) Rossby wave. The channel does not admit these solutions as free modes. If the wavenumber vectors of the RWs are perpendicular to the coast, the boundary condition of no normal flow is trivially satisfied and the value of the streamfunction does not need to vanish at the coast. A solution that satisfies Kelvin's theorem of time-independent circulation at the coast is proposed. The forced RW solutions when the ocean's forcing is a single Fourier component are studied. If the forcing is resonant, i.e. a free RW, the linear response will depend critically on whether the wave carries energy perpendicular to the channel or not. In the first case, the amplitude of the response is linear in the direction normal to the channel, y, and in the second it has a parabolic profile in y. Examples of these solutions are shown for channels with parameters resembling the Mozambique Channel, the Tasman Sea, the Denmark Strait and the English Channel. The solutions for the single coast are unbounded, except when the forcing is a RW trapped against the coast. If the forcing is non-resonant, exponentially decaying or trapped RWs could be excited at the coast, and both the exponentially "decaying" and exponentially "growing" RWs could be excited in the channel.

  17. The Universal Statistical Distributions of the Affinity, Equilibrium Constants, Kinetics and Specificity in Biomolecular Recognition

    PubMed Central

    Zheng, Xiliang; Wang, Jin

    2015-01-01

    We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring the different ligands binding with a particular receptor. The results of the analytical studies are confirmed by the microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of the binding follow a log-normal distribution around the mean and a power law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding; its optimization amounts to maximizing the ratio of the free energy gap between the native state and the average of non-native states to the roughness, measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or equivalently a specific ligand binding with different receptors. The elucidation of distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics. PMID:25885453

  18. Simple, accurate formula for the average bit error probability of multiple-input multiple-output free-space optical links over negative exponential turbulence channels.

    PubMed

    Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas

    2012-08-01

    In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.

  19. Is a matrix exponential specification suitable for the modeling of spatial correlation structures?

    PubMed Central

    Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha

    2018-01-01

    This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375

  20. Non-Markovian Infection Spread Dramatically Alters the Susceptible-Infected-Susceptible Epidemic Threshold in Networks

    NASA Astrophysics Data System (ADS)

    Van Mieghem, P.; van de Bovenkamp, R.

    2013-03-01

    Most studies on susceptible-infected-susceptible epidemics in networks implicitly assume Markovian behavior: the time to infect a direct neighbor is exponentially distributed. Much effort so far has been devoted to characterize and precisely compute the epidemic threshold in susceptible-infected-susceptible Markovian epidemics on networks. Here, we report the rather dramatic effect of a nonexponential infection time (while still assuming an exponential curing time) on the epidemic threshold by considering Weibullean infection times with the same mean, but different power exponent α. For three basic classes of graphs, the Erdős-Rényi random graph, scale-free graphs and lattices, the average steady-state fraction of infected nodes is simulated, from which the epidemic threshold is deduced. For all graph classes, the epidemic threshold significantly increases with the power exponent α. Hence, real epidemics that violate the exponential or Markovian assumption can behave seriously differently than anticipated based on Markov theory.
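
    One way to see why the shape of the infection-time distribution matters even at fixed mean is to compute the probability that infection across a single link occurs before the infected node is cured: with exponential curing and Weibull infection times, this per-link transmissibility changes with the exponent α. The sketch below is a single-link Monte-Carlo illustration under assumed unit rates, not the full network simulation of the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
mean_infection_time = 1.0   # held fixed while the Weibull shape alpha varies
curing_rate = 1.0           # exponential curing, as assumed in the study

def per_link_transmissibility(alpha, n=200_000):
    """P(infection across one link happens before the infected node is cured)."""
    scale = mean_infection_time / math.gamma(1.0 + 1.0 / alpha)
    t_infect = scale * rng.weibull(alpha, n)        # Weibull infection times with fixed mean
    t_cure = rng.exponential(1.0 / curing_rate, n)  # exponential curing times
    return np.mean(t_infect < t_cure)

for alpha in (0.5, 1.0, 2.0, 3.0):
    print(f"alpha = {alpha}: per-link transmissibility ~ {per_link_transmissibility(alpha):.3f}")
```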

  1. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.

  2. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.

  3. Probability distribution of financial returns in a model of multiplicative Brownian motion with stochastic diffusion coefficient

    NASA Astrophysics Data System (ADS)

    Silva, Antonio

    2005-03-01

    It is well-known that the mathematical theory of Brownian motion was first developed in the Ph.D. thesis of Louis Bachelier for the French stock market before Einstein [1]. In Ref. [2] we studied the so-called Heston model, where the stock-price dynamics is governed by multiplicative Brownian motion with stochastic diffusion coefficient. We solved the corresponding Fokker-Planck equation exactly and found an analytic formula for the time-dependent probability distribution of stock price changes (returns). The formula interpolates between the exponential (tent-shaped) distribution for short time lags and the Gaussian (parabolic) distribution for long time lags. The theoretical formula agrees very well with the actual stock-market data ranging from the Dow-Jones index [2] to individual companies [3], such as Microsoft, Intel, etc. [1] Louis Bachelier, "Théorie de la spéculation," Annales Scientifiques de l'École Normale Supérieure, III-17:21-86 (1900). [2] A. A. Dragulescu and V. M. Yakovenko, "Probability distribution of returns in the Heston model with stochastic volatility," Quantitative Finance 2, 443-453 (2002); Erratum 3, C15 (2003). [cond-mat/0203046] [3] A. C. Silva, R. E. Prange, and V. M. Yakovenko, "Exponential distribution of financial returns at mesoscopic time lags: a new stylized fact," Physica A 344, 227-235 (2004). [cond-mat/0401225]

  4. The competitiveness versus the wealth of a country.

    PubMed

    Podobnik, Boris; Horvatić, Davor; Kenett, Dror Y; Stanley, H Eugene

    2012-01-01

    Politicians world-wide frequently promise a better life for their citizens. We find that the probability that a country will increase its per capita GDP (gdp) rank within a decade follows an exponential distribution with decay constant λ = 0.12. We use the Corruption Perceptions Index (CPI) and the Global Competitiveness Index (GCI) and find that the distribution of change in CPI (GCI) rank follows exponential functions with approximately the same exponent as λ, suggesting that the dynamics of gdp, CPI, and GCI may share the same origin. Using the GCI, we develop a new measure, which we call relative competitiveness, to evaluate an economy's competitiveness relative to its gdp. For all European and EU countries during the 2008-2011 economic downturn we find that the drop in gdp in more competitive countries relative to gdp was substantially smaller than in relatively less competitive countries, which is valuable information for policymakers.

  5. Study of velocity and temperature distributions in boundary layer flow of fourth grade fluid over an exponential stretching sheet

    NASA Astrophysics Data System (ADS)

    Khan, Najeeb Alam; Saeed, Umair Bin; Sultan, Faqiha; Ullah, Saif; Rehman, Abdul

    2018-02-01

    This study deals with the investigation of boundary layer flow of a fourth grade fluid and heat transfer over an exponential stretching sheet. For analyzing two heating processes, namely, (i) prescribed surface temperature (PST), and (ii) prescribed heat flux (PHF), the temperature distribution in a fluid has been considered. The suitable transformations associated with the velocity components and temperature, have been employed for reducing the nonlinear model equation to a system of ordinary differential equations. The flow and temperature fields are revealed by solving these reduced nonlinear equations through an effective analytical method. The important findings in this analysis are to observe the effects of viscoelastic, cross-viscous, third grade fluid, and fourth grade fluid parameters on the constructed analytical expression for velocity profile. Likewise, the heat transfer properties are studied for Prandtl and Eckert numbers.

  6. The competitiveness versus the wealth of a country

    PubMed Central

    Podobnik, Boris; Horvatić, Davor; Kenett, Dror Y.; Stanley, H. Eugene

    2012-01-01

    Politicians world-wide frequently promise a better life for their citizens. We find that the probability that a country will increase its per capita GDP (gdp) rank within a decade follows an exponential distribution with decay constant λ = 0.12. We use the Corruption Perceptions Index (CPI) and the Global Competitiveness Index (GCI) and find that the distribution of change in CPI (GCI) rank follows exponential functions with approximately the same exponent as λ, suggesting that the dynamics of gdp, CPI, and GCI may share the same origin. Using the GCI, we develop a new measure, which we call relative competitiveness, to evaluate an economy's competitiveness relative to its gdp. For all European and EU countries during the 2008–2011 economic downturn we find that the drop in gdp in more competitive countries relative to gdp was substantially smaller than in relatively less competitive countries, which is valuable information for policymakers. PMID:22997552

  7. Inter-occurrence times and universal laws in finance, earthquakes and genomes

    NASA Astrophysics Data System (ADS)

    Tsallis, Constantino

    2016-07-01

    A plethora of natural, artificial and social systems exist which do not belong to the Boltzmann-Gibbs (BG) statistical-mechanical world, based on the standard additive entropy $S_{BG}$ and its associated exponential BG factor. Frequent behaviors in such complex systems have been shown to be closely related to $q$-statistics instead, based on the nonadditive entropy $S_q$ (with $S_1=S_{BG}$), and its associated $q$-exponential factor which generalizes the usual BG one. In fact, a wide range of phenomena of quite different nature exist which can be described and, in the simplest cases, understood through analytic (and explicit) functions and probability distributions which exhibit some universal features. Universality classes are concomitantly observed which can be characterized through indices such as $q$. We will exhibit here some such cases, namely concerning the distribution of inter-occurrence (or inter-event) times in the areas of finance, earthquakes and genomes.

  8. The competitiveness versus the wealth of a country

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Horvatić, Davor; Kenett, Dror Y.; Stanley, H. Eugene

    2012-09-01

    Politicians world-wide frequently promise a better life for their citizens. We find that the probability that a country will increase its per capita GDP (gdp) rank within a decade follows an exponential distribution with decay constant λ = 0.12. We use the Corruption Perceptions Index (CPI) and the Global Competitiveness Index (GCI) and find that the distribution of change in CPI (GCI) rank follows exponential functions with approximately the same exponent as λ, suggesting that the dynamics of gdp, CPI, and GCI may share the same origin. Using the GCI, we develop a new measure, which we call relative competitiveness, to evaluate an economy's competitiveness relative to its gdp. For all European and EU countries during the 2008-2011 economic downturn we find that the drop in gdp in more competitive countries relative to gdp was substantially smaller than in relatively less competitive countries, which is valuable information for policymakers.

  9. Time scale defined by the fractal structure of the price fluctuations in foreign exchange markets

    NASA Astrophysics Data System (ADS)

    Kumagai, Yoshiaki

    2010-04-01

    In this contribution, a new time scale named C-fluctuation time is defined by price fluctuations observed at a given resolution. The intraday fractal structures and the relations of the three time scales: real time (physical time), tick time and C-fluctuation time, in foreign exchange markets are analyzed. The data set used is trading prices of foreign exchange rates; US dollar (USD)/Japanese yen (JPY), USD/Euro (EUR), and EUR/JPY. The accuracy of the data is one minute and data within a minute are recorded in order of transaction. The series of instantaneous velocities of the flow of C-fluctuation time are exponentially distributed for small C when measured in real time and for tiny C when measured in tick time. When the market is volatile, the series of instantaneous velocities are exponentially distributed even for larger C.

  10. A Grobner Basis Solution for Lightning Ground Flash Fraction Retrieval

    NASA Technical Reports Server (NTRS)

    Solakiewicz, Richard; Attele, Rohan; Koshak, William

    2011-01-01

    A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function was minimized by a numerical method. In order to improve this optimization, we introduce a Grobner basis solution to obtain analytic representations of the model parameters that serve as a refined initialization scheme to the numerical optimization. Using the Grobner basis, we show that there are exactly 2 solutions involving the first 3 moments of the (exponentially distributed) data. When the mean of the ground flash optical characteristic (e.g., such as the Maximum Group Area, MGA) is larger than that for cloud flashes, then a unique solution can be obtained.
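
    The Gröbner-basis closed form itself is not reproduced here, but the moment system it solves is easy to state for an (unconstrained) two-component exponential mixture: the k-th raw moment is f·k!·μ1^k + (1-f)·k!·μ2^k for k = 1, 2, 3. The sketch below solves that system numerically for simulated data; the mixture parameters are invented for illustration, and the paper's constrained model is not implemented.

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(0)

# Simulated optical measurements from a two-component exponential mixture (illustrative)
f_true, mu_g, mu_c = 0.3, 2.0, 6.0     # "ground-flash" fraction and component means
n = 50_000
is_ground = rng.random(n) < f_true
data = np.where(is_ground, rng.exponential(mu_g, n), rng.exponential(mu_c, n))

m1, m2, m3 = (np.mean(data ** k) for k in (1, 2, 3))

def moment_equations(p):
    """First three raw moments of f*Exp(mu1) + (1-f)*Exp(mu2) minus the sample moments."""
    f, mu1, mu2 = p
    return (f * mu1 + (1 - f) * mu2 - m1,
            2 * (f * mu1**2 + (1 - f) * mu2**2) - m2,
            6 * (f * mu1**3 + (1 - f) * mu2**3) - m3)

f_est, mu1_est, mu2_est = fsolve(moment_equations, x0=(0.5, 1.0, 5.0))
print(f"estimated ground-flash fraction ~ {f_est:.3f} (true value {f_true})")
```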

  11. Well hydraulics in pumping tests with exponentially decayed rates of abstraction in confined aquifers

    NASA Astrophysics Data System (ADS)

    Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen

    2017-05-01

    Actual field pumping tests often involve variable pumping rates which cannot be handled by the classical constant-rate or constant-head test models, and often require a convolution process to interpret the test data. In this study, we proposed a semi-analytical model considering an exponentially decreasing pumping rate that starts at a certain (higher) rate and eventually stabilizes at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decayed rate is that the drawdowns will decrease over a certain period of time during the intermediate pumping stage, which has never been seen before in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate function was bounded by two asymptotic curves of the constant-rate tests with rates equal to the starting and stabilizing rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on such characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters by using a genetic algorithm.
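
    A minimal sketch of the convolution idea behind such variable-rate tests: approximate the exponentially decaying rate by many small step changes and superpose Theis solutions, using the well function W(u) = E1(u), for each increment. All aquifer and rate parameters below are assumed for illustration and are not those of the paper; wellbore storage is ignored.

```python
import numpy as np
from scipy.special import exp1   # Theis well function W(u) = E1(u)

# Illustrative aquifer and observation-point parameters (not from the paper)
T_aq, S, r = 500.0, 1e-4, 30.0   # transmissivity (m^2/d), storativity, radius (m)

def pumping_rate(t, Q0=2000.0, Qs=800.0, a=0.5):
    """Exponentially decaying rate: starts at Q0 and stabilizes at Qs (m^3/d)."""
    return Qs + (Q0 - Qs) * np.exp(-a * t)

def drawdown(t, n_steps=400):
    """Variable-rate drawdown approximated by superposing Theis solutions
    for small step changes in the pumping rate."""
    tau = np.linspace(0.0, t, n_steps + 1)
    dQ = np.diff(pumping_rate(tau), prepend=0.0)   # increments; first entry is Q(0)
    s = 0.0
    for ti, dq in zip(tau, dQ):
        if ti < t and dq != 0.0:
            u = r**2 * S / (4.0 * T_aq * (t - ti))
            s += dq / (4.0 * np.pi * T_aq) * exp1(u)
    return s

for t in (0.1, 0.5, 1.0, 5.0, 10.0):   # days
    print(f"t = {t:5.1f} d, drawdown ~ {drawdown(t):.3f} m")
```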

  12. Exploring the origins of topological frustration: Design of a minimally frustrated model of fragment B of protein A

    PubMed Central

    Shea, Joan-Emma; Onuchic, José N.; Brooks, Charles L.

    1999-01-01

    Topological frustration in an energetically unfrustrated off-lattice model of the helical protein fragment B of protein A from Staphylococcus aureus was investigated. This Gō-type model exhibited thermodynamic and kinetic signatures of a well-designed two-state folder with concurrent collapse and folding transitions and single exponential kinetics at the transition temperature. Topological frustration is determined in the absence of energetic frustration by the distribution of Fersht φ values. Topologically unfrustrated systems present a unimodal distribution sharply peaked at intermediate φ, whereas highly frustrated systems display a bimodal distribution peaked at low and high φ values. The distribution of φ values in protein A was determined both thermodynamically and kinetically. Both methods yielded a unimodal distribution centered at φ = 0.3 with tails extending to low and high φ values, indicating the presence of a small amount of topological frustration. The contacts with high φ values were located in the turn regions between helices I and II and between helices II and III, intimating that these hairpins are in large part required in the transition state. Our results are in good agreement with all-atom simulations of protein A, as well as lattice simulations of a three-letter code 27-mer (which can be compared with a 60-residue helical protein). The relatively broad unimodal distribution of φ values obtained from the all-atom simulations and that from the minimalist model for the same native fold suggest that the structure of the transition state ensemble is determined mostly by the protein topology and not energetic frustration. PMID:10535953

  13. Modeling stochastic noise in gene regulatory systems

    PubMed Central

    Meister, Arwen; Du, Chao; Li, Ye Henry; Wong, Wing Hung

    2014-01-01

    The Master equation is considered the gold standard for modeling the stochastic mechanisms of gene regulation in molecular detail, but it is too complex to solve exactly in most cases, so approximation and simulation methods are essential. However, there is still a lack of consensus about the best way to carry these out. To help clarify the situation, we review Master equation models of gene regulation, theoretical approximations based on an expansion method due to N.G. van Kampen and R. Kubo, and simulation algorithms due to D.T. Gillespie and P. Langevin. Expansion of the Master equation shows that for systems with a single stable steady-state, the stochastic model reduces to a deterministic model in a first-order approximation. Additional theory, also due to van Kampen, describes the asymptotic behavior of multistable systems. To support and illustrate the theory and provide further insight into the complex behavior of multistable systems, we perform a detailed simulation study comparing the various approximation and simulation methods applied to synthetic gene regulatory systems with various qualitative characteristics. The simulation studies show that for large stochastic systems with a single steady-state, deterministic models are quite accurate, since the probability distribution of the solution has a single peak tracking the deterministic trajectory whose variance is inversely proportional to the system size. In multistable stochastic systems, large fluctuations can cause individual trajectories to escape from the domain of attraction of one steady-state and be attracted to another, so the system eventually reaches a multimodal probability distribution in which all stable steady-states are represented proportional to their relative stability. However, since the escape time scales exponentially with system size, this process can take a very long time in large systems. PMID:25632368
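
    As a minimal illustration of the Gillespie "direct method" mentioned above, the sketch below simulates constitutive gene expression with constant production and first-order degradation; waiting times between reactions are exponentially distributed with rate equal to the sum of the propensities. The rate constants are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(k=20.0, gamma=1.0, t_end=200.0):
    """Gillespie simulation of constitutive expression: production at rate k,
    first-order degradation at rate gamma*n."""
    t, n = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        propensities = np.array([k, gamma * n])
        total = propensities.sum()
        t += rng.exponential(1.0 / total)                  # exponentially distributed waiting time
        n += 1 if rng.random() < propensities[0] / total else -1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

_, n_traj = gillespie_birth_death()
# Rough steady-state average over the second half of the recorded events
print("mean copy number ~", n_traj[len(n_traj) // 2:].mean(), "(deterministic k/gamma = 20)")
```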

  14. Geomorphic effectiveness of long profile shape and role of inherent geological controls, Ganga River Basin, India

    NASA Astrophysics Data System (ADS)

    Sonam, Sonam; Jain, Vikrant

    2017-04-01

    The river long profile is one of the fundamental geomorphic parameters which provides a platform to study the interaction of geological and geomorphic processes at different time scales. Long profile shape is governed by geological processes at the 10^5-10^6 year time scale, and it controls modern-day (10^0-10^1 year time scale) fluvial processes by controlling the spatial variability of channel slope. Identification of an appropriate model for the river long profile may provide a tool to analyse the quantitative relationship between basin geology, profile shape and its geomorphic effectiveness. A systematic analysis of long profiles has been carried out for the Himalayan tributaries of the Ganga River basin. Long profile shape and stream power distribution pattern are derived using SRTM DEM data (90 m spatial resolution). Peak discharge data from 34 stations are used for hydrological analysis. Lithological variability and major thrusts are marked along the river long profile. The best fit of the long profile is analysed for power, logarithmic and exponential functions. A second order exponential function provides the best representation of long profiles. The second order exponential equation is Z = K1*exp(-β1*L) + K2*exp(-β2*L), where Z is the elevation of the channel long profile, L is the length, and K and β are coefficients of the exponential function. K1 and K2 are the proportions of elevation change of the long profile represented by the β1 (fast) and β2 (slow) decay coefficients of the river long profile. Different values of the coefficients express the variability in long profile shapes and are related to the litho-tectonic variability of the study area. Channel slope along the long profile is estimated by taking the derivative of the exponential function. The stream power distribution pattern along the long profile is estimated by superimposing the discharge and the long profile slope. A sensitivity analysis of the stream power distribution with respect to the decay coefficients of the second order exponential equation is evaluated for a range of coefficient values. Our analysis suggests that the amplitude of the stream power peak is dependent on K1, the proportion of elevation change coming under the fast decay exponent, and that the location of the stream power peak is dependent on the long profile decay coefficient (β1). Different long profile shapes owing to litho-tectonic variability across the Himalayas are responsible for the spatial variability of the stream power distribution pattern. Most of the stream power peaks lie in the Higher Himalaya. In general, eastern rivers have higher stream power in the hinterland area and low stream power in the alluvial plains. This is responsible for 1) higher erosion rates and sediment supply in the hinterland of eastern rivers, 2) the incised and stable nature of channels in the western alluvial plains and 3) aggrading channels with dynamic nature in the eastern alluvial plains. Our study shows that the spatial variability of litho-units defines the coefficients of the long profile function, which in turn controls the position and magnitude of the stream power maxima and hence the geomorphic variability in a fluvial system.
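
    A hedged sketch of the fitting step described above: recover K1, β1, K2 and β2 of the second order exponential profile by least squares, differentiate the fitted profile to obtain channel slope, and combine the slope with a simple downstream discharge model to locate the stream power peak. The elevation data and the discharge relation below are fabricated for illustration and are not the Ganga basin data.

```python
import numpy as np
from scipy.optimize import curve_fit

def long_profile(L, K1, b1, K2, b2):
    """Second order exponential long profile: Z = K1*exp(-b1*L) + K2*exp(-b2*L)."""
    return K1 * np.exp(-b1 * L) + K2 * np.exp(-b2 * L)

# Hypothetical elevations along a river (L in km downstream, Z in m)
L = np.linspace(0.0, 800.0, 60)
Z_obs = long_profile(L, 3500.0, 0.02, 1500.0, 0.002)
Z_obs = Z_obs + np.random.default_rng(0).normal(0.0, 20.0, L.size)

(K1, b1, K2, b2), _ = curve_fit(long_profile, L, Z_obs, p0=(3000, 0.01, 1000, 0.001), maxfev=20000)

# Channel slope is the negative derivative of the fitted profile (m per km -> m per m)
slope = (b1 * K1 * np.exp(-b1 * L) + b2 * K2 * np.exp(-b2 * L)) / 1000.0

# Total stream power Omega = rho*g*Q*S, with an assumed linear downstream discharge growth
rho, g = 1000.0, 9.81
Q = 50.0 + 5.0 * L                       # m^3/s (illustrative)
omega = rho * g * Q * slope
print(f"stream power peaks at L ~ {L[np.argmax(omega)]:.0f} km downstream")
```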

  15. The stationary non-equilibrium plasma of cosmic-ray electrons and positrons

    NASA Astrophysics Data System (ADS)

    Tomaschitz, Roman

    2016-06-01

    The statistical properties of the two-component plasma of cosmic-ray electrons and positrons measured by the AMS-02 experiment on the International Space Station and the HESS array of imaging atmospheric Cherenkov telescopes are analyzed. Stationary non-equilibrium distributions defining the relativistic electron-positron plasma are derived semi-empirically by performing spectral fits to the flux data and reconstructing the spectral number densities of the electronic and positronic components in phase space. These distributions are relativistic power-law densities with exponential cutoff, admitting an extensive entropy variable and converging to the Maxwell-Boltzmann or Fermi-Dirac distributions in the non-relativistic limit. Cosmic-ray electrons and positrons constitute a classical (low-density high-temperature) plasma due to the low fugacity in the quantized partition function. The positron fraction is assembled from the flux densities inferred from least-squares fits to the electron and positron spectra and is subjected to test by comparing with the AMS-02 flux ratio measured in the GeV interval. The calculated positron fraction extends to TeV energies, predicting a broad spectral peak at about 1 TeV followed by exponential decay.

  16. A new probability distribution model of turbulent irradiance based on Born perturbation theory

    NASA Astrophysics Data System (ADS)

    Wang, Hongxing; Liu, Min; Hu, Hao; Wang, Qian; Liu, Xiguo

    2010-10-01

    The subject of the PDF (Probability Density Function) of the irradiance fluctuations in a turbulent atmosphere is still unsettled. Theory reliably describes the behavior in the weak turbulence regime, but theoretical descriptions in the strong and whole turbulence regimes are still controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (Rice-Nakagami, exponential-Bessel and negative-exponential distribution) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which denies the viewpoint that the Rice-Nakagami model is only applicable in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. In addition, a common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero. It is therefore considered that the new model exactly reflects Born perturbation theory. Simulated results prove the accuracy of this new model.

  17. Effects of topologies on signal propagation in feedforward networks

    NASA Astrophysics Data System (ADS)

    Zhao, Jia; Qin, Ying-Mei; Che, Yan-Qiu

    2018-01-01

    We systematically investigate the effects of topologies on signal propagation in feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. FFNs with different topological structures are constructed with the same number of in-degrees and out-degrees in each layer and given the same input signal. The propagation of firing patterns and firing rates are found to be affected by the distribution of neuron connections in the FFNs. Synchronous firing patterns emerge in the later layers of FFNs with identical, uniform, and exponential degree distributions, but the number of synchronous spike trains in the output layers of the three topologies obviously differs from one another. The firing rates in the output layers of the three FFNs can be ordered from high to low according to their topological structures as exponential, uniform, and identical distributions, respectively. Interestingly, the sequence of spiking regularity in the output layers of the three FFNs is consistent with the firing rates, but their firing synchronization is in the opposite order. In summary, the node degree is an important factor that can dramatically influence the neuronal network activity.

  18. Effects of topologies on signal propagation in feedforward networks.

    PubMed

    Zhao, Jia; Qin, Ying-Mei; Che, Yan-Qiu

    2018-01-01

    We systematically investigate the effects of topologies on signal propagation in feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. FFNs with different topological structures are constructed with the same number of in-degrees and out-degrees in each layer and given the same input signal. The propagation of firing patterns and firing rates are found to be affected by the distribution of neuron connections in the FFNs. Synchronous firing patterns emerge in the later layers of FFNs with identical, uniform, and exponential degree distributions, but the number of synchronous spike trains in the output layers of the three topologies obviously differs from one another. The firing rates in the output layers of the three FFNs can be ordered from high to low according to their topological structures as exponential, uniform, and identical distributions, respectively. Interestingly, the sequence of spiking regularity in the output layers of the three FFNs is consistent with the firing rates, but their firing synchronization is in the opposite order. In summary, the node degree is an important factor that can dramatically influence the neuronal network activity.

  19. Quenching of highly vibrationally excited pyrimidine by collisions with CO2

    NASA Astrophysics Data System (ADS)

    Johnson, Jeremy A.; Duffin, Andrew M.; Hom, Brian J.; Jackson, Karl E.; Sevy, Eric T.

    2008-02-01

    Relaxation of highly vibrationally excited pyrimidine (C4N2H4) by collisions with carbon dioxide has been investigated using diode laser transient absorption spectroscopy. Vibrationally hot pyrimidine (E' = 40635 cm^-1) was prepared by 248-nm excimer laser excitation, followed by rapid radiationless relaxation to the ground electronic state. The nascent rotational population distribution (J = 58-80) of the 00^00 ground state of CO2 resulting from collisions with hot pyrimidine was probed at short times following the excimer laser pulse. Doppler spectroscopy was used to measure the CO2 recoil velocity distribution for J = 58-80 of the 00^00 state. Rate constants and probabilities for collisions populating these CO2 rotational states were determined. The measured energy transfer probabilities, indexed by final bath state, were resorted as a function of ΔE to create the energy transfer distribution function, P(E,E'), over E'-E ≈ 1300-7000 cm^-1. P(E,E') is fitted to a single exponential and a biexponential function to determine the average energy transferred in a single collision between pyrimidine and CO2 and parameters that can be compared to previously studied systems using this technique, pyrazine/CO2, C6F6/CO2, and methylpyrazine/CO2. P(E,E') parameters for these four systems are also compared to various molecular properties of the donor molecules. Finally, P(E,E') is analyzed in the context of two models, one which suggests that the shape of P(E,E') is primarily determined by the low-frequency out-of-plane donor vibrational modes and one which suggests that the shape of P(E,E') can be determined by how the donor molecule final density of states changes with ΔE.

  20. A study of personal income distributions in Australia and Italy

    NASA Astrophysics Data System (ADS)

    Banerjee, Anand; Yakovenko, Victor

    2006-03-01

    The study of income distribution has a long history. A century ago, the Italian physicist and economist Pareto proposed that income distribution obeys a universal power law, valid for all time and countries. Subsequent studies proved that only the top 1-3% of the population follow a power law. For the USA, the remaining 97-99% of the population follow the exponential distribution [1]. We present the results of a similar study for Australia and Italy. [1] A. C. Silva and V. M. Yakovenko, Europhys. Lett. 69, 304 (2005).

  1. Electromagnetic wave scattering from rough terrain

    NASA Astrophysics Data System (ADS)

    Papa, R. J.; Lennon, J. F.; Taylor, R. L.

    1980-09-01

    This report presents two aspects of a program designed to calculate electromagnetic scattering from rough terrain: (1) the use of statistical estimation techniques to determine topographic parameters and (2) the results of a single-roughness-scale scattering calculation based on those parameters, including comparison with experimental data. In the statistical part of the present calculation, digitized topographic maps are used to generate data bases for the required scattering cells. The application of estimation theory to the data leads to the specification of statistical parameters for each cell. The estimated parameters are then used in a hypothesis test to decide on a probability density function (PDF) that represents the height distribution in the cell. Initially, the formulation uses a single observation of the multivariate data. A subsequent approach involves multiple observations of the heights on a bivariate basis, and further refinements are being considered. The electromagnetic scattering analysis, the second topic, calculates the amount of specular and diffuse multipath power reaching a monopulse receiver from a pulsed beacon positioned over a rough Earth. The program allows for spatial inhomogeneities and multiple specular reflection points. The analysis of shadowing by the rough surface has been extended to the case where the surface heights are distributed exponentially. The calculated loss of boresight pointing accuracy attributable to diffuse multipath is then compared with the experimental results. The extent of the specular region, the use of localized height variations, and the effect of the azimuthal variation in power pattern are all assessed.

  2. The coherent interlayer resistance of a single, rotated interface between two stacks of AB graphite

    NASA Astrophysics Data System (ADS)

    Habib, K. M. Masum; Sylvia, Somaia S.; Ge, Supeng; Neupane, Mahesh; Lake, Roger K.

    2013-12-01

    The coherent, interlayer resistance of a misoriented, rotated interface between two stacks of AB graphite is determined for a variety of misorientation angles. The quantum-resistance of the ideal AB stack is on the order of 1 to 10 mΩ μm2. For small rotation angles, the coherent interlayer resistance exponentially approaches the ideal quantum resistance at energies away from the charge neutrality point. Over a range of intermediate angles, the resistance increases exponentially with cell size for minimum size unit cells. Larger cell sizes, of similar angles, may not follow this trend. The energy dependence of the interlayer transmission is described.

  3. Development of a methodology to evaluate material accountability in pyroprocess

    NASA Astrophysics Data System (ADS)

    Woo, Seungmin

    This study investigates the effect of the non-uniform nuclide composition in spent fuel on material accountancy in the pyroprocess. High-fidelity depletion simulations are performed using the Monte Carlo code SERPENT in order to determine nuclide composition as a function of axial and radial position within fuel rods and assemblies, and burnup. For improved accuracy, the simulations use short burnup steps (25 days or less), Xe-equilibrium treatment (to avoid oscillations over burnup steps), an axial moderator temperature distribution, and 30 axial meshes. Analytical solutions of the simplified depletion equations are derived to understand the axial non-uniformity of nuclide composition in spent fuel. The cosine shape of the axial neutron flux distribution dominates the axial non-uniformity of the nuclide composition. Combined cross sections and time also generate axial non-uniformity, as the exponential term in the analytical solution consists of the neutron flux, cross section and time. The axial concentration distribution for a nuclide with a small cross section is steeper than that for a nuclide with a large cross section, because the axial flux is weighted by the cross section in the exponential term in the analytical solution. Similarly, the non-uniformity becomes flatter with increasing burnup, because the time term in the exponential increases. Based on the developed numerical recipes, and by decoupling the axial distributions from predetermined representative radial distributions matched by axial height, the axial and radial composition distributions for representative spent nuclear fuel assemblies (the Type-0, -1, and -2 assemblies after 1, 2, and 3 depletion cycles) are obtained. These data are appropriately modified to depict processing of materials in the head-end process of the pyroprocess, i.e., chopping, voloxidation and granulation. The expectation and standard deviation of the Pu-to-244Cm ratio obtained by single-granule sampling are calculated using the central limit theorem and the Geary-Hinkley transformation. Then, uncertainty propagation through the key-pyroprocess is conducted to analyze the Material Unaccounted For (MUF) in the system, a random variable defined as the receipt minus the shipment of a process. The random variable LOPu, defined as the original Pu mass minus the Pu mass after a missing scenario, is used to evaluate the non-detection probability at each Key Measurement Point (KMP). The number of assemblies required for the LOPu to reach 8 kg is considered in this calculation. The probability of detection for the 8 kg LOPu is evaluated with respect to the granule and powder sizes using event tree analysis and hypothesis testing. There are possible cases in which the probability of detection for the 8 kg LOPu is less than 95%. In order to enhance the detection rate, a new Material Balance Area (MBA) model is defined for the key-pyroprocess. The probabilities of detection for all spent fuel types based on the new MBA model are greater than 99%. Furthermore, it is observed that the probability of detection increases significantly when larger granule sample sizes are used to evaluate the Pu-to-244Cm ratio before the key-pyroprocess. Based on these observations, although Pu material accountability in the pyroprocess is affected by the non-uniformity of nuclide composition when the Pu-to-244Cm-ratio method is applied, this can be overcome by decreasing the uncertainty of the measured ratio through larger sample sizes and by modifying the MBAs and KMPs. (Abstract shortened by ProQuest.)

  4. Learning Search Control Knowledge for Deep Space Network Scheduling

    NASA Technical Reports Server (NTRS)

    Gratch, Jonathan; Chien, Steve; DeJong, Gerald

    1993-01-01

    While the general class of most scheduling problems is NP-hard in worst-case complexity, in practice, for specific distributions of problems and constraints, domain-specific solutions have been shown to perform in much better than exponential time.

  5. Inland empire logistics GIS mapping project.

    DOT National Transportation Integrated Search

    2009-01-01

    The Inland Empire has experienced exponential growth in the area of warehousing and distribution facilities within the last decade and it seems that it will continue way into the future. Where are these facilities located? How large are the facilitie...

  6. A new formula for normal tissue complication probability (NTCP) as a function of equivalent uniform dose (EUD).

    PubMed

    Luxton, Gary; Keall, Paul J; King, Christopher R

    2008-01-07

    To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.
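
    The paper's single-parameter exponential approximation itself is not reproduced here; instead, the sketch below evaluates the quantities it is built around: the Niemierko generalized EUD for a dose-volume histogram and the classical Lyman (probit) NTCP evaluated at that EUD, with n, m and TD50 as organ parameters. The DVH and the parameter values are illustrative assumptions, not the Emami-Burman fits.

```python
import numpy as np
from scipy.stats import norm

def gEUD(doses, volumes, n):
    """Niemierko generalized equivalent uniform dose for a differential DVH,
    using the Lyman volume-effect parameter n (a = 1/n)."""
    a = 1.0 / n
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()
    return float(np.sum(v * np.asarray(doses, dtype=float) ** a) ** (1.0 / a))

def ntcp_lkb(eud, TD50, m):
    """Classical Lyman (probit) NTCP evaluated at the EUD."""
    t = (eud - TD50) / (m * TD50)
    return float(norm.cdf(t))

# Hypothetical differential DVH and organ parameters (illustrative only)
doses = [10.0, 25.0, 40.0, 55.0, 65.0]        # Gy
volumes = [0.30, 0.25, 0.20, 0.15, 0.10]      # fractional volumes
eud = gEUD(doses, volumes, n=0.25)
print(f"EUD = {eud:.1f} Gy, NTCP = {ntcp_lkb(eud, TD50=68.0, m=0.14):.3f}")
```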

  7. Shear-induced conformational ordering, relaxation, and crystallization of isotactic polypropylene.

    PubMed

    An, Haining; Li, Xiangyang; Geng, Yong; Wang, Yunlong; Wang, Xiao; Li, Liangbin; Li, Zhongming; Yang, Chuanlu

    2008-10-02

    The shear-induced coil-helix transition of isotactic polypropylene (iPP) has been studied with time-resolved Fourier transform infrared spectroscopy at various temperatures. The effects of temperature, shear rate, and strain on the coil-helix transition were studied systematically. The induced conformational order increases with the shear rate and strain. A threshold of shear strain is required to induce conformational ordering. High temperature reduces the effect of shear on the conformational order, though a simple correlation was not found. Following the shear-induced conformational ordering, relaxation of helices occurs, which follows the first-order exponential decay at temperatures well above the normal melting point of iPP. The relaxation time versus temperature is fitted with an Arrhenius law, which generates an activation energy of 135 kJ/mol for the helix-coil transition of iPP. At temperatures around the normal melting point, two exponential decays are needed to fit well on the relaxation kinetic of helices. This suggests that two different states of helices are induced by shear: (i) isolated single helices far away from each other without interactions, which have a fast relaxation kinetic; (ii) aggregations of helices or helical bundles with strong interactions among each other, which have a much slower relaxation process. The helical bundles are assumed to be the precursors of nuclei for crystallization. The different helix concentrations and distributions are the origin of the three different processes of crystallization after shear. The correlation between the shear-induced conformational order and crystallization is discussed.

  8. Anomalous Diffusion in a Trading Model

    NASA Astrophysics Data System (ADS)

    Khidzir, Sidiq Mohamad; Wan Abdullah, Wan Ahmad Tajuddin

    2009-07-01

    The trading model of Chakrabarti et al. [1] yields a wealth distribution with mixed exponential and power-law character. Motivated by studies of the dynamics behind the flow of money, similar to the work of Brockmann [2, 3], we track the flow of money in this trading model and observe anomalous diffusion in the form of long waiting times and Lévy flights.

  9. Modeling of microporous silicon betaelectric converter with 63Ni plating in GEANT4 toolkit*

    NASA Astrophysics Data System (ADS)

    Zelenkov, P. V.; Sidorov, V. G.; Lelekov, E. T.; Khoroshko, A. Y.; Bogdanov, S. V.; Lelekov, A. T.

    2016-04-01

    A model of the electron-hole pair generation rate distribution in the semiconductor is needed to optimize the parameters of a microporous silicon betaelectric converter, which uses 63Ni isotope radiation. Using Monte-Carlo methods of the GEANT4 software with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with an exponential function. The optimal pore configuration was estimated.

  10. Evaluation of leaf litter leaching kinetics through commonly-used mathematical models

    NASA Astrophysics Data System (ADS)

    Montoya, J. V.; Bastianoni, A.; Mendez, C.; Paolini, J.

    2012-04-01

    Leaching is defined as the abiotic process by which soluble compounds of the litter are released into the water. Most studies dealing with leaf litter breakdown and leaching kinetics apply the single exponential decay model since it corresponds well with the understanding of the biology of decomposition. However, important mass losses occur during leaching, and mathematical models often fail to describe this process adequately. During the initial hours of leaching, leaf litter experiences high decay rates that are not properly modelled. Fitting leaching losses to mathematical models has not been investigated thoroughly, and the use of models assuming constant decay rates leads to inappropriate assessments of leaching kinetics. We aim to describe, assess, and compare different leaching kinetics models fitted to leaf litter mass losses from six Neotropical riparian forest species. Leaf litter from each species was collected in the lower reaches of San Miguel stream in Northern Venezuela. Air-dried leaves from each species were incubated in 250 ml of water in the dark at room temperature. At 1h, 6h, 1d, 2d, 4d, 8d and 15d, three jars were removed from the assay in a no-replacement experimental design. At each time, leaves from each jar were removed and oven-dried. Afterwards, the dried leaves were weighed and the remaining dry mass was determined and expressed as ash-free dry mass. Mass losses of leaf litter showed steep declines for the first two days followed by a steady decrease in mass loss. Data were fitted to three different models: single-exponential, power, and rational. Our results showed that the mass loss predicted with the single-exponential model did not reflect the real data at any stage of the leaching process. The power model showed a better fit, but failed to predict the behavior during the early stages of leaching. To evaluate the performance of our models we used three criteria: Adj-R2, Akaike's Information Criterion (AIC), and residual distribution. Higher Adj-R2 values were obtained for the power and the rational-type models. However, when AIC and residual distributions were used, the only model that could satisfactorily predict the behavior of our dataset was the rational-type. Even though the Adj-R2 was higher for some species with the power model than with the rational-type model, our results showed that this criterion alone cannot demonstrate the predictive performance of a model. Adj-R2 is usually used to assess the goodness of fit of a mathematical model, disregarding the fact that a good Adj-R2 can be obtained even when the statistical assumptions required for the validity of the model are not satisfied. Our results showed that sampling at the initial stages of leaching is necessary to adequately describe this process. We also provided evidence that using traditional mathematical models is not the best option to evaluate leaching kinetics because of their mathematical inability to properly describe the abrupt changes that occur during the early stages of leaching. We also found it useful to apply different criteria to evaluate the goodness of fit and performance of each candidate model, taking into account both the statistical and the biological meaning of the results.
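
    To make the model-comparison step concrete, the sketch below fits a single-exponential, a power-law, and a simple rational-type curve to hypothetical remaining-mass data and ranks them with a least-squares AIC. The data, the particular rational form, and the starting values are assumptions for illustration; the exact functional forms used in the study are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical remaining ash-free dry mass fraction at the sampling times (days)
t = np.array([0.042, 0.25, 1.0, 2.0, 4.0, 8.0, 15.0])
m = np.array([0.93, 0.85, 0.78, 0.74, 0.71, 0.69, 0.68])

models = {
    "single exponential": lambda t, m0, k: m0 * np.exp(-k * t),
    "power": lambda t, a, b: a * t ** (-b),
    "rational (example form)": lambda t, m0, k, c: (m0 + c * t) / (1.0 + k * t),
}

def aic(n_obs, rss, n_par):
    """Akaike Information Criterion for a least-squares fit."""
    return n_obs * np.log(rss / n_obs) + 2 * n_par

for name, f in models.items():
    n_par = f.__code__.co_argcount - 1
    p, _ = curve_fit(f, t, m, p0=np.ones(n_par), maxfev=20000)
    rss = float(np.sum((m - f(t, *p)) ** 2))
    print(f"{name:25s} AIC = {aic(len(t), rss, n_par):6.1f}")
```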

  11. Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed

    NASA Astrophysics Data System (ADS)

    Walsh, Alex J.; Beier, Hope T.

    2016-03-01

    Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging using laser scanning microscopes. However, TCSPC is inherently slow, making it ill-suited to capturing rapid events: at most one photon is recorded per laser pulse, which imposes long acquisition times and requires keeping the photon detection rate low to avoid biasing measurements towards short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument response deconvolution and fluorescence lifetime exponential decay estimation. Instrument response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques to accurately estimate double exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low-photon-count data. Such a technique reduces the required number of photons for accurate component estimation if lifetime values are known, such as for commercial fluorescent dyes and FRET experiments, and improves imaging speed 10-fold.
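
    A hedged sketch of the temporal-binning idea: simulate photon arrival times from a known double-exponential decay, histogram them into a chosen number of time bins, and fit a biexponential by least squares. The lifetimes, weights, and photon counts are invented, and the instrument response deconvolution discussed above is deliberately omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Simulate photon arrival times (ns) from a double-exponential decay
tau1, tau2, w1, n_photons = 0.4, 3.0, 0.7, 5000
fast = rng.random(n_photons) < w1
arrivals = np.where(fast, rng.exponential(tau1, n_photons), rng.exponential(tau2, n_photons))

def fit_decay(arrivals, n_bins, t_max=12.5):
    """Histogram TCSPC arrivals into n_bins and fit a biexponential decay."""
    counts, edges = np.histogram(arrivals, bins=n_bins, range=(0.0, t_max))
    centers = 0.5 * (edges[:-1] + edges[1:])

    def model(t, a1, t1, a2, t2):
        return a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2)

    p, _ = curve_fit(model, centers, counts,
                     p0=(counts[0], 0.5, counts[0] / 4.0, 2.0), maxfev=20000)
    return p

for n_bins in (256, 42, 36):
    a1, t1, a2, t2 = fit_decay(arrivals, n_bins)
    print(f"{n_bins:3d} bins: tau1 ~ {t1:.2f} ns, tau2 ~ {t2:.2f} ns")
```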

  12. Engineering of an ultra-thin molecular superconductor by charge transfer

    DOEpatents

    Hla, Saw Wai; Hassanien, Abdelrahim; Kendal, Clark

    2016-06-07

    A method of forming a superconductive device of a single layer of (BETS)2GaCl4 molecules on a substrate surface, which displays a superconducting gap that increases exponentially with the length of the molecular chain, is provided.

  13. Quasiclassical treatment of the Auger effect in slow ion-atom collisions

    NASA Astrophysics Data System (ADS)

    Frémont, F.

    2017-09-01

    A quasiclassical model based on the resolution of Hamilton equations of motion is used to get evidence for Auger electron emission following double-electron capture in 150-keV Ne10+ + He collisions. Electron-electron interaction is taken into account during the collision by using a pure Coulombic potential. To make sure that the helium target is stable before the collision, phenomenological potentials for the electron-nucleus interactions that simulate the Heisenberg principle are included in addition to the Coulombic potential. First, single- and double-electron captures are determined and compared with previous experiments and theories. Then, the integration time evolution is calculated for autoionizing and nonautoionizing double capture. In contrast with single capture, the number of electrons originating from autoionization slowly increases with integration time. A fit of the calculated cross sections by means of an exponential function indicates that the average lifetime is 4.4 × 10^-3 a.u., in very good agreement with the average lifetime deduced from experiments and a classical model introduced to calculate individual angular momentum distributions. The present calculation demonstrates the ability of classical models to treat the Auger effect, which is a pure quantum effect.

  14. Time-resolved fluorescence of thioredoxin single-tryptophan mutants: modeling experimental results with minimum perturbation mapping

    NASA Astrophysics Data System (ADS)

    Silva, Norberto D., Jr.; Haydock, Christopher; Prendergast, Franklyn G.

    1994-08-01

    The time-resolved fluorescence decay of single tryptophan (Trp) proteins is typically described using either a distribution of lifetimes or a sum of two or more exponential terms. A possible interpretation for this fluorescence decay heterogeneity is the existence of different isomeric conformations of Trp about its χ1 and χ2 dihedral angles. Are multiple Trp conformations compatible with the remainder of the protein in its crystallographic configuration or do they require repacking of neighbor side chains? It is conceivable that isomers of the neighbor side chains interconvert slowly on the fluorescence timescale and contribute additional lifetime components to the fluorescence intensity. We have explored this possibility by performing minimum perturbation mapping simulations of Trp 28 and Trp 31 in thioredoxin (TRX) using CHARMm 22. Mappings of Trp 28 and Trp 31 give the TRX Trp residue energy landscape as a function of the χ1 and χ2 dihedral angles. Time-resolved fluorescence intensity and anisotropy decay of mutant TRX (W28F and W31F) are measured and interpreted in light of the above simulations. Relevant observables, like order parameters and isomerization rates, can be derived from the minimum perturbation maps and compared with experiment.

  15. A Comparison of the Pencil-of-Function Method with Prony’s Method, Wiener Filters and Other Identification Techniques,

    DTIC Science & Technology

    1977-12-01

    exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much...functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect...to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution

  16. Droplet size and velocity distributions for spray modelling

    NASA Astrophysics Data System (ADS)

    Jones, D. P.; Watkins, A. P.

    2012-01-01

    Methods for constructing droplet size distributions and droplet velocity profiles are examined as a basis for the Eulerian spray model proposed in Beck and Watkins (2002,2003) [5,6]. Within the spray model, both distributions must be calculated at every control volume at every time-step where the spray is present and valid distributions must be guaranteed. Results show that the Maximum Entropy formalism combined with the Gamma distribution satisfy these conditions for the droplet size distributions. Approximating the droplet velocity profile is shown to be considerably more difficult due to the fact that it does not have compact support. An exponential model with a constrained exponent offers plausible profiles.

  17. Impact of nonzero boresight pointing error on ergodic capacity of MIMO FSO communication systems.

    PubMed

    Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Beatriz; Castillo-Vázquez, Carmen

    2016-02-22

    A thorough investigation of the impact of nonzero boresight pointing errors on the ergodic capacity of multiple-input/multiple-output (MIMO) free-space optical (FSO) systems with equal gain combining (EGC) reception under different turbulence models, which are modeled as statistically independent but not necessarily identically distributed (i.n.i.d.), is addressed in this paper. Novel closed-form asymptotic expressions at high signal-to-noise ratio (SNR) for the ergodic capacity of MIMO FSO systems are derived when different geometric arrangements of the receive apertures at the receiver are considered in order to reduce the effect of the nonzero inherent boresight displacement, which is inevitably present when more than one receive aperture is considered. As a result, the asymptotic ergodic capacity of MIMO FSO systems is evaluated over log-normal (LN), gamma-gamma (GG) and exponentiated Weibull (EW) atmospheric turbulence in order to study different turbulence conditions, different sizes of receive apertures as well as different aperture averaging conditions. It is concluded that the use of single-input/multiple-output (SIMO) and MIMO techniques can significantly increase the ergodic capacity with respect to the direct-path link when the inherent boresight displacement takes small values, i.e., when the spacing among receive apertures is not too large. The effect of nonzero additional boresight errors, which is due to the thermal expansion of the building, is evaluated in multiple-input/single-output (MISO) and single-input/single-output (SISO) FSO systems. Simulation results are further included to confirm the analytical results.

  18. Taming active turbulence with patterned soft interfaces.

    PubMed

    Guillamat, P; Ignés-Mullol, J; Sagués, F

    2017-09-15

    Active matter embraces systems that self-organize at different length and time scales, often exhibiting turbulent flows apparently deprived of spatiotemporal coherence. Here, we use a layer of a tubulin-based active gel to demonstrate that the geometry of active flows is determined by a single length scale, which we reveal in the exponential distribution of vortex sizes of active turbulence. Our experiments demonstrate that the same length scale reemerges as a cutoff for a scale-free power-law distribution of swirling laminar flows when the material evolves in contact with a lattice of circular domains. The observed prevalence of this active length scale can be understood by considering the role of the topological defects that form during the spontaneous folding of microtubule bundles. These results demonstrate an unexpected strategy for active systems to adapt to external stimuli, and provide a handle to probe the existence of intrinsic length and time scales. Active nematics consist of self-driven components that develop orientational order and turbulent flow. Here Guillamat et al. investigate an active nematic constrained in a quasi-2D geometrical setup and show that there exists an intrinsic length scale that determines the geometry in all forcing regimes.

  19. HPC-Microgels: New Look at Structure and Dynamics

    NASA Astrophysics Data System (ADS)

    McKenna, John; Streletzky, Kiril; Mohieddine, Rami

    2006-10-01

    Issues remain unresolved in targeted chemotherapy, including an inability to effectively target cancerous tissue, the loss of low-molecular-weight medicines to the RES system, the high cytotoxicity of currently used drug carriers, and the inability to control the release of medicines upon arrival at the target. Hydroxypropyl cellulose (HPC) microgels may be able to surmount these obstacles. HPC is a high-molecular-weight polymer with low cytotoxicity and a critical temperature around 41 °C. We cross-linked HPC polymer chains to produce microgel nanoparticles and studied their structure and dynamics using dynamic light scattering spectroscopy. The complex nature of the fluid and the large size distribution of the particles render the typical characterization algorithm, CONTIN, ineffective and inconsistent. Instead, the particle spectra have been fit to a sum of stretched exponentials. Each term offers three parameters for analysis and represents a single mode. The results of this analysis show that the microgels undergo a multi- to uni-modal transition around 41 °C. The CONTIN size distribution analysis shows similar results, but with much less consistency and resolution. During the phase transition it is found that the microgel particles actually shrink. This property might be particularly useful for controlled drug delivery and release.
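
    The stretched-exponential analysis mentioned above can be illustrated by fitting a correlation decay to a sum of two stretched exponentials, each mode contributing an amplitude, a relaxation time, and a stretching exponent. The data below are synthetic and the parameter values are assumptions, not the microgel measurements.

      # Hedged sketch: fitting a sum of two stretched exponentials to a synthetic decay.
      import numpy as np
      from scipy.optimize import curve_fit

      def two_stretched(t, a1, tau1, b1, a2, tau2, b2):
          """Two-mode model: each mode is a stretched exponential A*exp(-(t/tau)**beta)."""
          return (a1 * np.exp(-(t / tau1) ** b1) +
                  a2 * np.exp(-(t / tau2) ** b2))

      t = np.logspace(-6, 0, 200)                      # lag time in seconds (assumed range)
      rng = np.random.default_rng(1)
      truth = two_stretched(t, 0.6, 1e-4, 0.8, 0.4, 1e-2, 0.6)
      data = truth + 0.005 * rng.standard_normal(t.size)

      p0 = (0.5, 1e-4, 0.9, 0.5, 1e-2, 0.7)            # rough starting guesses
      popt, _ = curve_fit(two_stretched, t, data, p0=p0, maxfev=20000)
      print("fitted (amplitude, tau, beta) per mode:\n", popt.reshape(2, 3))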

  20. [Spatial variability and evaluation of soil heavy metal contamination in the urban-transect of Shanghai].

    PubMed

    Liu, Yun-Long; Zhang, Li-Jia; Han, Xiao-Fei; Zhuang, Teng-Fei; Shi, Zhen-Xiang; Lu, Xiao-Zhe

    2012-02-01

    Soil heavy metal concentrations along a typical urban transect in Shanghai were analyzed to indicate the effect of urbanization and industrialization on soil environmental quality. The spatial variation structure and distribution of 5 heavy metals (Cu, Cr, Mn, Pb and Zn) in the topsoil of the urban transect were analyzed. The single pollution index and the composite pollution index were used to evaluate the soil heavy metal pollution. The results showed that the average concentrations of Cu, Pb, Zn, Cr and Mn were 27.80, 28.86, 99.36, 87.72 and 556.97 mg x kg(-1), respectively. Cu, Cr, Mn, Pb and Zn were of medium variability; Mn was distributed lognormally, while Cu, Cr, Pb and Zn were distributed normally. The results of semivariance analysis showed that Mn was best fit by the exponential model, while Cr, Pb, Cu and Zn were best fit by the linear model. The spatial distribution maps of heavy metal content of the topsoil in this city transect were produced by means of universal kriging interpolation. Cu was spatially distributed in a ribbon pattern, Cr and Mn were distributed in an island pattern, while the spatial distribution of Pb and Zn showed a mixed ribbon-and-island character. The soil pollution evaluation showed that the pollution of Cr, Zn and Pb was relatively severe. Cr, Zn, Pb, Mn and Cu were significantly correlated, and heavy metal co-contamination existed in the soil. The difference in soil heavy metal pollution along the urban-suburban-rural gradient was obvious; the spatial variation of heavy metal concentrations in the soil was closely related to the degree of industrialization and urbanization of the city.

  1. Estimating time since infection in early homogeneous HIV-1 samples using a poisson model

    PubMed Central

    2010-01-01

    Background The occurrence of a genetic bottleneck in HIV sexual or mother-to-infant transmission has been well documented. This results in a majority of new infections being homogeneous, i.e., initiated by a single genetic strain. Early after infection, prior to the onset of the host immune response, the viral population grows exponentially. In this simple setting, an approach for estimating evolutionary and demographic parameters based on comparison of diversity measures is a feasible alternative to the existing Bayesian methods (e.g., BEAST), which are instead based on the simulation of genealogies. Results We have devised a web tool that analyzes genetic diversity in acutely infected HIV-1 patients by comparing it to a model of neutral growth. More specifically, we consider a homogeneous infection (i.e., initiated by a unique genetic strain) prior to the onset of host-induced selection, where we can assume a random accumulation of mutations. Previously, we have shown that such a model successfully describes about 80% of sexual HIV-1 transmissions provided the samples are drawn early enough in the infection. Violation of the model is an indicator of either heterogeneous infections or the initiation of selection. Conclusions When the underlying assumptions of our model (homogeneous infection prior to selection and fast exponential growth) are met, we are in a very particular scenario for which we can use a forward approach (instead of backwards in time as provided by coalescent methods). This allows for more computationally efficient methods to derive the time since the most recent common ancestor. Furthermore, the tool performs statistical tests on the Hamming distance frequency distribution, and outputs summary statistics (mean of the best-fitting Poisson distribution, goodness-of-fit p-value, etc.). The tool runs within minutes and can readily accommodate the tens of thousands of sequences generated through new ultradeep pyrosequencing technologies. The tool is available on the LANL website. PMID:20973976
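
    In the same spirit as the tool described above (but not its implementation), a toy calculation of pairwise Hamming distances, the maximum-likelihood Poisson mean, and a naive age estimate might look as follows. The alignment, the mutation rate, and the factor of two for pairwise divergence are illustrative assumptions.

      # Rough sketch, not the LANL tool: Hamming distances, Poisson MLE, crude timing.
      import itertools
      import numpy as np

      def hamming(a, b):
          """Number of mismatched positions between two equal-length sequences."""
          return sum(x != y for x, y in zip(a, b))

      seqs = ["ACGTACGTAA", "ACGTACGTAT", "ACGAACGTAA", "ACGTACGTAA"]   # toy alignment
      dists = [hamming(a, b) for a, b in itertools.combinations(seqs, 2)]

      lam = np.mean(dists)                       # MLE of the Poisson mean
      mu_per_site = 2.16e-5                      # assumed per-site, per-generation rate
      L = len(seqs[0])
      generations = lam / (2 * mu_per_site * L)  # pairwise distances accrue on two lineages (assumption)
      print(f"mean Hamming distance = {lam:.2f}, crude age ≈ {generations:.0f} generations")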

  2. Compactness and robustness: Applications in the solution of integral equations for chemical kinetics and electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Zhou, Yajun

    This thesis employs the topological concept of compactness to deduce robust solutions to two integral equations arising from chemistry and physics: the inverse Laplace problem in chemical kinetics and the vector wave scattering problem in dielectric optics. The inverse Laplace problem occurs in the quantitative understanding of biological processes that exhibit complex kinetic behavior: different subpopulations of transition events from the "reactant" state to the "product" state follow distinct reaction rate constants, which results in a weighted superposition of exponential decay modes. Reconstruction of the rate constant distribution from kinetic data is often critical for mechanistic understandings of chemical reactions related to biological macromolecules. We devise a "phase function approach" to recover the probability distribution of rate constants from decay data in the time domain. The robustness (numerical stability) of this reconstruction algorithm builds upon the continuity of the transformations connecting the relevant function spaces that are compact metric spaces. The robust "phase function approach" not only is useful for the analysis of heterogeneous subpopulations of exponential decays within a single transition step, but also is generalizable to the kinetic analysis of complex chemical reactions that involve multiple intermediate steps. A quantitative characterization of light scattering is central to many meteorological, optical, and medical applications. We give a rigorous treatment to electromagnetic scattering on arbitrarily shaped dielectric media via the Born equation: an integral equation with a strongly singular convolution kernel that corresponds to a non-compact Green operator. By constructing a quadratic polynomial of the Green operator that cancels out the kernel singularity and satisfies the compactness criterion, we reveal the universality of a real resonance mode in dielectric optics. Meanwhile, exploiting the properties of compact operators, we outline the geometric and physical conditions that guarantee a robust solution to the light scattering problem, and devise an asymptotic solution to the Born equation of electromagnetic scattering for arbitrarily shaped dielectrics in a non-perturbative manner.

  3. ‘Sleepy’ inward rectifier channels in guinea-pig cardiomyocytes are activated only during strong hyperpolarization

    PubMed Central

    Liu, Gong Xin; Daut, Jürgen

    2002-01-01

    K+ channels of isolated guinea-pig cardiomyocytes were studied using the patch-clamp technique. At transmembrane potentials between −120 and −220 mV we observed inward currents through an apparently novel channel. The novel channel was strongly rectifying; no outward currents could be recorded. Between −200 and −160 mV it had a slope conductance of 42.8 ± 3.0 pS (s.d.; n = 96). The open probability (Po) showed a sigmoid voltage dependence and reached a maximum of 0.93 at −200 mV; half-maximal activation was approximately −150 mV. The voltage dependence of Po was not affected by application of 50 μM isoproterenol. The open-time distribution could be described by a single exponential function; the mean open time ranged between 73.5 ms at −220 mV and 1.4 ms at −160 mV. At least two exponential components were required to fit the closed time distribution. Experiments with different external Na+, K+ and Cl− concentrations suggested that the novel channel is K+ selective. Extracellular Ba2+ ions gave rise to a voltage-dependent reduction in Po by inducing long closed states; Cs+ markedly reduced mean open time at −200 mV. In cell-attached recordings the novel channel frequently converted to a classical inward rectifier channel, and vice versa. This conversion was not voltage dependent. After excision of the patch, the novel channel always converted to a classical inward rectifier channel within 0–3 min. This conversion was not affected by intracellular Mg2+, phosphatidylinositol (4,5)-bisphosphate or spermine. Taken together, our findings suggest that the novel K+ channel represents a different ‘mode’ of the classical inward rectifier channel in which opening occurs only at very negative potentials. PMID:11897847
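
    The single-exponential description of the open-time distribution reported above corresponds to a one-parameter maximum-likelihood fit, for which the estimator of the mean open time is simply the sample mean. A small sketch with synthetic dwell times (values assumed, not the recorded data):

      # Minimal sketch: ML estimate of the mean open time for a single-exponential
      # dwell-time distribution, using synthetic open-interval durations.
      import numpy as np

      rng = np.random.default_rng(2)
      open_times_ms = rng.exponential(scale=1.4, size=500)   # synthetic, e.g. near -160 mV

      tau_hat = open_times_ms.mean()                          # MLE for an exponential
      loglik = -open_times_ms.size * np.log(tau_hat) - open_times_ms.sum() / tau_hat
      print(f"mean open time ≈ {tau_hat:.2f} ms, log-likelihood = {loglik:.1f}")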

  4. Exponential stabilization of magnetoelastic waves in a Mindlin-Timoshenko plate by localized internal damping

    NASA Astrophysics Data System (ADS)

    Grobbelaar-Van Dalsen, Marié

    2015-08-01

    This article is a continuation of our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) on the polynomial stabilization of a linear model for the magnetoelastic interactions in a two-dimensional electrically conducting Mindlin-Timoshenko plate. We introduce nonlinear damping that is effective only in a small portion of the interior of the plate. It turns out that the model is uniformly exponentially stable when the function that represents the locally distributed damping behaves linearly near the origin. However, the use of Mindlin-Timoshenko plate theory in the model enforces a restriction on the region occupied by the plate.

  5. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

    We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modelling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
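
    As a frequentist stand-in for the Bayesian model selection described above (not the authors' method), the sketch below compares mono- and bi-exponential decay models on synthetic arrival times by maximum likelihood, using AIC as the selection criterion; all parameter values are assumed.

      # Hedged sketch: mono- vs bi-exponential model comparison via ML and AIC.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      t = np.concatenate([rng.exponential(0.5, 3000), rng.exponential(2.5, 1500)])  # ns, synthetic

      def nll_mono(p):
          tau, = p
          return t.size * np.log(tau) + t.sum() / tau

      def nll_bi(p):
          f, tau1, tau2 = p
          pdf = f * np.exp(-t / tau1) / tau1 + (1 - f) * np.exp(-t / tau2) / tau2
          return -np.log(pdf).sum()

      m1 = minimize(nll_mono, x0=[1.0], bounds=[(1e-3, None)])
      m2 = minimize(nll_bi, x0=[0.5, 0.3, 3.0],
                    bounds=[(1e-3, 1 - 1e-3), (1e-3, None), (1e-3, None)])
      aic1, aic2 = 2 * 1 + 2 * m1.fun, 2 * 3 + 2 * m2.fun
      print("AIC mono:", round(aic1, 1), " AIC bi:", round(aic2, 1))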

  6. Parabolic replicator dynamics and the principle of minimum Tsallis information gain

    PubMed Central

    2013-01-01

    Background Non-linear, parabolic (sub-exponential) and hyperbolic (super-exponential) models of prebiological evolution of molecular replicators have been proposed and extensively studied. The parabolic models appear to be the most realistic approximations of real-life replicator systems, due primarily to product inhibition. Unlike the more traditional exponential models, the distribution of individual frequencies in an evolving parabolic population is not described by the Maximum Entropy (MaxEnt) Principle in its traditional form, whereby the distribution with the maximum Shannon entropy is chosen among all the distributions that are possible under the given constraints. We sought to identify a more general form of the MaxEnt principle that would be applicable to parabolic growth. Results We consider a model of a population that reproduces according to the parabolic growth law and show that the frequencies of individuals in the population minimize the Tsallis relative entropy (non-additive information gain) at each time moment. Next, we consider a model of a parabolically growing population that maintains a constant total size and provide an “implicit” solution for this system. We show that in this case, the frequencies of the individuals in the population also minimize the Tsallis information gain at each moment of the ‘internal time’ of the population. Conclusions The results of this analysis show that the general MaxEnt principle is the underlying law for the evolution of a broad class of replicator systems including not only exponential but also parabolic and hyperbolic systems. The choice of the appropriate entropy (information) function depends on the growth dynamics of a particular class of systems. The Tsallis entropy is non-additive for independent subsystems, i.e., the information on the subsystems is insufficient to describe the system as a whole. In the context of prebiotic evolution, this “non-reductionist” nature of parabolic replicator systems might reflect the importance of group selection and competition between ensembles of cooperating replicators. Reviewers This article was reviewed by Viswanadham Sridhara (nominated by Claus Wilke), Purushottam Dixit (nominated by Sergei Maslov), and Nick Grishin. For the complete reviews, see the Reviewers’ Reports section. PMID:23937956

  7. Bimodal spatial distribution of macular pigment: evidence of a gender relationship

    NASA Astrophysics Data System (ADS)

    Delori, François C.; Goger, Douglas G.; Keilhauer, Claudia; Salvetti, Paola; Staurenghi, Giovanni

    2006-03-01

    The spatial distribution of the optical density of the human macular pigment measured by two-wavelength autofluorescence imaging exhibits in over half of the subjects an annulus of higher density superimposed on a central exponential-like distribution. This annulus is located at about 0.7° from the fovea. Women have broader distributions than men, and they are more likely to exhibit this bimodal distribution. Maxwell's spot reported by subjects matches the measured distribution of their pigment. Evidence that the shape of the foveal depression may be gender related leads us to hypothesize that differences in macular pigment distribution are related to anatomical differences in the shape of the foveal depression.

  8. Craig's XY distribution and the statistics of Lagrangian power in two-dimensional turbulence

    NASA Astrophysics Data System (ADS)

    Bandi, Mahesh M.; Connaughton, Colm

    2008-03-01

    We examine the probability distribution function (PDF) of the energy injection rate (power) in numerical simulations of stationary two-dimensional (2D) turbulence in the Lagrangian frame. The simulation is designed to mimic an electromagnetically driven fluid layer, a well-documented system for generating 2D turbulence in the laboratory. In our simulations, the forcing and velocity fields are close to Gaussian. On the other hand, the measured PDF of injected power is very sharply peaked at zero, suggestive of a singularity there, with tails which are exponential but asymmetric. Large positive fluctuations are more probable than large negative fluctuations. It is this asymmetry of the tails which leads to a net positive mean value for the energy input despite the most probable value being zero. The main features of the power distribution are well described by Craig’s XY distribution for the PDF of the product of two correlated normal variables. We show that the power distribution should exhibit a logarithmic singularity at zero and decay exponentially for large absolute values of the power. We calculate the asymptotic behavior and express the asymmetry of the tails in terms of the correlation coefficient of the force and velocity. We compare the measured PDFs with the theoretical calculations and briefly discuss how the power PDF might change with other forcing mechanisms.

  10. Mathematical Aspects of Reliability-Centered Maintenance

    DTIC Science & Technology

    1977-01-01

    exponential distribution, whose parameter (hazard rate) can be realistically estimated. This distribution is also frequently... statistical methods to the study of physical reality was beset with philosophical problems arising from the irrefutable observation that there is but one... STATISTICS, 2nd ed. New York: John Wiley & Sons; 1954. 5. Kolmogorov, A. Interpolation und Extrapolation von stationären zufälligen Folgen. BULL. DE

  11. Context-Sensitive Detection of Local Community Structure

    DTIC Science & Technology

    2011-04-01

    characters in the Victor Hugo novel Les Miserables (lesmis). [77 vertices, 254 edges] [Knu93]. • The neural network of the nematode C. elegans (c.elegans... adjectives and nouns in the novel David Copperfield by Charles Dickens. [112 vertices, 425 edges] [New06]. • Les Miserables. Co-appearance network of... exponential distribution. The degree distributions of the Network Science, Les Miserables, and Word Adjacencies networks display a similar heavy tail. By

  12. Complexity and Productivity Differentiation Models of Metallogenic Indicator Elements in Rocks and Supergene Media Around Daijiazhuang Pb-Zn Deposit in Dangchang County, Gansu Province

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Jin-zhong, E-mail: viewsino@163.com; Yao, Shu-zhen; Zhang, Zhong-ping

    2013-03-15

    With the help of complexity indices, we quantitatively studied multifractals, frequency distributions, and linear and nonlinear characteristics of geochemical data for exploration of the Daijiazhuang Pb-Zn deposit. Furthermore, we derived productivity differentiation models of elements from thermodynamics and self-organized criticality of metallogenic systems. With respect to frequency distributions and multifractals, only Zn in rocks and most elements except Sb in secondary media, which had been derived mainly from weathering and alluviation, exhibit nonlinear distributions. The relations of productivity to concentrations of metallogenic elements and paragenic elements in rocks and those of elements strongly leached in secondary media can be seen as linear addition of exponential functions with a characteristic weak chaos. The relations of associated elements such as Mo, Sb, and Hg in rocks and other elements in secondary media can be expressed as an exponential function, and the relations of one-phase self-organized geological or metallogenic processes can be represented by a power function, each representing secondary chaos or strong chaos. For secondary media, exploration data of most elements should be processed using nonlinear mathematical methods or should be transformed to linear distributions before processing using linear mathematical methods.

  13. Fluctuations in Wikipedia access-rate and edit-event data

    NASA Astrophysics Data System (ADS)

    Kämpf, Mirko; Tismer, Sebastian; Kantelhardt, Jan W.; Muchnik, Lev

    2012-12-01

    Internet-based social networks often reflect extreme events in nature and society by drastic increases in user activity. We study and compare the dynamics of the two major complex processes necessary for information spread via the online encyclopedia ‘Wikipedia’, i.e., article editing (information upload) and article access (information viewing) based on article edit-event time series and (hourly) user access-rate time series for all articles. Daily and weekly activity patterns occur in addition to fluctuations and bursting activity. The bursts (i.e., significant increases in activity for an extended period of time) are characterized by a power-law distribution of durations of increases and decreases. For describing the recurrence and clustering of bursts we investigate the statistics of the return intervals between them. We find stretched exponential distributions of return intervals in access-rate time series, while edit-event time series yield simple exponential distributions. To characterize the fluctuation behavior we apply detrended fluctuation analysis (DFA), finding that most article access-rate time series are characterized by strong long-term correlations with fluctuation exponents α≈0.9. The results indicate significant differences in the dynamics of information upload and access and help in understanding the complex process of collecting, processing, validating, and distributing information in self-organized social networks.
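
    The contrast reported above between stretched-exponential and simple-exponential return-interval statistics can be illustrated by fitting both forms to an empirical survival function. The sketch below uses synthetic Weibull-distributed intervals, not the Wikipedia time series.

      # Sketch under assumptions: stretched vs simple exponential fits to a survival function.
      import numpy as np
      from scipy.optimize import curve_fit

      def stretched(r, a, g):
          return np.exp(-(r / a) ** g)

      def simple(r, a):
          return np.exp(-r / a)

      rng = np.random.default_rng(4)
      intervals = rng.weibull(0.6, 5000) * 10.0          # synthetic return intervals

      r = np.sort(intervals)
      surv = 1.0 - np.arange(1, r.size + 1) / r.size     # empirical survival P(R > r)

      (p_a, p_g), _ = curve_fit(stretched, r[:-1], surv[:-1], p0=(5.0, 0.7))
      (q_a,), _ = curve_fit(simple, r[:-1], surv[:-1], p0=(5.0,))
      print(f"stretched: a={p_a:.2f}, gamma={p_g:.2f}; simple: a={q_a:.2f}")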

  14. Beyond Word Frequency: Bursts, Lulls, and Scaling in the Temporal Distributions of Words

    PubMed Central

    Altmann, Eduardo G.; Pierrehumbert, Janet B.; Motter, Adilson E.

    2009-01-01

    Background Zipf's discovery that word frequency distributions obey a power law established parallels between biological and physical processes, and language, laying the groundwork for a complex systems perspective on human communication. More recent research has also identified scaling regularities in the dynamics underlying the successive occurrences of events, suggesting the possibility of similar findings for language as well. Methodology/Principal Findings By considering frequent words in USENET discussion groups and in disparate databases where the language has different levels of formality, here we show that the distributions of distances between successive occurrences of the same word display bursty deviations from a Poisson process and are well characterized by a stretched exponential (Weibull) scaling. The extent of this deviation depends strongly on semantic type – a measure of the logicality of each word – and less strongly on frequency. We develop a generative model of this behavior that fully determines the dynamics of word usage. Conclusions/Significance Recurrence patterns of words are well described by a stretched exponential distribution of recurrence times, an empirical scaling that cannot be anticipated from Zipf's law. Because the use of words provides a uniquely precise and powerful lens on human thought and activity, our findings also have implications for other overt manifestations of collective human dynamics. PMID:19907645

  15. Cluster-cluster aggregation with particle replication and chemotaxy: a simple model for the growth of animal cells in culture

    NASA Astrophysics Data System (ADS)

    Alves, S. G.; Martins, M. L.

    2010-09-01

    Aggregation of animal cells in culture comprises a series of motility, collision and adhesion processes of basic relevance for tissue engineering, bioseparations, oncology research and in vitro drug testing. In the present paper, a cluster-cluster aggregation model with stochastic particle replication and chemotactically driven motility is investigated as a model for the growth of animal cells in culture. The focus is on the scaling laws governing the aggregation kinetics. Our simulations reveal that in the absence of chemotaxy the mean cluster size and the total number of clusters scale in time as stretched exponentials dependent on the particle replication rate. Also, the dynamical cluster size distribution functions are represented by a scaling relation in which the scaling function involves a stretched exponential of the time. The introduction of chemoattraction among the particles leads to distribution functions decaying as power laws with exponents that decrease in time. The fractal dimensions and size distributions of the simulated clusters are qualitatively discussed in terms of those determined experimentally for several normal and tumoral cell lines growing in culture. It is shown that particle replication and chemotaxy account for the simplest cluster size distributions of cellular aggregates observed in culture.

  16. Exponential Arithmetic Based Self-Healing Group Key Distribution Scheme with Backward Secrecy under the Resource-Constrained Wireless Networks

    PubMed Central

    Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun

    2016-01-01

    In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in resource-constrained wireless networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool, which can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to a constant and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, where it has optimal storage overhead while having no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency and satisfactory security. The simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-restrained wireless networks. Finally, we show the application of our proposed scheme. PMID:27136550

  17. Bayesian analysis of the kinetics of quantal transmitter secretion at the neuromuscular junction.

    PubMed

    Saveliev, Anatoly; Khuzakhmetova, Venera; Samigullin, Dmitry; Skorinkin, Andrey; Kovyazina, Irina; Nikolsky, Eugeny; Bukharaeva, Ellya

    2015-10-01

    The timing of transmitter release from nerve endings is considered nowadays as one of the factors determining the plasticity and efficacy of synaptic transmission. In the neuromuscular junction, the moments of release of individual acetylcholine quanta are related to the synaptic delays of uniquantal endplate currents recorded under conditions of lowered extracellular calcium. Using Bayesian modelling, we performed a statistical analysis of synaptic delays in mouse neuromuscular junction with different patterns of rhythmic nerve stimulation and when the entry of calcium ions into the nerve terminal was modified. We have obtained a statistical model of the release timing which is represented as the summation of two independent statistical distributions. The first of these is the exponentially modified Gaussian distribution. The mixture of normal and exponential components in this distribution can be interpreted as a two-stage mechanism of early and late periods of phasic synchronous secretion. The parameters of this distribution depend on both the stimulation frequency of the motor nerve and the calcium ions' entry conditions. The second distribution was modelled as quasi-uniform, with parameters independent of nerve stimulation frequency and calcium entry. Two different probability density functions for the distribution of synaptic delays suggest at least two independent processes controlling the time course of secretion, one of them potentially involving two stages. The relative contribution of these processes to the total number of mediator quanta released depends differently on the motor nerve stimulation pattern and on calcium ion entry into nerve endings.
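
    A small illustration of the two-component delay model described above: an exponentially modified Gaussian for the phasic (two-stage) component plus a quasi-uniform component for delayed release. The weights and time constants here are assumed for illustration, not the fitted values from the study.

      # Hedged sketch: mixture of an exponentially modified Gaussian and a uniform component.
      import numpy as np
      from scipy.stats import exponnorm, uniform

      def delay_pdf(t, w, mu, sigma, tau, lo, hi):
          """w * EMG(mu, sigma, tau) + (1 - w) * Uniform(lo, hi)."""
          emg = exponnorm.pdf(t, tau / sigma, loc=mu, scale=sigma)   # Gaussian + exponential stage
          flat = uniform.pdf(t, loc=lo, scale=hi - lo)               # quasi-uniform delayed release
          return w * emg + (1 - w) * flat

      t = np.linspace(0, 10, 500)        # synaptic delay in ms (assumed range)
      pdf = delay_pdf(t, w=0.85, mu=0.6, sigma=0.15, tau=0.4, lo=0.0, hi=10.0)
      print("normalization ≈", np.trapz(pdf, t))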

  18. Lévy flight with absorption: A model for diffusing diffusivity with long tails

    NASA Astrophysics Data System (ADS)

    Jain, Rohit; Sebastian, K. L.

    2017-03-01

    We consider diffusion of a particle in a rearranging environment, so that the diffusivity of the particle is a stochastic function of time. In our previous model of "diffusing diffusivity" [Jain and Sebastian, J. Phys. Chem. B 120, 3988 (2016), 10.1021/acs.jpcb.6b01527], it was shown that the mean square displacement of the particle remains Fickian, i.e., proportional to T at all times, but the probability distribution of the particle displacement is not Gaussian at all times. It is exponential at short times and crosses over to become Gaussian only in the large-time limit, in the case where the distribution of D in that model has a steady-state limit which is exponential, i.e., π_e(D) ∼ exp(−D/D_0). In the present study, we model the diffusivity of a particle as a Lévy flight process so that D has a power-law tailed distribution, viz., π_e(D) ∼ D^(−1−α) with 0 < α < 1. We find that in the short-time limit, the width of the displacement distribution is proportional to √T, implying that the diffusion is Fickian. But for long times, the width is proportional to T^(1/(2α)), which is a characteristic of anomalous diffusion. The distribution function for the displacement of the particle is found to be a symmetric stable distribution with stability index 2α, which preserves its shape at all times.
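
    A Monte Carlo sketch of the heterogeneous-diffusivity picture described above (my construction, not the paper's analytical solution): draw the diffusivity D from a power-law tailed density and the displacement from a Gaussian with variance 2DT, then compare the tails against a fixed-D reference.

      # Assumed construction: power-law tailed diffusivity propagates heavy tails into displacements.
      import numpy as np

      rng = np.random.default_rng(5)
      alpha, T, n = 0.7, 1.0, 200_000

      D = rng.pareto(alpha, n) + 1.0                          # tail ~ D^(-1-alpha), D >= 1
      x_hetero = rng.normal(0.0, np.sqrt(2 * D * T))          # heterogeneous-diffusivity displacements
      x_fixed = rng.normal(0.0, np.sqrt(2 * np.median(D) * T), n)

      q = 0.999
      print("99.9% quantile  heterogeneous:", np.quantile(np.abs(x_hetero), q).round(2),
            " fixed D:", np.quantile(np.abs(x_fixed), q).round(2))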

  19. Enhanced Response Time of Electrowetting Lenses with Shaped Input Voltage Functions.

    PubMed

    Supekar, Omkar D; Zohrabi, Mo; Gopinath, Juliet T; Bright, Victor M

    2017-05-16

    Adaptive optical lenses based on the electrowetting principle are being rapidly implemented in many applications, such as microscopy, remote sensing, displays, and optical communication. To characterize the response of these electrowetting lenses, the dependence upon direct current (DC) driving voltage functions was investigated in a low-viscosity liquid system. Cylindrical lenses with inner diameters of 2.45 and 3.95 mm were used to characterize the dynamic behavior of the liquids under DC voltage electrowetting actuation. With the increase of the rise time of the input exponential driving voltage, the originally underdamped system response can be damped, enabling a smooth response from the lens. We experimentally determined the optimal rise times for the fastest response from the lenses. We have also performed numerical simulations of the lens actuation with input exponential driving voltage to understand the variation in the dynamics of the liquid-liquid interface with various input rise times. We further enhanced the response time of the devices by shaping the input voltage function with multiple exponential rise times. For the 3.95 mm inner diameter lens, we achieved a response time improvement of 29% when compared to the fastest response obtained using single-exponential driving voltage. The technique shows great promise for applications that require fast response times.
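
    The shaped drive described above can be illustrated by composing a step voltage from several exponential rises with different time constants; the amplitudes and time constants below are assumptions, not the experimentally optimized values.

      # Illustrative waveform construction: a step shaped as a weighted sum of exponential rises.
      import numpy as np

      def shaped_step(t, v_final, weights, rise_times):
          """V(t) = V_final * sum_i w_i * (1 - exp(-t/tau_i)), with weights normalized to 1."""
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()                                    # so V(t) -> V_final as t grows
          return v_final * sum(wi * (1 - np.exp(-t / ti)) for wi, ti in zip(w, rise_times))

      t = np.linspace(0, 0.05, 1000)                          # seconds
      v = shaped_step(t, v_final=40.0, weights=[0.6, 0.4], rise_times=[2e-3, 10e-3])
      print("V at t = 50 ms ≈", round(float(v[-1]), 2), "V")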

  20. Stochastic processes in the social sciences: Markets, prices and wealth distributions

    NASA Astrophysics Data System (ADS)

    Romero, Natalia E.

    The present work uses statistical mechanics tools to investigate the dynamics of markets, prices, trades and wealth distribution. We studied the evolution of market dynamics in different stages of historical development by analyzing commodity prices from two distinct periods: ancient Babylon, and medieval and early modern England. We find that the first-digit distributions of both Babylon and England commodity prices follow Benford's law, indicating that the data represent empirical observations typically arising from a free market. Further, we find that the normalized prices of both Babylon and England agricultural commodities are characterized by stretched exponential distributions, and exhibit persistent correlations of a power-law type over long periods of up to several centuries, in contrast to contemporary markets. Our findings suggest that similar market interactions may underlie the dynamics of ancient agricultural commodity prices, and that these interactions may remain stable across centuries. To further investigate the dynamics of markets we present the analogy between transfers of money between individuals and the transfer of energy through particle collisions, by means of the kinetic theory of gases. We introduce a theoretical framework for how the micro rules of trading lead to the emergence of income and wealth distributions. In particular, we study the effects of different types of distribution of savings/investments among individuals in a society and of different welfare/subsidy redistribution policies. Results show that, when saving propensities are considered, the models approach empirical distributions of wealth quite well; the effect of redistribution better captures specific features of the distributions, which earlier models failed to do. Moreover, the models still preserve the exponential decay observed in empirical income distributions reported by tax data and surveys.
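
    A toy kinetic-exchange simulation with a uniform saving propensity, in the spirit of the models discussed above (the exchange rule, population size, and propensity value are illustrative assumptions, not the thesis implementation):

      # Toy sketch: pairwise money exchange with saving propensity lam; the non-saved
      # part of the combined wealth is split randomly between the two agents.
      import numpy as np

      rng = np.random.default_rng(6)
      N, steps, lam = 1000, 200_000, 0.5        # agents, trades, saving propensity (assumed)
      w = np.ones(N)                             # start with equal wealth

      for _ in range(steps):
          i, j = rng.integers(0, N, 2)
          if i == j:
              continue
          eps = rng.random()
          pool = (1 - lam) * (w[i] + w[j])       # redistributable part of the pair's wealth
          wi_new = lam * w[i] + eps * pool
          w[j] = lam * w[j] + (1 - eps) * pool
          w[i] = wi_new

      print("mean wealth:", w.mean().round(3),
            " coefficient of variation:", (w.std() / w.mean()).round(3))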
