Sample records for time dependent distribution

  1. Transit-time and age distributions for nonlinear time-dependent compartmental systems.

    PubMed

    Metzler, Holger; Müller, Markus; Sierra, Carlos A

    2018-02-06

    Many processes in nature are modeled using compartmental systems (reservoir/pool/box systems). Usually, they are expressed as a set of first-order differential equations describing the transfer of matter across a network of compartments. The concepts of age of matter in compartments and the time required for particles to transit the system are important diagnostics of these models with applications to a wide range of scientific questions. Until now, explicit formulas for transit-time and age distributions of nonlinear time-dependent compartmental systems were not available. We compute densities for these types of systems under the assumption of well-mixed compartments. Assuming that a solution of the nonlinear system is available at least numerically, we show how to construct a linear time-dependent system with the same solution trajectory. We demonstrate how to exploit this solution to compute transit-time and age distributions in dependence on given start values and initial age distributions. Furthermore, we derive equations for the time evolution of quantiles and moments of the age distributions. Our results generalize available density formulas for the linear time-independent case and mean-age formulas for the linear time-dependent case. As an example, we apply our formulas to a nonlinear and a linear version of a simple global carbon cycle model driven by a time-dependent input signal which represents fossil fuel additions. We derive time-dependent age distributions for all compartments and calculate the time it takes to remove fossil carbon in a business-as-usual scenario.
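
    The paper's central objects reduce, in the linear time-independent special case it generalizes, to closed matrix-exponential formulas. The sketch below (Python) evaluates only that special case, for a hypothetical two-pool compartmental matrix B and input vector u; it illustrates the transit-time density f_T(t) = -1^T B e^{tB} beta with beta = u/||u||, not the authors' nonlinear, time-dependent construction.

      import numpy as np
      from scipy.linalg import expm

      # Hypothetical 2-pool compartmental system dx/dt = B x + u (illustrative numbers only).
      B = np.array([[-1.0, 0.5],
                    [ 0.3, -0.8]])
      u = np.array([1.0, 0.0])          # all external input enters pool 1
      beta = u / u.sum()                # distribution of the input over compartments

      def transit_time_density(t):
          """f_T(t) = -1^T B exp(tB) beta, the linear autonomous special case."""
          return float(-np.ones(2) @ B @ expm(t * B) @ beta)

      ts = np.linspace(0.0, 20.0, 400)
      f = np.array([transit_time_density(t) for t in ts])
      mean_transit = -np.ones(2) @ np.linalg.solve(B, beta)   # mean transit time: -1^T B^{-1} beta
      print("density integrates to ~", np.trapz(f, ts), "; mean transit time =", mean_transit)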

  2. Are seismic waiting time distributions universal?

    NASA Astrophysics Data System (ADS)

    Davidsen, Jörn; Goltz, Christian

    2004-11-01

    We show that seismic waiting time distributions in California and Iceland have many features in common as, for example, a power-law decay with exponent α ~ 1.1 for intermediate and with exponent γ ~ 0.6 for short waiting times. While the transition point between these two regimes scales proportionally with the size of the considered area, the full distribution is not universal and depends in a non-trivial way on the geological area under consideration and its size. This is due to the spatial distribution of epicenters which does not form a simple mono-fractal. Yet, the dependence of the waiting time distributions on the threshold magnitude seems to be universal.

  3. Angular distribution of scission neutrons studied with time-dependent Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Wada, Takahiro; Asano, Tomomasa; Carjan, Nicolae

    2018-03-01

    We investigate the angular distribution of scission neutrons, taking into account the effects of the fission fragments. The time evolution of the scission-neutron wave function is obtained by integrating the time-dependent Schrödinger equation numerically. The effects of the fission fragments are taken into account by means of optical potentials. The angular distribution is strongly modified by the presence of the fragments. In the case of asymmetric fission, the heavy fragment is found to have the stronger effect. The dependence on the initial distribution and on the properties of the fission fragments is discussed. We also discuss the treatment of the boundary to avoid artificial reflections.

  4. An EOQ Model with Two-Parameter Weibull Distribution Deterioration and Price-Dependent Demand

    ERIC Educational Resources Information Center

    Mukhopadhyay, Sushanta; Mukherjee, R. N.; Chaudhuri, K. S.

    2005-01-01

    An inventory replenishment policy is developed for a deteriorating item and price-dependent demand. The rate of deterioration is taken to be time-proportional and the time to deterioration is assumed to follow a two-parameter Weibull distribution. A power law form of the price dependence of demand is considered. The model is solved analytically…
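
    A minimal numerical sketch of the inventory dynamics this class of models is built on, under assumed parameter values (this is not the authors' closed-form solution): the deterioration rate is the two-parameter Weibull hazard theta(t) = a*b*t**(b-1) (time-proportional for b = 2), demand is a power law in price, D(p) = k*p**(-e), and on-hand inventory obeys dI/dt = -D(p) - theta(t)*I(t).

      import numpy as np
      from scipy.integrate import solve_ivp

      a, b = 0.02, 2.0          # Weibull scale/shape of the deterioration hazard (hypothetical)
      k, e = 100.0, 1.5         # power-law demand D(p) = k * p**(-e) (hypothetical)
      p, T = 4.0, 1.0           # selling price and replenishment cycle length

      D = k * p**(-e)                          # demand rate at price p
      theta = lambda t: a * b * t**(b - 1.0)   # time-proportional deterioration rate (b = 2)

      def rhs(t, I):
          # inventory depleted by demand and by deterioration of the stock on hand
          return -D - theta(t) * I

      # Integrate backwards from the cycle end, where I(T) = 0, to get the order quantity I(0).
      sol = solve_ivp(rhs, [T, 0.0], [0.0], max_step=1e-3)
      print("required order quantity I(0) ≈", sol.y[0, -1])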

  5. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
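
    A toy version of the windowing step described above, with synthetic data in place of the fan or FX measurements (the window size and the lognormal spread of local variances are assumptions): estimate local variances on short windows, standardize locally, and compare tail behavior before and after.

      import numpy as np

      rng = np.random.default_rng(0)
      n, win = 200_000, 200                              # series length and local window size
      sig = np.exp(0.6 * rng.normal(size=n // win))      # one (lognormal) scale per window
      x = rng.normal(size=n) * np.repeat(sig, win)       # locally Gaussian, nonstationary variance

      local_var = x.reshape(-1, win).var(axis=1)         # sample variance in each window
      z = x / np.repeat(np.sqrt(local_var), win)         # locally standardized series

      kurt = lambda y: ((y - y.mean())**4).mean() / y.var()**2
      print("kurtosis of raw series         :", round(kurt(x), 2))   # heavy-tailed (compounded)
      print("kurtosis of standardized series:", round(kurt(z), 2))   # close to the Gaussian value 3
      # The long-horizon statistics of x are the local Gaussian compounded with the
      # distribution of local_var, which is what the compounding approach parametrizes.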

  6. Compounding approach for univariate time series with nonstationary variances.

    PubMed

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.

  7. Metastable Distributions of Markov Chains with Rare Transitions

    NASA Astrophysics Data System (ADS)

    Freidlin, M.; Koralov, L.

    2017-06-01

    In this paper we consider Markov chains $X^\varepsilon_t$ with transition rates that depend on a small parameter $\varepsilon$. We are interested in the long-time behavior of $X^\varepsilon_t$ at various $\varepsilon$-dependent time scales $t = t(\varepsilon)$. The asymptotic behavior depends on how the point $(1/\varepsilon, t(\varepsilon))$ approaches infinity. We introduce a general notion of complete asymptotic regularity (a certain asymptotic relation between the ratios of transition rates), which ensures the existence of the metastable distribution for each initial point and a given time scale $t(\varepsilon)$. The technique of i-graphs allows one to describe the metastable distribution explicitly. The result may be viewed as a generalization of the ergodic theorem to the case of parameter-dependent Markov chains.

  8. The precise time-dependent solution of the Fokker–Planck equation with anomalous diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Ran; Du, Jiulin, E-mail: jiulindu@aliyun.com

    2015-08-15

    We study the time behavior of the Fokker–Planck equation in Zwanzig's rule (the backward Itô rule), based on the Langevin equation of Brownian motion with anomalous diffusion in a complex medium. The diffusion coefficient is a function in momentum space and follows a generalized fluctuation–dissipation relation. We obtain the precise time-dependent analytical solution of the Fokker–Planck equation, and at long times the solution approaches a stationary power-law distribution in nonextensive statistics. As a test, we have numerically demonstrated the accuracy and validity of the time-dependent solution. Highlights: • The precise time-dependent solution of the Fokker–Planck equation with anomalous diffusion is found. • The anomalous diffusion satisfies a generalized fluctuation–dissipation relation. • At long times the time-dependent solution approaches a power-law distribution in nonextensive statistics. • Numerically we have demonstrated the accuracy and validity of the time-dependent solution.

  9. Deformation dependence of proton decay rates and angular distributions in a time-dependent approach

    NASA Astrophysics Data System (ADS)

    Carjan, N.; Talou, P.; Strottman, D.

    1998-12-01

    A new, time-dependent approach to proton decay from axially symmetric deformed nuclei is presented. The two-dimensional time-dependent Schrödinger equation for the interaction between the emitted proton and the rest of the nucleus is solved numerically for well-defined initial quasi-stationary proton states. Applied to hypothetical proton emission from excited states in deformed nuclei of 208Pb, this approach shows that the problem cannot be reduced to one dimension. There is in general more than one direction of emission, with wide distributions around them, determined mainly by the quantum numbers of the initial wave function rather than by the potential landscape. The distribution of the "residual" angular momentum and its variation in time play a major role in determining the decay rate. In a couple of cases, no exponential decay was found during the calculated time evolution (2×10^-21 s), although more than half of the wave function escaped during that time.

  10. Generalized Success-Breeds-Success Principle Leading to Time-Dependent Informetric Distributions.

    ERIC Educational Resources Information Center

    Egghe, Leo; Rousseau, Ronald

    1995-01-01

    Reformulates the success-breeds-success (SBS) principle in informetrics in order to generate a general theory of source-item relationships. Topics include a time-dependent probability, a new model for the expected probability that is compared with the SBS principle with exact combinatorial calculations, classical frequency distributions, and…

  11. Time dependent temperature distribution in pulsed Ti:sapphire lasers

    NASA Technical Reports Server (NTRS)

    Buoncristiani, A. Martin; Byvik, Charles E.; Farrukh, Usamah O.

    1988-01-01

    An expression is derived for the time dependent temperature distribution in a finite solid state laser rod for an end-pumped beam of arbitrary shape. The specific case of end pumping by circular (constant) or Gaussian beam is described. The temperature profile for a single pump pulse and for repetitive pulse operation is discussed. The particular case of the temperature distribution in a pulsed titanium:sapphire rod is considered.

  12. Anomalous transport in fluid field with random waiting time depending on the preceding jump length

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Li, Guo-Hua

    2016-11-01

    Anomalous (or non-Fickian) transport behaviors of particles have been widely observed in complex porous media. To capture the energy-dependent characteristics of non-Fickian transport of a particle in flow fields, in the present paper a generalized continuous time random walk model whose waiting time probability distribution depends on the preceding jump length is introduced, and the corresponding master equation in Fourier-Laplace space for the distribution of particles is derived. As examples, two generalized advection-dispersion equations for the Gaussian distribution and the Lévy flight, with the probability density function of waiting time being quadratically dependent on the preceding jump length, are obtained by applying the derived master equation. Project supported by the Foundation for Young Key Teachers of Chengdu University of Technology, China (Grant No. KYGG201414) and the Opening Foundation of Geomathematics Key Laboratory of Sichuan Province, China (Grant No. scsxdz2013009).
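
    An illustrative Monte Carlo of such a coupled walk (the functional forms, parameter values, and the simple advection term are assumptions for illustration, not taken from the paper): each waiting time is exponential with a mean that grows quadratically with the length of the preceding jump.

      import numpy as np

      rng = np.random.default_rng(1)
      n_steps, n_walkers = 2_000, 5_000
      tau0, c, v = 1.0, 4.0, 0.1            # base waiting time, coupling strength, drift velocity

      x = np.zeros(n_walkers)               # positions
      t = np.zeros(n_walkers)               # elapsed times
      prev_jump = np.zeros(n_walkers)       # length of the preceding jump
      for _ in range(n_steps):
          wait = rng.exponential(tau0 + c * prev_jump**2)   # waiting time depends on preceding jump
          t += wait
          x += v * wait                                     # advection by the flow while waiting
          jump = rng.normal(scale=1.0, size=n_walkers)
          x += jump
          prev_jump = np.abs(jump)

      print("mean elapsed time:", t.mean())
      print("mean displacement:", x.mean(), " displacement variance:", x.var())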

  13. Extreme Unconditional Dependence Vs. Multivariate GARCH Effect in the Analysis of Dependence Between High Losses on Polish and German Stock Indexes

    NASA Astrophysics Data System (ADS)

    Rokita, Pawel

    Classical portfolio diversification methods do not take account of any dependence between extreme returns (losses). Many researchers, however, provide empirical evidence for various assets that extreme losses co-occur. If the co-occurrence is frequent enough to be statistically significant, it may seriously influence portfolio risk. Such effects may result from a few different properties of financial time series, for instance: (1) extreme dependence in a (long-term) unconditional distribution, (2) extreme dependence in subsequent conditional distributions, (3) time-varying conditional covariance, (4) time-varying (long-term) unconditional covariance, (5) market contagion. Moreover, a mix of these properties may be present in return time series. Modeling each of them requires different approaches. It seems reasonable to investigate whether distinguishing between the properties is highly significant for portfolio risk measurement. If it is, identifying the effect responsible for high-loss co-occurrence would be of great importance. If it is not, the best solution would be to select the easiest-to-apply model. This article concentrates on two of the aforementioned properties: extreme dependence (in a long-term unconditional distribution) and time-varying conditional covariance.

  14. On absence of steady state in the Bouchaud-Mézard network model

    NASA Astrophysics Data System (ADS)

    Liu, Zhiyuan; Serota, R. A.

    2018-02-01

    In the limit of an infinite number of nodes (agents), the Itô-reduced Bouchaud-Mézard network model of economic exchange has a time-independent mean and a steady-state inverse gamma distribution. We show that for a finite number of nodes the mean is actually distributed as a time-dependent lognormal and the inverse gamma distribution is only quasi-stationary, with a time-dependent scale parameter.
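
    A finite-N Euler-Maruyama sketch of the fully connected (mean-field) Itô form of the model, dw_i = J*(mean(w) - w_i) dt + sqrt(2)*sigma*w_i dW_i; the parameter values, topology, and discretization are illustrative assumptions, used only to show the wandering finite-N mean the abstract refers to.

      import numpy as np

      rng = np.random.default_rng(2)
      N, J, sigma = 500, 1.0, 0.3
      dt, n_steps = 1e-3, 100_000

      w = np.ones(N)
      sampled_means = []
      for step in range(n_steps):
          dW = rng.normal(scale=np.sqrt(dt), size=N)
          w = w + J * (w.mean() - w) * dt + np.sqrt(2.0) * sigma * w * dW
          w = np.maximum(w, 1e-12)          # guard against negative wealth from the discretization
          if step % 1000 == 0:
              sampled_means.append(w.mean())

      print("sampled mean wealth, start vs end:", sampled_means[0], sampled_means[-1])
      # For finite N the sample mean itself fluctuates (approximately lognormally), so the
      # distribution of w / mean(w) is only quasi-stationary, consistent with the abstract.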

  15. A statistical analysis of the daily streamflow hydrograph

    NASA Astrophysics Data System (ADS)

    Kavvas, M. L.; Delleur, J. W.

    1984-03-01

    In this study a periodic statistical analysis of daily streamflow data in Indiana, U.S.A., was performed to gain some new insight into the stochastic structure which describes the daily streamflow process. This analysis was performed by the periodic mean and covariance functions of the daily streamflows, by the time- and peak-discharge-dependent recession limb of the daily streamflow hydrograph, by the time- and discharge-exceedance-level (DEL)-dependent probability distribution of the hydrograph peak interarrival time, and by the time-dependent probability distribution of the time to peak discharge. Some new statistical estimators were developed and used in this study. In general features, this study has shown that: (a) the persistence properties of daily flows depend on the storage state of the basin at the specified time origin of the flow process; (b) the daily streamflow process is time irreversible; (c) the probability distribution of the daily hydrograph peak interarrival time depends both on the occurrence time of the peak from which the interarrival time originates and on the discharge exceedance level; and (d) if the daily streamflow process is modeled as the release from a linear watershed storage, this release should depend on the state of the storage and on the time of the release, as the persistence properties and the recession limb decay rates were observed to change with the state of the watershed storage and time. Therefore, a time-varying reservoir system needs to be considered if the daily streamflow process is to be modeled as the release from a linear watershed storage.

  16. Delay-distribution-dependent H∞ state estimation for delayed neural networks with (x,v)-dependent noises and fading channels.

    PubMed

    Sheng, Li; Wang, Zidong; Tian, Engang; Alsaadi, Fuad E

    2016-12-01

    This paper deals with the H∞ state estimation problem for a class of discrete-time neural networks with stochastic delays subject to state- and disturbance-dependent noises (also called (x,v)-dependent noises) and fading channels. The time-varying stochastic delay takes values on certain intervals with known probability distributions. The system measurement is transmitted through fading channels described by the Rice fading model. The aim of the addressed problem is to design a state estimator such that the estimation performance is guaranteed in the mean-square sense against admissible stochastic time-delays, stochastic noises as well as stochastic fading signals. By employing the stochastic analysis approach combined with the Kronecker product, several delay-distribution-dependent conditions are derived to ensure that the error dynamics of the neuron states is stochastically stable with prescribed H∞ performance. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Space-Time Dependent Transport, Activation, and Dose Rates for Radioactivated Fluids.

    NASA Astrophysics Data System (ADS)

    Gavazza, Sergio

    Two methods are developed to calculate the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates generated from radioactivated fluids flowing through pipes. The work couples space- and time-dependent phenomena, treated as only space- or time-dependent in the open literature. The transport and activation methodology (TAM) is used to numerically calculate space- and time-dependent transport and activation of radionuclides in fluids flowing through pipes exposed to radiation fields, and the volumetric radioactive sources created by radionuclide motions. The computer program Radionuclide Activation and Transport in Pipe (RNATPA1) performs the numerical calculations required in TAM. The gamma ray dose methodology (GAM) is used to numerically calculate space- and time-dependent gamma ray dose equivalent rates from the volumetric radioactive sources determined by TAM. The computer program Gamma Ray Dose Equivalent Rate (GRDOSER) performs the numerical calculations required in GAM. The scope of conditions considered by TAM and GAM herein includes (a) laminar flow in straight pipe, (b) recirculating flow schemes, (c) time-independent fluid velocity distributions, (d) space-dependent monoenergetic neutron flux distribution, (e) the space- and time-dependent activation process of a single parent nuclide and the transport and decay of a single daughter radionuclide, and (f) assessment of space- and time-dependent gamma ray dose rates, outside the pipe, generated by the space- and time-dependent source term distributions inside of it. The methodologies, however, can be easily extended to include all the situations of interest for solving the phenomena addressed in this dissertation. A comparison is made between results obtained by the described calculational procedures and analytical expressions. The physics of the problems addressed by the new technique and the increased accuracy versus non-space- and time-dependent methods are presented. The value of the methods is also discussed. It has been demonstrated that TAM and GAM can be used to enhance the understanding of the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates related to radioactivated fluids flowing through pipes.

  18. Stabilization and control of distributed systems with time-dependent spatial domains

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1990-01-01

    This paper considers the problem of the stabilization and control of distributed systems with time-dependent spatial domains. The evolution of the spatial domains with time is described by a finite-dimensional system of ordinary differential equations, while the distributed systems are described by first-order or second-order linear evolution equations defined on appropriate Hilbert spaces. First, results pertaining to the existence and uniqueness of solutions of the system equations are presented. Then, various optimal control and stabilization problems are considered. The paper concludes with some examples which illustrate the application of the main results.

  19. Time-dependent solutions for a stochastic model of gene expression with molecule production in the form of a compound Poisson process.

    PubMed

    Jędrak, Jakub; Ochab-Marcinek, Anna

    2016-09-01

    We study a stochastic model of gene expression, in which protein production has a form of random bursts whose size distribution is arbitrary, whereas protein decay is a first-order reaction. We find exact analytical expressions for the time evolution of the cumulant-generating function for the most general case when both the burst size probability distribution and the model parameters depend on time in an arbitrary (e.g., oscillatory) manner, and for arbitrary initial conditions. We show that in the case of periodic external activation and constant protein degradation rate, the response of the gene is analogous to the resistor-capacitor low-pass filter, where slow oscillations of the external driving have a greater effect on gene expression than the fast ones. We also demonstrate that the nth cumulant of the protein number distribution depends on the nth moment of the burst size distribution. We use these results to show that different measures of noise (coefficient of variation, Fano factor, fractional change of variance) may vary in time in a different manner. Therefore, any biological hypothesis of evolutionary optimization based on the nonmonotonic dependence of a chosen measure of noise on time must justify why it assumes that biological evolution quantifies noise in that particular way. Finally, we show that not only for exponentially distributed burst sizes but also for a wider class of burst size distributions (e.g., Dirac delta and gamma) the control of gene expression level by burst frequency modulation gives rise to proportional scaling of variance of the protein number distribution to its mean, whereas the control by amplitude modulation implies proportionality of protein number variance to the mean squared.
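
    A Gillespie-style simulation of the bursty production / first-order decay setup described above; the rates and the choice of a gamma burst-size distribution are assumptions used only to illustrate how burst-size moments show up in the protein-number noise.

      import numpy as np

      rng = np.random.default_rng(3)
      k_burst, gamma_decay = 2.0, 1.0           # burst frequency and protein decay rate (assumed)
      burst_shape, burst_scale = 2.0, 5.0       # gamma-distributed burst sizes (assumed)

      def simulate(t_end=50.0):
          t, n = 0.0, 0
          while True:
              total_rate = k_burst + gamma_decay * n
              t += rng.exponential(1.0 / total_rate)
              if t > t_end:
                  return n
              if rng.random() < k_burst / total_rate:
                  n += int(round(rng.gamma(burst_shape, burst_scale)))   # production burst
              else:
                  n -= 1                                                 # single-protein decay

      samples = np.array([simulate() for _ in range(2000)])
      print("mean protein number:", samples.mean())
      print("Fano factor (variance/mean):", samples.var() / samples.mean())
      # The Fano factor is well above 1 and is set by the second moment of the burst-size
      # distribution, in line with the cumulant result quoted in the abstract.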

  20. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy.

    PubMed

    Shizgal, Bernie D

    2018-05-01

    This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as an empirical fitting function that describes well the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].
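
    For reference, a small numerical illustration of the steady-state form itself (not of the paper's Fokker-Planck solver): the isotropic Kappa speed distribution, normalized numerically, compared with the Maxwellian it approaches as kappa grows; theta is a thermal-speed scale and all values are illustrative.

      import numpy as np

      def kappa_pdf(v, kappa, theta):
          """Isotropic kappa distribution of speed v (numerically normalized)."""
          shape = (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))
          w = 4.0 * np.pi * v**2 * shape              # speed-space Jacobian included
          return w / np.trapz(w, v)

      v, theta = np.linspace(0.0, 10.0, 4000), 1.0
      maxwell = 4.0 * np.pi * v**2 * np.exp(-v**2 / theta**2)
      maxwell /= np.trapz(maxwell, v)
      tail = v > 3.0 * theta
      for kappa in (2.0, 5.0, 50.0):
          pdf = kappa_pdf(v, kappa, theta)
          print(f"kappa = {kappa:5.1f}: P(v > 3*theta) = {np.trapz(pdf[tail], v[tail]):.4f}"
                f"   (Maxwellian: {np.trapz(maxwell[tail], v[tail]):.4f})")
      # Small kappa gives the enhanced power-law tail; large kappa collapses onto the Maxwellian.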

  1. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy

    NASA Astrophysics Data System (ADS)

    Shizgal, Bernie D.

    2018-05-01

    This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as an empirical fitting function that well describe the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].

  2. Linking age, survival, and transit time distributions

    NASA Astrophysics Data System (ADS)

    Calabrese, Salvatore; Porporato, Amilcare

    2015-10-01

    Although the concepts of age, survival, and transit time have been widely used in many fields, including population dynamics, chemical engineering, and hydrology, a comprehensive mathematical framework is still missing. Here we discuss several relationships among these quantities by starting from the evolution equation for the joint distribution of age and survival, from which the equations for age and survival time readily follow. It also becomes apparent how the statistical dependence between age and survival is directly related to either the age dependence of the loss function or the survival-time dependence of the input function. The solution of the joint distribution equation also allows us to obtain the relationships between the age at exit (or death) and the survival time at input (or birth), as well as to stress the symmetries of the various distributions under time reversal. The transit time is then obtained as the sum of the age and survival time, and its properties are discussed along with the general relationships between their mean values. The special case of steady state is analyzed in detail. Some examples, inspired by hydrologic applications, are presented to illustrate the theory with specific results.
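
    The simplest concrete instance of these relations is a single well-mixed reservoir at steady state, which the following Monte Carlo sketch checks numerically (the rates and horizon are arbitrary illustrative values, and the well-mixed assumption is ours, chosen because the results are known in closed form).

      import numpy as np

      rng = np.random.default_rng(4)
      lam, k, T = 50.0, 0.5, 400.0               # input rate, output rate, simulated horizon

      n_in = rng.poisson(lam * T)
      arrival = rng.uniform(0.0, T, n_in)        # Poisson arrivals over [0, T]
      transit = rng.exponential(1.0 / k, n_in)   # residence (= transit) time of each particle
      departure = arrival + transit

      t_obs = T / 2.0                            # observe the reservoir mid-run (near steady state)
      stored = (arrival <= t_obs) & (departure > t_obs)
      age = t_obs - arrival[stored]              # age of the particles currently in store
      survival = departure[stored] - t_obs       # remaining survival time of the same particles

      print("mean transit time over the input cohort  :", transit.mean())               # ~ 1/k
      print("mean age / mean survival in storage      :", age.mean(), survival.mean())  # both ~ 1/k
      print("mean (age + survival) of stored particles:", (age + survival).mean())      # ~ 2/k
      # Age and survival of stored particles share the same (exponential) distribution, reflecting
      # the time-reversal symmetry, while their sum is length-biased relative to the cohort transit time.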

  3. Statistical electric field and switching time distributions in PZT 1Nb2Sr ceramics: Crystal- and microstructure effects

    NASA Astrophysics Data System (ADS)

    Zhukov, Sergey; Kungl, Hans; Genenko, Yuri A.; von Seggern, Heinz

    2014-01-01

    Dispersive polarization response of ferroelectric PZT ceramics is analyzed assuming the inhomogeneous field mechanism of polarization switching. In terms of this model, the local polarization switching proceeds according to the Kolmogorov-Avrami-Ishibashi scenario with the switching time determined by the local electric field. As a result, the total polarization reversal is dominated by the statistical distribution of the local field magnitudes. Microscopic parameters of this model (the high-field switching time and the activation field) as well as the statistical field and consequent switching time distributions due to disorder at a mesoscopic scale can be directly determined from a set of experiments measuring the time dependence of the total polarization switching, when applying electric fields of different magnitudes. PZT 1Nb2Sr ceramics with Zr/Ti ratios 51.5/48.5, 52.25/47.75, and 60/40 with four different grain sizes each were analyzed following this approach. Pronounced differences of field and switching time distributions were found depending on the Zr/Ti ratios. Varying grain size also affects polarization reversal parameters, but in another way. The field distributions remain almost constant with grain size whereas switching times and activation field tend to decrease with increasing grain size. The quantitative changes of the latter parameters with grain size are very different depending on composition. The origin of the effects on the field and switching time distributions are related to differences in structural and microstructural characteristics of the materials and are discussed with respect to the hysteresis loops observed under bipolar electrical cycling.
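
    A sketch of the inhomogeneous-field picture summarized above: local Kolmogorov-Avrami-Ishibashi switching with a Merz-law switching time, averaged over an assumed Gaussian distribution of local field magnitudes. All numerical values are illustrative placeholders, not the fitted PZT parameters.

      import numpy as np

      tau0, E_act, n_kai = 1e-8, 40.0, 2.0     # high-field switching time [s], activation field, KAI exponent
      E_mean, E_sigma = 2.0, 0.4               # assumed Gaussian local-field statistics [kV/mm]

      E = np.linspace(0.5, 4.0, 800)                          # grid of local field magnitudes
      g = np.exp(-(E - E_mean)**2 / (2.0 * E_sigma**2))
      g /= np.trapz(g, E)                                     # normalized field distribution

      def switched_fraction(t):
          tau = tau0 * np.exp(E_act / E)                      # Merz law for the local switching time
          local = 1.0 - np.exp(-(t / tau) ** n_kai)           # local KAI switching kinetics
          return np.trapz(g * local, E)                       # average over the local-field statistics

      for t in (1e-6, 1e-4, 1e-2, 1.0):
          print(f"t = {t:8.1e} s   switched fraction = {switched_fraction(t):.3f}")
      # The spread of g(E) smears the sharp local KAI step into the dispersive response
      # measured for the ceramics; narrowing g(E) sharpens the total switching curve.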

  4. Energy-dependent angular shifts in the photoelectron momentum distribution for atoms in elliptically polarized laser pulses

    NASA Astrophysics Data System (ADS)

    Xie, Hui; Li, Min; Luo, Siqiang; Li, Yang; Zhou, Yueming; Cao, Wei; Lu, Peixiang

    2017-12-01

    We measure the photoelectron momentum distributions from atoms ionized by strong elliptically polarized laser fields at wavelengths of 400 and 800 nm, respectively. The momentum distributions show distinct angular shifts, which depend sensitively on the electron energy. We find that the deflection angle with respect to the major axis of the laser ellipse decreases with increasing electron energy for large ellipticities. This energy-dependent angular shift is well reproduced by both numerical solutions of the time-dependent Schrödinger equation and the classical-trajectory Monte Carlo model. We show that the ionization time delays among the electrons with different energies are responsible for the energy-dependent angular shifts. On the other hand, for small ellipticities, we find that the deflection angle increases with increasing electron energy, which might be caused by electron rescattering in the elliptically polarized fields.

  5. Numerically exact full counting statistics of the nonequilibrium Anderson impurity model

    NASA Astrophysics Data System (ADS)

    Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; Cohen, Guy

    2018-03-01

    The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n-electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.

  6. Numerically exact full counting statistics of the nonequilibrium Anderson impurity model

    DOE PAGES

    Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; ...

    2018-03-06

    The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n-electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.

  7. Numerically exact full counting statistics of the nonequilibrium Anderson impurity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ridley, Michael; Singh, Viveka N.; Gull, Emanuel

    The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n-electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.

  8. Sloppy-slotted ALOHA

    NASA Technical Reports Server (NTRS)

    Crozier, Stewart N.

    1990-01-01

    Random access signaling, which allows slotted packets to spill over into adjacent slots, is investigated. It is shown that sloppy-slotted ALOHA can always provide higher throughput than conventional slotted ALOHA. The degree of improvement depends on the timing error distribution. Throughput performance is presented for Gaussian timing error distributions, modified to include timing error corrections. A general channel capacity lower bound, independent of the specific timing error distribution, is also presented.

  9. Regression analysis using dependent Polya trees.

    PubMed

    Schörgendorfer, Angela; Branscum, Adam J

    2013-11-30

    Many commonly used models for linear regression analysis force overly simplistic shape and scale constraints on the residual structure of data. We propose a semiparametric Bayesian model for regression analysis that produces data-driven inference by using a new type of dependent Polya tree prior to model arbitrary residual distributions that are allowed to evolve across increasing levels of an ordinal covariate (e.g., time, in repeated measurement studies). By modeling residual distributions at consecutive covariate levels or time points using separate, but dependent Polya tree priors, distributional information is pooled while allowing for broad pliability to accommodate many types of changing residual distributions. We can use the proposed dependent residual structure in a wide range of regression settings, including fixed-effects and mixed-effects linear and nonlinear models for cross-sectional, prospective, and repeated measurement data. A simulation study illustrates the flexibility of our novel semiparametric regression model to accurately capture evolving residual distributions. In an application to immune development data on immunoglobulin G antibodies in children, our new model outperforms several contemporary semiparametric regression models based on a predictive model selection criterion. Copyright © 2013 John Wiley & Sons, Ltd.

  10. Intertime jump statistics of state-dependent Poisson processes.

    PubMed

    Daly, Edoardo; Porporato, Amilcare

    2007-01-01

    A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated to the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.

  11. Time-dependent shock acceleration of particles. Effect of the time-dependent injection, with application to supernova remnants

    NASA Astrophysics Data System (ADS)

    Petruk, O.; Kopytko, B.

    2016-11-01

    Three approaches are considered to solve the equation which describes the time-dependent diffusive shock acceleration of test particles at non-relativistic shocks. First, the solution of Drury for the particle distribution function at the shock is generalized to any relation between the acceleration time-scales upstream and downstream and to a time-dependent injection efficiency. Three alternative solutions for the spatial dependence of the distribution function are derived. Then, two other approaches to solve the time-dependent equation are presented, one of which does not require the Laplace transform. Finally, our more general solution is discussed, with particular attention to time-dependent injection in supernova remnants. It is shown that, compared to the case where the upstream acceleration time-scale dominates, the maximum momentum of accelerated particles shifts towards smaller momenta as the downstream acceleration time-scale increases. The time-dependent injection affects the shape of the particle spectrum. In particular, (I) the power-law index is not solely determined by the shock compression, in contrast to the stationary solution; (II) the larger the injection efficiency during the first decades after the supernova explosion, the harder the particle spectrum around the high-energy cutoff at later times. This is important, in particular, for the interpretation of radio and gamma-ray observations of supernova remnants, as demonstrated on a number of examples.

  12. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
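
    The arithmetic behind the quoted 10-to-120-hour figure is simply the number of sequential power flows; the per-solve times below are assumed values used only to show the scaling.

      # Rough scaling of a yearlong, 1-second-resolution QSTS run (assumed per-solve times).
      seconds_per_year = 365 * 24 * 3600
      print("power flows to solve:", seconds_per_year)        # 31,536,000 sequential solves

      for ms_per_solve in (1.0, 5.0, 15.0):                   # assumed time per power flow [ms]
          hours = seconds_per_year * ms_per_solve / 1000.0 / 3600.0
          print(f"at {ms_per_solve:4.1f} ms per power flow -> {hours:6.1f} hours of computation")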

  13. Birth-death models and coalescent point processes: the shape and probability of reconstructed phylogenies.

    PubMed

    Lambert, Amaury; Stadler, Tanja

    2013-12-01

    Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. A unified Bayesian semiparametric approach to assess discrimination ability in survival analysis

    PubMed Central

    Zhao, Lili; Feng, Dai; Chen, Guoan; Taylor, Jeremy M.G.

    2015-01-01

    The discriminatory ability of a marker for censored survival data is routinely assessed by the time-dependent ROC curve and the c-index. The time-dependent ROC curve evaluates the ability of a biomarker to predict whether a patient lives past a particular time t. The c-index measures the global concordance of the marker and the survival time regardless of the time point. We propose a Bayesian semiparametric approach to estimate these two measures. The proposed estimators are based on the conditional distribution of the survival time given the biomarker and the empirical biomarker distribution. The conditional distribution is estimated by a linear dependent Dirichlet process mixture model. The resulting ROC curve is smooth as it is estimated by a mixture of parametric functions. The proposed c-index estimator is shown to be more efficient than the commonly used Harrell's c-index since it uses all pairs of data rather than only informative pairs. The proposed estimators are evaluated through simulations and illustrated using a lung cancer dataset. PMID:26676324
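
    For context, a small reference implementation of the conventional Harrell c-index that the proposed Bayesian estimator is compared against, applied to synthetic right-censored data (the data-generating model here is an assumption for illustration).

      import numpy as np

      rng = np.random.default_rng(5)
      n = 400
      marker = rng.normal(size=n)
      true_time = rng.exponential(np.exp(-marker))      # higher marker -> higher risk, shorter survival
      cens_time = rng.exponential(2.0, size=n)
      time = np.minimum(true_time, cens_time)
      event = true_time <= cens_time                    # True if the death is observed

      def harrell_c(time, event, marker):
          concordant, usable = 0.0, 0
          for i in range(len(time)):
              if not event[i]:
                  continue                              # only observed events anchor usable pairs
              for j in range(len(time)):
                  if time[j] > time[i]:                 # j outlived the event at i
                      usable += 1
                      if marker[i] > marker[j]:
                          concordant += 1.0             # shorter survivor carries the higher risk marker
                      elif marker[i] == marker[j]:
                          concordant += 0.5
          return concordant / usable

      print("Harrell's c-index on the synthetic data:", round(harrell_c(time, event, marker), 3))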

  15. Time-dependent quantum oscillator as attenuator and amplifier: noise and statistical evolutions

    NASA Astrophysics Data System (ADS)

    Portes, D.; Rodrigues, H.; Duarte, S. B.; Baseia, B.

    2004-10-01

    We revisit the quantum oscillator, modelled as a time-dependent LC-circuit. Nonclassical properties concerned with attenuation and amplification regions are considered, as well as time evolution of quantum noise and statistics, with emphasis on revivals of the statistical distribution.

  16. A Study of Transport in the Near-Earth Plasma Sheet During A Substorm Using Time-Dependent Large Scale Kinetics

    NASA Technical Reports Server (NTRS)

    El-Alaoui, M.; Ashour-Abdalla, M.; Raeder, J.; Frank, L. A.; Paterson, W. R.

    1998-01-01

    In this study we investigate the transport of H+ ions that made up the complex ion distribution function observed by the Geotail spacecraft at 0740 UT on November 24, 1996. This ion distribution function, observed by Geotail at approximately 20 R(sub E) downtail, was used to initialize a time-dependent large-scale kinetic (LSK) calculation of the trajectories of 75,000 ions forward in time. Time-dependent magnetic and electric fields were obtained from a global magnetohydrodynamic (MHD) simulation of the magnetosphere and its interaction with the solar wind and the interplanetary magnetic field (IMF) as observed during the interval of the observation of the distribution function. Our calculations indicate that the particles observed by Geotail were scattered across the equatorial plane by the multiple interactions with the current sheet and then convected sunward. They were energized by the dawn-dusk electric field during their transport from Geotail location and ultimately were lost at the ionospheric boundary or into the magnetopause.

  17. Priority queues with bursty arrivals of incoming tasks

    NASA Astrophysics Data System (ADS)

    Masuda, N.; Kim, J. S.; Kahng, B.

    2009-03-01

    Recently increased accessibility of large-scale digital records enables one to monitor human activities such as the interevent time distributions between two consecutive visits to a web portal by a single user, two consecutive emails sent out by a user, two consecutive library loans made by a single individual, etc. Interestingly, those distributions exhibit a universal behavior, D(τ) ~ τ^(-δ), where τ is the interevent time, and δ ≃ 1 or 3/2. The universal behaviors have been modeled via the waiting-time distribution of a task in the queue operating based on priority; the waiting time follows a power-law distribution P_w(τ) ~ τ^(-α) with either α = 1 or 3/2 depending on the detail of queuing dynamics. In these models, the number of incoming tasks in a unit time interval has been assumed to follow a Poisson-type distribution. For an email system, however, the number of emails delivered to a mail box in a unit time we measured follows a power-law distribution with general exponent γ. For this case, we obtain analytically the exponent α, which is not necessarily 1 or 3/2 and takes nonuniversal values depending on γ. We develop the generating function formalism to obtain the exponent α, which is distinct from the continuous time approximation used in the previous studies.
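
    The α ≈ 1 baseline referred to above comes from the fixed-length Barabási priority queue; a quick simulation of that baseline is sketched below (the queue length, selection probability, and run length are illustrative, and the paper's generalization to power-law arrivals is not implemented here).

      import numpy as np

      rng = np.random.default_rng(6)
      L, p, n_steps = 2, 0.999, 200_000          # queue length, prob. of serving the top priority

      priority = rng.random(L)
      entered = np.zeros(L, dtype=int)
      waits = []
      for t in range(n_steps):
          if rng.random() < p:
              idx = int(np.argmax(priority))     # execute the highest-priority task
          else:
              idx = int(rng.integers(L))         # occasionally execute a random task
          waits.append(t - entered[idx])
          priority[idx] = rng.random()           # replace the executed task with a fresh one
          entered[idx] = t

      waits = np.array(waits)
      waits = waits[waits > 0]
      hist, edges = np.histogram(waits, bins=np.logspace(0, 3.5, 15), density=True)
      centers = np.sqrt(edges[1:] * edges[:-1])
      # For a tau^(-1) law, tau * P(tau) is roughly flat over the scaling range:
      print(np.round(centers * hist, 4))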

  18. Burst wait time simulation of CALIBAN reactor at delayed super-critical state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.; Authier, N.; Richard, B.

    2012-07-01

    In the past, the super-prompt-critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time-dependent evolution of the full neutron count number probability distribution. In this paper we present the point-model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time-dependent adjoint Kolmogorov master equations for the number of detections, using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and to Monte-Carlo calculations based on the algorithm presented in [7].

  19. Distributional behavior of diffusion coefficients obtained by single trajectories in annealed transit time model

    NASA Astrophysics Data System (ADS)

    Akimoto, Takuma; Yamamoto, Eiji

    2016-12-01

    Local diffusion coefficients in disordered systems such as spin glass systems and living cells are highly heterogeneous and may change over time. Such a time-dependent and spatially heterogeneous environment results in irreproducibility of single-particle-tracking measurements. Irreproducibility of time-averaged observables has been theoretically studied in the context of weak ergodicity breaking in stochastic processes. Here, we provide rigorous descriptions of equilibrium and non-equilibrium diffusion processes for the annealed transit time model, which is a heterogeneous diffusion model in living cells. We give analytical solutions for the mean square displacement (MSD) and the relative standard deviation of the time-averaged MSD for equilibrium and non-equilibrium situations. We find that the time-averaged MSD grows linearly with time and that the time-averaged diffusion coefficients are intrinsically random (irreproducible) even in the long-time measurements in non-equilibrium situations. Furthermore, the distribution of the time-averaged diffusion coefficients converges to a universal distribution in the sense that it does not depend on initial conditions. Our findings pave the way for a theoretical understanding of distributional behavior of the time-averaged diffusion coefficients in disordered systems.

  20. Generalized time evolution of the homogeneous cooling state of a granular gas with positive and negative coefficient of normal restitution

    NASA Astrophysics Data System (ADS)

    Khalil, Nagi

    2018-04-01

    The homogeneous cooling state (HCS) of a granular gas described by the inelastic Boltzmann equation is reconsidered. As usual, particles are taken as inelastic hard disks or spheres, but now the coefficient of normal restitution α is allowed to take negative values, which is a simple way of modeling more complicated inelastic interactions. The distribution function of the HCS is studied in the long-time limit, as well as at intermediate times. In the long-time limit, the relevant information about the HCS is given by a scaling distribution function, where the time dependence occurs through a dimensionless velocity c. For positive α, this scaling function remains close to the Gaussian distribution in the thermal region, its cumulants and exponential tails being well described by the first Sonine approximation. In contrast, for negative α, the distribution function becomes multimodal, with maxima at nonzero velocities, and its observable tails are algebraic. The latter is a consequence of an unbalanced relaxation–dissipation competition, and is demonstrated analytically thanks to a reduction of the Boltzmann equation to a Fokker–Planck-like equation. Finally, a generalized scaling solution to the Boltzmann equation is also found. Apart from the time dependence occurring through the dimensionless velocity, it depends on time through a new parameter β measuring the departure of the HCS from its long-time limit. It is shown that this generalized solution describes the time evolution of the HCS for almost all times. The relevance of the new scaling is also discussed.

  1. The time scale of quasifission process in reactions with heavy ions

    NASA Astrophysics Data System (ADS)

    Knyazheva, G. N.; Itkis, I. M.; Kozulin, E. M.

    2014-05-01

    The study of mass-energy distributions of binary fragments obtained in the reactions of 36S, 48Ca, 58Fe and 64Ni ions with 232Th, 238U, 244Pu and 248Cm targets at energies below and above the Coulomb barrier is presented. These data have been measured with the double-arm time-of-flight spectrometer CORSET. The mass resolution of the spectrometer for these measurements was about 3 u, which allows the features of the mass distributions to be investigated with good accuracy. The mass and total kinetic energy (TKE) of quasifission (QF) fragments have been investigated as functions of the interaction energy and compared with the characteristics of the fusion-fission process. To describe the quasifission mass distribution, a simple method has been proposed. This method is based on the driving potential of the system and a time-dependent mass drift. This procedure allows the QF time scale to be estimated from the measured mass distributions. It has been found that the QF time decreases exponentially as the reaction Coulomb factor Z1Z2 increases.

  2. Versatile time-dependent spatial distribution model of sun glint for satellite-based ocean imaging

    NASA Astrophysics Data System (ADS)

    Zhou, Guanhua; Xu, Wujian; Niu, Chunyue; Zhang, Kai; Ma, Zhongqi; Wang, Jiwen; Zhang, Yue

    2017-01-01

    We propose a versatile model to describe the time-dependent spatial distribution of sun glint areas in satellite-based wave water imaging. This model can be used to identify whether the imaging is affected by sun glint and how strong the glint is. The observing geometry is calculated using an accurate orbit prediction method. The Cox-Munk model is used to analyze the bidirectional reflectance of wave water surface under various conditions. The effects of whitecaps and the reflectance emerging from the sea water have been considered. Using the moderate resolution atmospheric transmission radiative transfer model, we are able to effectively calculate the sun glint distribution at the top of the atmosphere. By comparing the modeled data with the medium resolution imaging spectrometer image and Feng Yun 2E (FY-2E) image, we have proven that the time-dependent spatial distribution of sun glint areas can be effectively predicted. In addition, the main factors in determining sun glint distribution and the temporal variation rules of sun glint have been discussed. Our model can be used to design satellite orbits and should also be valuable in either eliminating sun glint or making use of it.

  3. Development of activity pencil beam algorithm using measured distribution data of positron emitter nuclei generated by proton irradiation of targets containing (12)C, (16)O, and (40)Ca nuclei in preparation of clinical application.

    PubMed

    Miyatake, Aya; Nishio, Teiji; Ogino, Takashi

    2011-10-01

    The purpose of this study is to develop a new calculation algorithm that satisfies the requirements for both accuracy and calculation time in a simulation of imaging of the proton-irradiated volume in a patient body in clinical proton therapy. The activity pencil beam algorithm (APB algorithm), which is a new technique that applies the pencil beam algorithm generally used for proton dose calculations in proton therapy to the calculation of activity distributions, was developed as a calculation algorithm for the activity distributions formed by positron emitter nuclei generated in target nuclear fragment reactions. In the APB algorithm, activity distributions are calculated using an activity pencil beam kernel. In addition, the activity pencil beam kernel is constructed using measured activity distributions in the depth direction and calculations in the lateral direction. The (12)C, (16)O, and (40)Ca nuclei were determined to be the major target nuclei constituting a human body that are relevant for the calculation of activity distributions. In this study, "virtual positron emitter nuclei" was defined as the integral yield of the various positron emitter nuclei generated from each target nucleus by target nuclear fragment reactions with the irradiated proton beam. Compounds rich in the target nuclei, namely polyethylene, water (including some gelatin) and calcium oxide, were irradiated using a proton beam. In addition, depth activity distributions of virtual positron emitter nuclei generated in each compound from target nuclear fragment reactions were measured using a beam ON-LINE PET system mounted on a rotating gantry port (BOLPs-RGp). The measured activity distributions depend on depth or, in other words, energy. The irradiated proton beam energies were 138, 179, and 223 MeV, and the measurement time was about 5 h, until the measured activity reached the background level. Furthermore, the activity pencil beam data were constructed using the activity pencil beam kernel, which was composed of the measured depth data and the lateral data including multiple Coulomb scattering approximated by a Gaussian function, and were used for calculating activity distributions. Measured depth activity distributions for every target nucleus and proton beam energy were obtained using BOLPs-RGp. The form of the depth activity distribution was verified, and the data were constructed taking into account the time-dependent change of that form. The time dependence of an activity distribution could be represented by two half-lives. The Gaussian form of the lateral distribution of the activity pencil beam kernel was determined by the effect of multiple Coulomb scattering. Thus, activity pencil beam data incorporating time dependence were obtained in this study. The simulation of imaging of the proton-irradiated volume in a patient body using target nuclear fragment reactions was feasible with the developed APB algorithm taking time dependence into account. With the use of the APB algorithm, it is suggested that a simulation system for activity distributions with levels of both accuracy and calculation time appropriate for clinical use can be constructed.
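
    A schematic of how an activity-pencil-beam kernel of the kind described above can be assembled: a depth profile times a lateral Gaussian (multiple Coulomb scattering) times a two-half-life time factor. The depth curve, lateral spread, half-lives, and weights below are placeholders, not the measured BOLPs-RGp data.

      import numpy as np

      z = np.linspace(0.0, 250.0, 501)                   # depth [mm]
      r = np.linspace(-30.0, 30.0, 121)                  # lateral offset [mm]

      depth_profile = np.exp(-0.5 * ((z - 180.0) / 40.0) ** 2)   # placeholder for the measured depth curve
      sigma = 3.0 + 0.02 * z                                     # assumed lateral spread growing with depth [mm]
      T1, T2, a1 = 2.0 * 60.0, 20.0 * 60.0, 0.6                  # two half-lives [s] and the weight of the first

      def activity(t_sec):
          """A(z, r, t) = D(z) * Gaussian(r; sigma(z)) * (a1*2^(-t/T1) + (1-a1)*2^(-t/T2))."""
          lateral = np.exp(-0.5 * (r[None, :] / sigma[:, None]) ** 2) / (np.sqrt(2.0 * np.pi) * sigma[:, None])
          decay = a1 * 2.0 ** (-t_sec / T1) + (1.0 - a1) * 2.0 ** (-t_sec / T2)
          return depth_profile[:, None] * lateral * decay        # shape (len(z), len(r))

      A0, A600 = activity(0.0), activity(600.0)
      print("fraction of the initial activity remaining after 10 min:", round(A600.sum() / A0.sum(), 3))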

  4. Provably secure time distribution for the electric grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith IV, Amos M; Evans, Philip G; Williams, Brian P

    We demonstrate a quantum time distribution (QTD) method that combines the precision of optical timing techniques with the integrity of quantum key distribution (QKD). Critical infrastructure is dependent on microprocessor- and programmable-logic-based monitoring and control systems. The distribution of timing information across the electric grid is accomplished by GPS signals, which are known to be vulnerable to spoofing. We demonstrate a method for synchronizing remote clocks based on the arrival time of photons in a modified QKD system. This has the advantage that the signal can be verified by examining the quantum states of the photons, similar to QKD.

  5. A program for performing exact quantum dynamics calculations using cylindrical polar coordinates: A nanotube application

    NASA Astrophysics Data System (ADS)

    Skouteris, Dimitris; Gervasi, Osvaldo; Laganà, Antonio

    2009-03-01

    A program that uses the time-dependent wavepacket method to study the motion of structureless particles in a force field of quasi-cylindrical symmetry is presented here. The program utilises cylindrical polar coordinates to express the wavepacket, which is subsequently propagated using a Chebyshev expansion of the Schrödinger propagator. Time-dependent exit flux as well as energy-dependent S matrix elements can be obtained for all states of the particle (describing its angular momentum component along the nanotube axis and the excitation of the radial degree of freedom in the cylinder). The program has been used to study the motion of an H atom across a carbon nanotube.
    Program summary
    Program title: CYLWAVE
    Catalogue identifier: AECL_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECL_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 3673
    No. of bytes in distributed program, including test data, etc.: 35 237
    Distribution format: tar.gz
    Programming language: Fortran 77
    Computer: RISC workstations
    Operating system: UNIX
    RAM: 120 MBytes
    Classification: 16.7, 16.10
    External routines: SUNSOFT performance library (not essential); TFFT2D.F (Temperton Fast Fourier Transform) and BESSJ.F (from Numerical Recipes, for the calculation of Bessel functions) are included in the distribution file.
    Nature of problem: Time evolution of the state of a structureless particle in a quasi-cylindrical potential.
    Solution method: Time-dependent wavepacket propagation.
    Running time: 50 000 s. The test run supplied with the distribution takes about 10 minutes to complete.
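
    The Chebyshev expansion of the Schrödinger propagator used by the program is easy to illustrate in a few lines. The sketch below (Python, purely illustrative and unrelated to the distributed Fortran code) propagates a state under a small Hamiltonian matrix H by expanding exp(-iHt) in Chebyshev polynomials with Bessel-function coefficients (hbar = 1).

      import numpy as np
      from scipy.special import jv          # Bessel functions J_k
      from scipy.linalg import expm

      def chebyshev_propagate(H, psi0, t, n_terms=None):
          """Approximate psi(t) = exp(-i H t) psi0 via the Chebyshev expansion."""
          evals = np.linalg.eigvalsh(H)                 # used only to scale the spectrum
          e_min, e_max = evals[0], evals[-1]
          a = 0.5 * (e_max - e_min)                     # half-width of the spectrum
          b = 0.5 * (e_max + e_min)                     # centre of the spectrum
          Hs = (H - b * np.eye(len(H))) / a             # spectrum mapped into [-1, 1]
          if n_terms is None:
              n_terms = int(a * t) + 40                 # series converges once k > a*t

          phi_prev, phi = psi0, Hs @ psi0               # Chebyshev vectors T_0 psi, T_1 psi
          psi_t = jv(0, a * t) * phi_prev + 2 * (-1j) * jv(1, a * t) * phi
          for k in range(2, n_terms):
              phi_prev, phi = phi, 2 * Hs @ phi - phi_prev    # T_k recurrence
              psi_t += 2 * (-1j) ** k * jv(k, a * t) * phi
          return np.exp(-1j * b * t) * psi_t

      # Check against direct exponentiation for a random Hermitian H
      rng = np.random.default_rng(0)
      A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
      H = 0.5 * (A + A.conj().T)
      psi0 = np.eye(6)[0].astype(complex)
      print(np.allclose(chebyshev_propagate(H, psi0, 2.0), expm(-1j * H * 2.0) @ psi0))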

  6. The distribution of first-passage times and durations in FOREX and future markets

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya; Inoue, Jun-ichi; Scalas, Enrico

    2009-07-01

    Possible distributions are discussed for intertrade durations and first-passage processes in financial markets. The viewpoint of renewal theory is assumed. In order to represent market data with relatively long durations, two types of distributions are used, namely a distribution derived from the Mittag-Leffler survival function and the Weibull distribution. For the Mittag-Leffler type distribution, the average waiting time (residual life time) is strongly dependent on the choice of a cut-off parameter tmax, whereas the results based on the Weibull distribution do not depend on such a cut-off. Therefore, a Weibull distribution is more convenient than a Mittag-Leffler type if one wishes to evaluate relevant statistics such as the average waiting time in financial markets with long durations. On the other hand, we find that the Gini index is rather independent of the cut-off parameter. Based on the above considerations, we propose a good candidate for describing the distribution of first-passage times in a market: the Weibull distribution with a power-law tail. This distribution bridges the gap between theoretical and empirical results more effectively than a simple Weibull distribution. It should be stressed that a Weibull distribution with a power-law tail is more flexible than the Mittag-Leffler distribution, which itself can be approximated by a Weibull distribution and a power law. Indeed, the key point is that in the former case there is freedom of choice for the exponent of the power law attached to the Weibull distribution, which can exceed 1 in order to reproduce decays faster than possible with a Mittag-Leffler distribution. We also give a useful formula to determine an optimal crossover point that minimizes the difference between the empirical average waiting time and the one predicted from renewal theory. Moreover, we discuss the limitations of our distributions by applying them to the analysis of the BTP future and calculating the average waiting time. We find that our distribution is applicable as long as durations follow a Weibull law for short times and do not have too heavy a tail.
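
    The renewal-theory statistic at issue, the mean (residual) waiting time, is E[tau^2] / (2 E[tau]) for a stationary renewal process with inter-event durations tau, so it is dominated by the tail of the duration distribution. The sketch below is illustrative only (not the authors' analysis); a Pareto tail is used merely as a stand-in for the heavy Mittag-Leffler tail, to show why a cut-off barely matters for a Weibull but matters greatly for a power law.

      import numpy as np
      from scipy.stats import weibull_min, pareto

      rng = np.random.default_rng(1)

      def mean_residual_waiting(samples, t_max):
          """Renewal-theory mean waiting time w = E[tau^2] / (2 E[tau]),
          estimated from durations observed up to a cut-off t_max."""
          tau = samples[samples <= t_max]
          return np.mean(tau**2) / (2.0 * np.mean(tau))

      weibull_tau = weibull_min.rvs(c=0.6, scale=10.0, size=10**6, random_state=rng)
      heavy_tau = pareto.rvs(b=1.5, size=10**6, random_state=rng)   # power-law stand-in

      for t_max in (1e2, 1e4, 1e6):
          print("t_max=%8g   Weibull w=%8.2f   power-law w=%12.2f"
                % (t_max, mean_residual_waiting(weibull_tau, t_max),
                          mean_residual_waiting(heavy_tau, t_max)))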

  7. Hydration-Dependent Dynamical Modes in Xyloglucan from Molecular Dynamics Simulation of 13C NMR Relaxation Times and Their Distributions.

    PubMed

    Chen, Pan; Terenzi, Camilla; Furó, István; Berglund, Lars A; Wohlert, Jakob

    2018-05-15

    Macromolecular dynamics in biological systems, which play a crucial role for biomolecular function and activity at ambient temperature, depend strongly on moisture content. Yet, a generally accepted quantitative model of hydration-dependent phenomena based on local relaxation and diffusive dynamics of both the polymer and its adsorbed water is still missing. In this work, atomistic-scale spatial distributions of motional modes are calculated using molecular dynamics simulations of hydrated xyloglucan (XG). These are shown to reproduce experimental hydration-dependent 13C NMR longitudinal relaxation times (T1) at room temperature, and relevant features of their broad distributions, which are indicative of locally heterogeneous polymer reorientational dynamics. At low hydration, the self-diffusion behavior of water shows that water molecules are confined to particular locations in the randomly aggregated XG network while the average polymer segmental mobility remains low. Upon increasing water content, the hydration network becomes mobile and fully accessible to individual water molecules, and the motion of hydrated XG segments becomes faster. Yet, the polymer network retains a heterogeneous gel-like structure even at the highest level of hydration. We show that the observed distribution of relaxation times arises from the spatial heterogeneity of chain mobility, which in turn is a result of the heterogeneous distribution of water-chain and chain-chain interactions. Our findings contribute to the picture of hydration-dependent dynamics in other macromolecules such as proteins, DNA, and synthetic polymers, and hold important implications for the mechanical properties of polysaccharide matrices in plants and plant-based materials.

  8. Time-Frequency Distribution Analyses of Ku-Band Radar Doppler Echo Signals

    NASA Astrophysics Data System (ADS)

    Bujaković, Dimitrije; Andrić, Milenko; Bondžulić, Boban; Mitrović, Srđan; Simić, Slobodan

    2015-03-01

    Real radar echo signals from a pedestrian, a vehicle and a group of helicopters are analyzed in order to maximize signal energy around the central Doppler frequency in the time-frequency plane. An optimization preserving this concentration is suggested, based on three well-known concentration measures. Various window functions and time-frequency distributions were the optimization inputs. Experiments conducted on one analytic and three real signals show that the energy concentration depends significantly on the time-frequency distribution and window function used, for all three criteria.

  9. Statistical time-dependent model for the interstellar gas

    NASA Technical Reports Server (NTRS)

    Gerola, H.; Kafatos, M.; Mccray, R.

    1974-01-01

    We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.

  10. Combined risk assessment of nonstationary monthly water quality based on Markov chain and time-varying copula.

    PubMed

    Shi, Wei; Xia, Jun

    2017-02-01

    Water quality risk management is a topic of intense global research, closely linked with sustainable water resource development. Ammonium nitrogen (NH3-N) and the permanganate index (CODMn), the focus indicators in the Huai River Basin, are selected to reveal their joint transition laws based on Markov theory. The time-varying moments model, with either time or a land cover index as explanatory variable, is applied to build the time-varying marginal distributions of the water quality time series. A time-varying copula model, which takes the non-stationarity in the marginal distributions and/or the time variation in the dependence structure between the water quality series into consideration, is constructed to describe a bivariate frequency analysis for the NH3-N and CODMn series at the same monitoring gauge. The large first-order Markov joint transition probability indicates that water quality states Class Vw, Class IV and Class III will occur easily in the water body of Bengbu Sluice. Both the marginal distribution and copula models are nonstationary, and the explanatory variable time yields better performance than the land cover index in describing the non-stationarities in the marginal distributions. In modelling the changes of the dependence structure, the time-varying copula has a better fitting performance than a copula with a constant or time-trend dependence parameter. The largest synchronous encounter risk probability of NH3-N and CODMn simultaneously reaching Class V is 50.61%, while the asynchronous encounter risk probability is largest when NH3-N and CODMn are inferior to the Class V and Class IV water quality standards, respectively.

  11. Total spectral distributions from Hawking radiation

    NASA Astrophysics Data System (ADS)

    Broda, Bogusław

    2017-11-01

    Taking into account the time dependence of the Hawking temperature and finite evaporation time of the black hole, the total spectral distributions of the radiant energy and of the number of particles have been explicitly calculated and compared to their temporary (initial) blackbody counterparts (spectral exitances).

  12. Time-resolved measurements of the angular distribution of lasing at 23.6 nm in Ne-like germanium

    NASA Astrophysics Data System (ADS)

    Kodama, R.; Neely, D.; Dwivedi, L.; Key, M. H.; Krishnan, J.; Lewis, C. L. S.; O'Neill, D.; Norreys, P.; Pert, G. J.; Ramsden, S. A.; Tallents, G. J.; Uhomoibhi, J.; Zhang, J.

    1992-06-01

    The time dependence of the angular distribution of soft X-ray lasing at 23.6 nm in Ne-like germanium has been measured using a streak camera. Slabs of germanium have been irradiated over ≈ 22 mm length × 100 μm width with three line focussed beams of the SERC Rutherford Appleton Laboratory VULCAN laser at 1.06 μm wavelength. The laser beam sweeps in time towards the target surface plane and the divergence broadens with time. The change of the peak intensity pointing and the broadening of the profile with time are consistent with expectations of the time dependence of refraction and divergence due to density gradients in the plasma.

  13. KINETICS OF LOW SOURCE REACTOR STARTUPS. PART II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    hurwitz, H. Jr.; MacMillan, D.B.; Smith, J.H.

    1962-06-01

    A computational technique is described for calculating the probability distribution of power level during a low-source reactor startup. The technique uses a mathematical model for the time-dependent probability distribution of neutron and precursor concentrations, with a finite neutron lifetime, one group of delayed neutron precursors, and no spatial dependence. Results obtained with the technique are given. (auth)

  14. The availability of Landsat data: Past, present, and future

    USGS Publications Warehouse

    Draeger, W.C.; Holm, T.M.; Lauer, D.T.; Thompson, R.J.

    1997-01-01

    It has long been recognized that the success of the Landsat program would depend on an effective distribution of its data to a wide variety of users, worldwide, in a timely manner. Since 1972, nearly $250 million worth of data have been distributed by a network of ground stations around the world. The policies of the U.S. Government affecting the distribution, availability, and pricing of Landsat data have been controversial, and have been strongly affected by the attempts to commercialize the program. At the present time, data are being distributed in the U.S. by either government or commercial entities, depending on the date of acquisition of the data in question and whether or not the customer is affiliated with the Federal Government. Although the future distribution of Landsat data is currently under discussion, it seems likely that data distribution initially will be the responsibility of NOAA. In any case, the long-term archive and distribution of all Landsat data will be the responsibility of the Department of Interior's U.S. Geological Survey.

  15. Reaction Event Counting Statistics of Biopolymer Reaction Systems with Dynamic Heterogeneity.

    PubMed

    Lim, Yu Rim; Park, Seong Jun; Park, Bo Jung; Cao, Jianshu; Silbey, Robert J; Sung, Jaeyoung

    2012-04-10

    We investigate the reaction event counting statistics (RECS) of an elementary biopolymer reaction in which the rate coefficient depends on the states of the biopolymer and the surrounding environment, and discover a universal kinetic phase transition in the RECS of the reaction system with dynamic heterogeneity. From an exact analysis of a general model of elementary biopolymer reactions, we find that the variance in the number of reaction events depends on the square of the mean number of reaction events when the measurement time is short compared with the relaxation time scale of the rate coefficient fluctuations, which does not conform to renewal statistics. On the other hand, when the measurement time interval is much longer than the relaxation time of the rate coefficient fluctuations, the variance becomes linearly proportional to the mean reaction number, in accordance with renewal statistics. Gillespie's stochastic simulation method is generalized for a reaction system with rate coefficient fluctuations. The simulation results confirm the correctness of the analytic results for the time-dependent mean and variance of the reaction event number distribution. On the basis of the obtained results, we propose a method of quantitative analysis for the reaction event counting statistics of reaction systems with rate coefficient fluctuations, which enables one to extract information about the magnitude and the relaxation times of the fluctuating reaction rate coefficient, without the bias that can be introduced by assuming a particular kinetic model of conformational dynamics and conformation-dependent reactivity. An exact relationship is established between higher moments of the reaction event number distribution and the multitime correlation of the reaction rate, for reaction systems with a nonequilibrium initial state distribution as well as for systems with the equilibrium initial state distribution.
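
    As a rough illustration of the effect described above (not the authors' model), the following sketch runs a Gillespie-type simulation in which the rate coefficient switches between two values, mimicking dynamic heterogeneity, and then counts reaction events in measurement windows that are short or long compared with the relaxation time of the rate fluctuations (here about 25 time units). All parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)

      def simulate_event_times(t_total, k=(0.1, 2.0), gamma=0.02):
          """Reaction events with a rate coefficient switching between two values
          (two-state dynamic heterogeneity; relaxation rate ~ 2*gamma)."""
          t, state, events = 0.0, 0, []
          while t < t_total:
              rate_switch, rate_react = gamma, k[state]
              t += rng.exponential(1.0 / (rate_switch + rate_react))
              if rng.random() < rate_react / (rate_switch + rate_react):
                  events.append(t)       # a reaction event occurred
              else:
                  state = 1 - state      # the environment (conformation) switched
          return np.array(events)

      t_total = 4e5
      events = simulate_event_times(t_total)

      for window in (0.5, 2000.0):       # short vs long compared with the relaxation time
          counts, _ = np.histogram(events, bins=np.arange(0.0, t_total, window))
          m, v = counts.mean(), counts.var()
          print("window=%8g  mean=%10.3f  var/mean=%8.2f  (var-mean)/mean^2=%7.3f"
                % (window, m, v / m, (v - m) / m**2))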

  16. Time-evolution of grain size distributions in random nucleation and growth crystallization processes

    NASA Astrophysics Data System (ADS)

    Teran, Anthony V.; Bill, Andreas; Bergmann, Ralf B.

    2010-02-01

    We study the time dependence of the grain size distribution N(r,t) during crystallization of a d-dimensional solid. A partial differential equation, including a source term for nuclei and a growth law for grains, is solved analytically for any dimension d. We discuss solutions obtained for processes described by the Kolmogorov-Avrami-Mehl-Johnson model for random nucleation and growth (RNG). Nucleation and growth are set on the same footing, which leads to a time-dependent decay of both effective rates. We analyze in detail how model parameters, the dimensionality of the crystallization process, and time influence the shape of the distribution. The calculations show that the dynamics of the effective nucleation and effective growth rates play an essential role in determining the final form of the distribution obtained at full crystallization. We demonstrate that for one class of nucleation and growth rates, the distribution evolves in time into the logarithmic-normal (lognormal) form discussed earlier by Bergmann and Bill [J. Cryst. Growth 310, 3135 (2008)]. We also obtain an analytical expression for the finite maximal grain size at all times. The theory allows for the description of a variety of RNG crystallization processes in thin films and bulk materials. Expressions useful for experimental data analysis are presented for the grain size distribution and the moments in terms of fundamental and measurable parameters of the model.

  17. Diffusion of active chiral particles

    NASA Astrophysics Data System (ADS)

    Sevilla, Francisco J.

    2016-12-01

    The diffusion of chiral active Brownian particles in three-dimensional space is studied analytically, by consideration of the corresponding Fokker-Planck equation for the probability density of finding a particle at position x, moving along the direction v̂, at time t, and numerically, by the use of Langevin dynamics simulations. The analysis is focused on the marginal probability density of finding a particle at a given location and at a given time (independently of its direction of motion), which is found from an infinite hierarchy of differential-recurrence relations for the coefficients that appear in the multipole expansion of the probability distribution, which contains the whole kinematic information. This approach allows the explicit calculation of the time dependence of the mean-squared displacement and of the kurtosis of the marginal probability distribution, quantities from which the effective diffusion coefficient and the "shape" of the position distribution are examined. Oscillations between two characteristic values were found in the time evolution of the kurtosis, namely, between the value that corresponds to a Gaussian and the one that corresponds to a distribution of spherical-shell shape. In the case of an ensemble of particles, each one rotating around a uniformly distributed random axis, evidence is found of the so-called "anomalous, yet Brownian, diffusion" effect, in which particles follow a non-Gaussian distribution of positions yet the mean-squared displacement is a linear function of time.
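
    A minimal Langevin-dynamics sketch of this kind of system is given below; for brevity it is written for planar (2D) chiral active Brownian particles rather than the full three-dimensional case treated in the paper, and all parameter values are illustrative. It estimates the mean-squared displacement and a kurtosis-like ratio of the position distribution.

      import numpy as np

      rng = np.random.default_rng(3)

      def chiral_abp(n_particles=2000, n_steps=2000, dt=1e-3,
                     v0=1.0, omega=2.0, d_r=0.5, d_t=0.01):
          """2D chiral active Brownian particles: self-propulsion speed v0, angular
          drift omega (chirality), rotational diffusion d_r, translational diffusion d_t."""
          x = np.zeros((n_particles, 2))
          theta = rng.uniform(0.0, 2.0 * np.pi, n_particles)
          msd, kurt = [], []
          for _ in range(n_steps):
              heading = np.column_stack((np.cos(theta), np.sin(theta)))
              x += v0 * heading * dt + np.sqrt(2.0 * d_t * dt) * rng.normal(size=x.shape)
              theta += omega * dt + np.sqrt(2.0 * d_r * dt) * rng.normal(size=n_particles)
              r2 = np.sum(x**2, axis=1)
              msd.append(r2.mean())
              kurt.append(np.mean(r2**2) / r2.mean()**2)   # <r^4>/<r^2>^2; equals 2 for a 2D Gaussian
          return np.array(msd), np.array(kurt)

      msd, kurt = chiral_abp()
      print("effective diffusion estimate:", msd[-1] / (4 * 2.0))   # MSD ~ 4 D t, total time = 2.0
      print("kurtosis-like ratio, early vs late:", kurt[10], kurt[-1])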

  18. Directed networks' different link formation mechanisms causing degree distribution distinction

    NASA Astrophysics Data System (ADS)

    Behfar, Stefan Kambiz; Turkina, Ekaterina; Cohendet, Patrick; Burger-Helmchen, Thierry

    2016-11-01

    Within undirected networks, scientists have shown much interest in presenting power-law features. For instance, Barabási and Albert (1999) claimed that a common property of many large networks is that vertex connectivity follows a scale-free power-law distribution, and in another study Barabási et al. (2002) showed power-law evolution in the social network of scientific collaboration. At the same time, Jiang et al. (2011) discussed deviation from power-law distribution; others indicated that size effects (Bagrow et al., 2008), information filtering mechanisms (Mossa et al., 2002), and birth and death processes (Shi et al., 2005) could account for this deviation. Within directed networks, many authors have considered that outlinks follow a similar mechanism of creation as inlinks (Faloutsos et al., 1999; Krapivsky et al., 2001; Tanimoto, 2009), with the link creation rate being a linear function of node degree, resulting in a power-law shape for both the indegree and outdegree distributions. Some other authors have assumed that directed networks, such as scientific collaboration or citation networks, behave as undirected, resulting in a power-law degree distribution accordingly (Barabási et al., 2002). In contrast, we claim that (1) outlinks feature different degree distributions than inlinks, with different link formation mechanisms causing the distinction, and (2) the in/outdegree distribution distinction holds at different levels of system decomposition; this distribution distinction is therefore a property of directed networks. First, we emphasize in/outlink formation mechanisms as causal factors for the distinction between indegree and outdegree distributions (a distinction already noticed in Barker et al. (2010) and Baxter et al. (2006)) within a sample network of OSS projects as well as the Java software corpus viewed as a network. Second, we analyze whether this distribution distinction holds at different levels of system decomposition: open-source-software (OSS) project-project dependency within a cluster, package-package dependency within a project, and class-class dependency within a package. We conclude that indegree and outdegree dependencies do not lead to similar types of degree distributions: indegree dependencies follow an overall power-law distribution (or a power law with a flat top or exponential cut-off in some cases), while outdegree dependencies do not follow a heavy-tailed distribution.
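
    An easy way to check such a distinction on any directed dependency graph is to compare the empirical in- and outdegree distributions directly. The sketch below uses networkx with a synthetic scale-free directed graph standing in for a real project/package/class dependency network; the tail-slope estimate is deliberately crude and purely illustrative.

      import numpy as np
      import networkx as nx

      # Synthetic directed graph as a stand-in for a dependency network
      G = nx.scale_free_graph(20000, alpha=0.41, beta=0.54, gamma=0.05, seed=42)

      def ccdf(degrees):
          """Empirical complementary CDF P(K >= k) of a degree sequence."""
          deg = np.sort(np.array(degrees))
          return deg, 1.0 - np.arange(len(deg)) / len(deg)

      in_deg = [d for _, d in G.in_degree()]
      out_deg = [d for _, d in G.out_degree()]

      for name, deg in (("indegree", in_deg), ("outdegree", out_deg)):
          k, p = ccdf(deg)
          # Crude tail-slope estimate from the upper part of the CCDF (illustrative only)
          mask = k >= max(1, np.percentile(k, 90))
          slope = np.polyfit(np.log(k[mask] + 1), np.log(p[mask] + 1e-12), 1)[0]
          print("%9s: max=%5d  mean=%6.2f  approximate tail slope=%6.2f"
                % (name, max(deg), np.mean(deg), slope))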

  19. Synchrony detection and amplification by silicon neurons with STDP synapses.

    PubMed

    Bofill-i-petit, Adria; Murray, Alan F

    2004-09-01

    Spike-timing dependent synaptic plasticity (STDP) is a form of plasticity driven by precise spike-timing differences between presynaptic and postsynaptic spikes. Thus, the learning rules underlying STDP are suitable for learning neuronal temporal phenomena such as spike-timing synchrony. It is well known that weight-independent STDP creates unstable learning processes resulting in balanced bimodal weight distributions. In this paper, we present a neuromorphic analog very large scale integration (VLSI) circuit that contains a feedforward network of silicon neurons with STDP synapses. The learning rule implemented can be tuned to have a moderate level of weight dependence. This helps stabilise the learning process and still generates binary weight distributions. From on-chip learning experiments we show that the chip can detect and amplify hierarchical spike-timing synchrony structures embedded in noisy spike trains. The weight distributions of the network emerging from learning are bimodal.
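
    A minimal sketch of a pair-based STDP rule with tunable weight dependence is shown below; the exponent mu interpolates between weight-independent (additive, mu = 0) and fully multiplicative (mu = 1) updates, and all parameter names and values are illustrative rather than taken from the chip described above.

      import numpy as np

      def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, mu=0.3, w_max=1.0):
          """Pair-based STDP weight change for a spike-time difference dt = t_post - t_pre.

          mu = 0 gives weight-independent (additive) STDP, mu = 1 fully multiplicative;
          intermediate values give the moderate weight dependence discussed above.
          """
          if dt > 0:     # pre before post: potentiation, scaled by distance to w_max
              return a_plus * ((w_max - w) ** mu) * np.exp(-dt / tau)
          else:          # post before pre: depression, scaled by the current weight
              return -a_minus * (w ** mu) * np.exp(dt / tau)

      # Drive one synapse with random pre/post spike-time differences and watch the weight
      rng = np.random.default_rng(4)
      w = 0.5
      for _ in range(20000):
          dt = rng.normal(loc=2.0, scale=15.0)     # slightly pre-before-post on average
          w = np.clip(w + stdp_update(w, dt), 0.0, 1.0)
      print("final weight:", w)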

  20. Distribution of Curcumin and THC in Peripheral Blood Mononuclear Cells Isolated from Healthy Individuals and Patients with Chronic Lymphocytic Leukemia.

    PubMed

    Bolger, Gordon T; Licollari, Albert; Tan, Aimin; Greil, Richard; Pleyer, Lisa; Vcelar, Brigitta; Majeed, Muhammad; Sordillo, Peter

    2018-01-01

    Background/Aim: Curcumin is being widely investigated for its anticancer properties, and studies in the literature suggest that curcumin distributes to a higher degree in tumor versus non-tumor cells. In the current study, we report on the distribution of curcumin and its metabolism to THC in PBMC from healthy individuals and chronic lymphocytic leukemia (CLL) patients following exposure to Lipocurc™ (liposomal curcumin). Materials and Methods: The time- and temperature-dependent distribution of liposomal curcumin and its metabolism to tetrahydrocurcumin (THC) were measured in vitro in human peripheral blood mononuclear cells (PBMC) obtained from healthy individuals (PBMC-HI; cryopreserved and freshly isolated PBMC) and CLL patients (cryopreserved PBMC) with lymphocyte counts ranging from 17-58×10^6 cells/ml (PBMC-CLL,Grp1) and >150×10^6 cells/ml (PBMC-CLL,Grp2). PBMC were incubated in plasma-protein-supplemented media with Lipocurc™ for 2-16 min at 37°C and 4°C, and the cell and medium levels of curcumin were determined by LC-MS/MS. Results: PBMC from CLL patients displayed a 2.2-2.6-fold higher distribution of curcumin compared to PBMC-HI. Curcumin distribution into PBMC-CLL,Grp1/Grp2 ranged from 384.75-574.50 ng/g w.w. of cell pellet and was greater compared to PBMC-HI, which ranged from 122.27-220.59 ng/g w.w. of cell pellet, following incubation for up to 15-16 min at 37°C. The distribution of curcumin into PBMC-CLL,Grp2 was time-dependent, in contrast to PBMC-HI, which did not display a time dependence, and there was no temperature dependence of curcumin distribution in either cell type. Curcumin was metabolized to THC in PBMC. The metabolism of curcumin to THC was not markedly different between PBMC-HI (range=23.94-42.04 ng/g w.w. cell pellet) and PBMC-CLL,Grp1/Grp2 (range=23.08-48.22 ng/g w.w. cell pellet). However, a significantly greater time and temperature dependence was noted for THC in PBMC-CLL,Grp2 compared to PBMC-HI. Conclusion: Curcumin distribution into PBMC from CLL patients was higher compared to PBMC from healthy individuals, while metabolism to THC was similar. The potential for a greater distribution of curcumin into PBMC from CLL patients may be of therapeutic benefit. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  1. Manual choice reaction times in the rate-domain

    PubMed Central

    Harris, Christopher M.; Waddington, Jonathan; Biscione, Valerio; Manzi, Sean

    2014-01-01

    Over the last 150 years, human manual reaction times (RTs) have been recorded countless times. Yet, our understanding of them remains remarkably poor. RTs are highly variable with positively skewed frequency distributions, often modeled as an inverse Gaussian distribution reflecting a stochastic rise to threshold (diffusion process). However, latency distributions of saccades are very close to the reciprocal Normal, suggesting that “rate” (reciprocal RT) may be the more fundamental variable. We explored whether this phenomenon extends to choice manual RTs. We recorded two-alternative choice RTs from 24 subjects, each with 4 blocks of 200 trials with two task difficulties (easy vs. difficult discrimination) and two instruction sets (urgent vs. accurate). We found that rate distributions were, indeed, very close to Normal, shifting to lower rates with increasing difficulty and accuracy, and for some blocks they appeared to become left-truncated, but still close to Normal. Using autoregressive techniques, we found temporal sequential dependencies for lags of at least 3. We identified a transient and steady-state component in each block. Because rates were Normal, we were able to estimate autoregressive weights using the Box-Jenkins technique, and convert to a moving average model using z-transforms to show explicit dependence on stimulus input. We also found a spatial sequential dependence for the previous 3 lags depending on whether the laterality of previous trials was repeated or alternated. This was partially dissociated from temporal dependency as it only occurred in the easy tasks. We conclude that 2-alternative choice manual RT distributions are close to reciprocal Normal and not the inverse Gaussian. This is not consistent with stochastic rise to threshold models, and we propose a simple optimality model in which reward is maximized to yield an optimal rate, and hence an optimal time to respond. We discuss how it might be implemented. PMID:24959134
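
    The model comparison at the heart of this argument is easy to prototype: fit a Normal distribution to the rates (reciprocal RTs) and an inverse Gaussian to the RTs themselves, then compare log-likelihoods on the same data. The sketch below uses synthetic data generated in the rate domain purely for illustration; it is not the authors' analysis pipeline.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)

      # Synthetic two-alternative choice RTs (seconds), standing in for one block of trials
      rates = rng.normal(loc=3.0, scale=0.6, size=800)      # reciprocal RTs ~ Normal ("rate domain")
      rts = 1.0 / rates[rates > 0.5]                        # discard implausible rates

      # Hypothesis 1: rates (1/RT) are Normal -> "reciprocal Normal" RT distribution
      mu, sigma = stats.norm.fit(1.0 / rts)
      ll_recinormal = np.sum(stats.norm.logpdf(1.0 / rts, mu, sigma) - 2.0 * np.log(rts))
      # (the -2*log(rt) term is the Jacobian of the change of variables r = 1/t)

      # Hypothesis 2: RTs follow an inverse Gaussian (stochastic rise to threshold)
      shape, loc, scale = stats.invgauss.fit(rts, floc=0.0)
      ll_invgauss = np.sum(stats.invgauss.logpdf(rts, shape, loc, scale))

      print("log-likelihood, reciprocal Normal:", round(ll_recinormal, 1))
      print("log-likelihood, inverse Gaussian :", round(ll_invgauss, 1))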

  2. A model-free characterization of recurrences in stationary time series

    NASA Astrophysics Data System (ADS)

    Chicheportiche, Rémy; Chakraborti, Anirban

    2017-05-01

    Study of recurrences in earthquakes, climate, financial time series, etc. is crucial to better forecast disasters and limit their consequences. Most previous phenomenological studies of recurrences have involved only a long-ranged autocorrelation function, and have ignored the multi-scaling properties induced by potential higher-order dependencies. We argue that copulas constitute a natural model-free framework to study non-linear dependencies in time series and related concepts like recurrences. Consequently, we find that (i) non-linear dependences do impact both the statistics and dynamics of recurrence times, and (ii) the scaling arguments for the unconditional distribution may not be applicable. Hence, fitting and/or simulating the intertemporal distribution of recurrence intervals is very much system specific and cannot actually benefit from universal features, in contrast to previous claims. This has important implications in epilepsy prognosis and financial risk management applications.

  3. Idealized models of the joint probability distribution of wind speeds

    NASA Astrophysics Data System (ADS)

    Monahan, Adam H.

    2018-05-01

    The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
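
    The construction of the bivariate Weibull distribution described above is straightforward to simulate: draw two correlated, isotropic, mean-zero Gaussian velocity vectors, take their speeds (which are Rayleigh distributed), and apply a power transformation. A minimal sketch with illustrative parameter values (not the authors' estimation code):

      import numpy as np

      rng = np.random.default_rng(6)

      def bivariate_weibull_sample(n, shape=2.5, scale=8.0, rho=0.7):
          """Sample correlated wind speeds (w1, w2) with Weibull marginals, built from
          correlated isotropic mean-zero Gaussian velocity components."""
          cov = np.array([[1.0, rho], [rho, 1.0]])
          u = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # correlated u-components
          v = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # correlated v-components
          speed = np.hypot(u, v)                                  # Rayleigh marginals
          # Power transform: if R is Rayleigh(1), then (R^2/2)^(1/k) is Weibull with shape k
          return scale * (0.5 * speed**2) ** (1.0 / shape)

      w = bivariate_weibull_sample(200000)
      print("marginal means:", w.mean(axis=0))          # approx scale * Gamma(1 + 1/shape)
      print("speed-speed correlation:", np.corrcoef(w[:, 0], w[:, 1])[0, 1])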

  4. Deviation from the Forster theory for time-dependent donor decays for randomly distributed molecules in solution

    NASA Astrophysics Data System (ADS)

    Lakowicz, Joseph R.; Szmacinski, Henryk; Johnson, Michael L.

    1990-05-01

    We examined the time-dependent donor decays of 2-aminopurine (2-APU) in the presence of increasing amounts of the acceptor 2-aminobenzophenone (2-ABP). As the concentration of 2-ABP increases, the frequency responses diverge from those predicted by Forster theory. The data were found to be consistent with modified Forster equations, but at this time we do not claim that these modified expressions provide a correct molecular description of this donor-acceptor system. To the best of our knowledge this is the first paper to report a failure of the Forster theory for randomly distributed donors and acceptors.

  5. LETTER TO THE EDITOR: Exact energy distribution function in a time-dependent harmonic oscillator

    NASA Astrophysics Data System (ADS)

    Robnik, Marko; Romanovski, Valery G.; Stöckmann, Hans-Jürgen

    2006-09-01

    Following a recent work by Robnik and Romanovski (2006 J. Phys. A: Math. Gen. 39 L35; 2006 Open Syst. Inf. Dyn. 13 197-222), we derive an explicit formula for the universal distribution function of the final energies in a time-dependent 1D harmonic oscillator, whose functional form does not depend on the details of the frequency ω(t) and is closely related to the conservation of the adiabatic invariant. The normalized distribution function is P(x) = π⁻¹ (2μ² - x²)⁻¹ᐟ², where x = E₁ - Ē₁; E₁ is the final energy, Ē₁ is its average value, and μ² is the variance of E₁. Ē₁ and μ² can be calculated exactly using the WKB approach to all orders.

  6. Application of a time-dependent coalescence process for inferring the history of population size changes from DNA sequence data.

    PubMed

    Polanski, A; Kimmel, M; Chakraborty, R

    1998-05-12

    The distribution of pairwise nucleotide differences from data on a sample of DNA sequences from a given segment of the genome has been used in the past to draw inferences about the past history of population size changes. However, all earlier methods assume a given model of population size change (such as sudden expansion), the parameters of which (e.g., time and amplitude of expansion) are fitted to the observed distributions of nucleotide differences among pairwise comparisons of all DNA sequences in the sample. Our theory indicates that for any time-dependent population size, N(tau) (in which time tau is counted backward from the present), a time-dependent coalescence process yields the distribution, p(tau), of the time of coalescence between two DNA sequences randomly drawn from the population. Prediction of p(tau) and N(tau) requires an inverse Laplace transform, which is known to be numerically unstable. Nevertheless, simulated data obtained from three models of monotone population change (stepwise, exponential, and logistic) indicate that the pattern of a past population size change leaves its signature on the pattern of DNA polymorphism. Application of the theory to published mtDNA sequences indicates that the current mtDNA sequence variation is not inconsistent with a logistic growth of the human population.

  7. Parallel Computation and Visualization of Three-dimensional, Time-dependent, Thermal Convective Flows

    NASA Technical Reports Server (NTRS)

    Wang, P.; Li, P.

    1998-01-01

    A high-resolution numerical study on parallel systems is reported for three-dimensional, time-dependent, thermal convective flows. A parallel implementation of the finite volume method with a multigrid scheme is discussed, and a parallel visualization system is developed on distributed systems for visualizing the flow.

  8. Quasi-parton distribution functions, momentum distributions, and pseudo-parton distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radyushkin, Anatoly V.

    Here, we show that quasi-PDFs may be treated as hybrids of PDFs and primordial rest-frame momentum distributions of partons. This results in a complicated convolution nature of quasi-PDFs that necessitates using large p₃ ≳ 3 GeV momenta to get reasonably close to the PDF limit. Furthermore, as an alternative approach, we propose to use pseudo-PDFs P(x, z₃²) that generalize the light-front PDFs onto spacelike intervals and are related to Ioffe-time distributions M(ν, z₃²), functions of the Ioffe time ν = p₃z₃ and the distance parameter z₃², with respect to which they display perturbative evolution for small z₃. In this form, one may divide out the z₃² dependence coming from the primordial rest-frame distribution and from the problematic factor due to lattice renormalization of the gauge link. The ν-dependence remains intact and determines the shape of the PDFs.

  9. Quasi-parton distribution functions, momentum distributions, and pseudo-parton distribution functions

    DOE PAGES

    Radyushkin, Anatoly V.

    2017-08-28

    Here, we show that quasi-PDFs may be treated as hybrids of PDFs and primordial rest-frame momentum distributions of partons. This results in a complicated convolution nature of quasi-PDFs that necessitates using large p₃ ≳ 3 GeV momenta to get reasonably close to the PDF limit. Furthermore, as an alternative approach, we propose to use pseudo-PDFs P(x, z₃²) that generalize the light-front PDFs onto spacelike intervals and are related to Ioffe-time distributions M(ν, z₃²), functions of the Ioffe time ν = p₃z₃ and the distance parameter z₃², with respect to which they display perturbative evolution for small z₃. In this form, one may divide out the z₃² dependence coming from the primordial rest-frame distribution and from the problematic factor due to lattice renormalization of the gauge link. The ν-dependence remains intact and determines the shape of the PDFs.

  10. Distributed Humidity Sensing in PMMA Optical Fibers at 500 nm and 650 nm Wavelengths.

    PubMed

    Liehr, Sascha; Breithaupt, Mathias; Krebber, Katerina

    2017-03-31

    Distributed measurement of humidity is a sought-after capability for various fields of application, especially in the civil engineering and structural health monitoring sectors. This article presents a method for distributed humidity sensing along polymethyl methacrylate (PMMA) polymer optical fibers (POFs) by analyzing wavelength-dependent Rayleigh backscattering and attenuation characteristics at 500 nm and 650 nm wavelengths. Spatially resolved humidity sensing is obtained from backscatter traces of a dual-wavelength optical time domain reflectometer (OTDR). Backscatter dependence, attenuation dependence as well as the fiber length change are characterized as functions of relative humidity. Cross-sensitivity effects are discussed and quantified. The evaluation of the humidity-dependent backscatter effects at the two wavelength measurements allows for distributed and unambiguous measurement of relative humidity. The technique can be readily employed with low-cost standard polymer optical fibers and commercial OTDR devices.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diwaker, E-mail: diwakerphysics@gmail.com; Chakraborty, Aniruddha

    The Smoluchowski equation with a time-dependent sink term is solved exactly. In this method, knowledge of the probability distribution P(0, s) at the origin allows the probability distribution P(x, s) at all positions to be derived. Exact solutions of the Smoluchowski equation are also provided for different cases in which the sink term has linear, constant, inverse, and exponential variation in time.

  12. Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions

    DOE PAGES

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...

    2015-11-01

    Here, we solve a simple theoretical model of time-evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, the accumulation of fissions in time, and the accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time-evolving chain populations. The equations for random-time-gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random-time-gate and triggered-time-gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time-dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time-tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
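
    The "remarkably simple Monte Carlo realization" mentioned above can be illustrated with a toy branching-process sketch: each neutron in a chain either induces a fission, producing a sampled number of new neutrons after an exponentially distributed time, or is lost to leakage/capture. The multiplicity probabilities and time constant below are illustrative placeholders, not evaluated nuclear data, and this is not the authors' code.

      import numpy as np

      rng = np.random.default_rng(7)

      # Illustrative point-model parameters (not evaluated nuclear data)
      P_FISSION = 0.3                            # probability a neutron induces fission (vs. leak/capture)
      NU_PMF = {0: 0.03, 1: 0.15, 2: 0.32, 3: 0.31, 4: 0.15, 5: 0.04}   # fission multiplicity
      TAU = 10e-9                                # mean neutron lifetime between events (s)

      def one_chain():
          """Times at which neutrons leave (leak out of) the system for a single chain."""
          t_leak, active = [], [0.0]             # one source neutron at t = 0
          while active:
              t = active.pop() + rng.exponential(TAU)
              if rng.random() < P_FISSION:
                  nu = rng.choice(list(NU_PMF), p=list(NU_PMF.values()))
                  active.extend([t] * nu)        # nu new neutrons continue the chain
              else:
                  t_leak.append(t)               # neutron leaks and may be detected
          return t_leak

      # Counting distribution of leaked neutrons per chain (a proxy for triggered-gate counting)
      counts = np.array([len(one_chain()) for _ in range(20000)])
      mean, var = counts.mean(), counts.var()
      print("mean leaked per chain = %.2f, variance = %.2f, excess (var/mean - 1) = %.2f"
            % (mean, var, var / mean - 1.0))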

  13. An energy-balance model with multiply-periodic and quasi-chaotic free oscillations. [for climate forecasting

    NASA Technical Reports Server (NTRS)

    Bhattacharya, K.; Ghil, M.

    1979-01-01

    A slightly modified version of the one-dimensional time-dependent energy-balance climate model of Ghil and Bhattacharya (1978) is presented. The albedo-temperature parameterization has been reformulated and the smoothing of the temperature distribution in the tropics has been eliminated. The model albedo depends on time-lagged temperature in order to account for finite growth and decay time of continental ice sheets. Two distinct regimes of oscillatory behavior which depend on the value of the albedo-temperature time lag are considered.

  14. A temporal-spatial postprocessing model for probabilistic run-off forecast. With a case study from Ulla-Førre with five catchments and ten lead times

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Steinsland, I.

    2012-04-01

    This work is driven by the needs of next-generation short-term optimization methodology for hydro power production. Stochastic optimization is about to be introduced, i.e. optimizing when available resources (water) and utility (prices) are uncertain. In this paper we focus on the available resources, i.e. water, where uncertainty mainly comes from uncertainty in future runoff. When optimizing a water system, all catchments and several lead times have to be considered simultaneously. Depending on the system of hydropower reservoirs, this might be a set of headwater catchments, a system of upstream/downstream reservoirs where water used from one catchment/dam arrives in a lower catchment perhaps days later, or a combination of both. The aim of this paper is therefore to construct a simultaneous probabilistic forecast for several catchments and lead times, i.e. to provide a predictive distribution for the forecasts. Stochastic optimization methods need samples/ensembles of run-off forecasts as input; hence, it should also be possible to sample from our probabilistic forecast. A post-processing approach is taken, and an error model based on a Box-Cox (power) transformation and a temporal-spatial copula model is used. It accounts for both between-catchment and between-lead-time dependencies. In operational use it is straightforward to sample run-off ensembles from this model, and the ensembles inherit the catchment and lead-time dependencies. The methodology is tested and demonstrated in the Ulla-Førre river system, where simultaneous probabilistic forecasts for five catchments and ten lead times are constructed. The methodology has enough flexibility to model operationally important features in this case study such as heteroscedasticity, lead-time-varying temporal dependency and lead-time-varying inter-catchment dependency. Our model is evaluated using the CRPS for the marginal predictive distributions and the energy score for the joint predictive distribution. It is tested against a deterministic run-off forecast, a climatology forecast and a persistence forecast, and is found to be the best probabilistic forecast for lead times greater than two. From an operational point of view the results are interesting, as the between-catchment dependency gets stronger with longer lead times.
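
    A minimal sketch of such a post-processing step is given below, assuming a matrix of historical forecast errors for each catchment/lead-time pair is available: apply a Box-Cox power transform to each margin, fit a joint Gaussian covariance across catchments and lead times (the temporal-spatial dependence), then sample ensembles and back-transform. All data and settings are illustrative; this is not the authors' operational implementation.

      import numpy as np
      from scipy import stats
      from scipy.special import inv_boxcox

      rng = np.random.default_rng(8)

      # Illustrative historical runoff data: 500 days x (5 catchments * 10 lead times)
      n_days, n_vars = 500, 5 * 10
      hist = rng.gamma(shape=2.0, scale=3.0, size=(n_days, n_vars))   # positive, skewed values

      # 1. Marginal Box-Cox transform per catchment/lead-time column
      transformed, lambdas = np.empty_like(hist), np.empty(n_vars)
      for j in range(n_vars):
          transformed[:, j], lambdas[j] = stats.boxcox(hist[:, j])

      # 2. Fit the temporal-spatial dependence as a joint Gaussian in transformed space
      mean = transformed.mean(axis=0)
      cov = np.cov(transformed, rowvar=False)

      # 3. Sample an ensemble, keep it inside the observed transformed range, back-transform
      ensemble_t = rng.multivariate_normal(mean, cov, size=100)
      ensemble_t = np.clip(ensemble_t, transformed.min(axis=0), transformed.max(axis=0))
      ensemble = np.column_stack([inv_boxcox(ensemble_t[:, j], lambdas[j])
                                  for j in range(n_vars)])
      print("ensemble shape:", ensemble.shape)    # 100 members x 50 catchment/lead-time pairs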

  15. Application of a truncated normal failure distribution in reliability testing

    NASA Technical Reports Server (NTRS)

    Groves, C., Jr.

    1968-01-01

    The statistical truncated normal distribution function is applied as a time-to-failure distribution in equipment reliability estimation. The age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.

  16. Heterogeneous network epidemics: real-time growth, variance and extinction of infection.

    PubMed

    Ball, Frank; House, Thomas

    2017-09-01

    Recent years have seen a large amount of interest in epidemics on networks as a way of representing the complex structure of contacts capable of spreading infections through the modern human population. The configuration model is a popular choice in theoretical studies since it combines the ability to specify the distribution of the number of contacts (degree) with analytical tractability. Here we consider the early real-time behaviour of the Markovian SIR epidemic model on a configuration model network using a multitype branching process. We find closed-form analytic expressions for the mean and variance of the number of infectious individuals as a function of time and the degree of the initially infected individual(s), and write down a system of differential equations for the probability of extinction by time t that is numerically fast to evaluate compared to Monte Carlo simulation. We show that these quantities are all sensitive to the degree distribution; in particular, we confirm that the mean prevalence of infection depends on the first two moments of the degree distribution and the variance in prevalence depends on the first three moments of the degree distribution. In contrast to most existing analytic approaches, the accuracy of these results does not depend on having a large number of infectious individuals, meaning that in the large population limit they would be asymptotically exact even for one initial infectious individual.

  17. A flexible cure rate model with dependent censoring and a known cure threshold.

    PubMed

    Bernhardt, Paul W

    2016-11-10

    We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Dynamical initial-state model for relativistic heavy-ion collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Chun; Schenke, Bjorn

    We present a fully three-dimensional model providing initial conditions for energy and net-baryon density distributions in heavy ion collisions at arbitrary collision energy. The model includes the dynamical deceleration of participating nucleons or valence quarks, depending on the implementation. The duration of the deceleration continues until the string spanned between colliding participants is assumed to thermalize, which is either after a fixed proper time, or a fluctuating time depending on sampled final rapidities. Energy is deposited in space-time along the string, which in general will span a range of space-time rapidities and proper times. We study various observables obtained directly from the initial state model, including net-baryon rapidity distributions, 2-particle rapidity correlations, as well as the rapidity decorrelation of the transverse geometry. Their dependence on the model implementation and parameter values is investigated. Here, we also present the implementation of the model with 3+1 dimensional hydrodynamics, which involves the addition of source terms that deposit energy and net-baryon densities produced by the initial state model at proper times greater than the initial time for the hydrodynamic simulation.

  19. Dynamical initial-state model for relativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Shen, Chun; Schenke, Björn

    2018-02-01

    We present a fully three-dimensional model providing initial conditions for energy and net-baryon density distributions in heavy-ion collisions at arbitrary collision energy. The model includes the dynamical deceleration of participating nucleons or valence quarks, depending on the implementation. The duration of the deceleration continues until the string spanned between colliding participants is assumed to thermalize, which is either after a fixed proper time, or a fluctuating time depending on sampled final rapidities. Energy is deposited in space time along the string, which in general will span a range of space-time rapidities and proper times. We study various observables obtained directly from the initial-state model, including net-baryon rapidity distributions, two-particle rapidity correlations, as well as the rapidity decorrelation of the transverse geometry. Their dependence on the model implementation and parameter values is investigated. We also present the implementation of the model with 3+1-dimensional hydrodynamics, which involves the addition of source terms that deposit energy and net-baryon densities produced by the initial-state model at proper times greater than the initial time for the hydrodynamic simulation.

  20. Dynamical initial-state model for relativistic heavy-ion collisions

    DOE PAGES

    Shen, Chun; Schenke, Bjorn

    2018-02-15

    We present a fully three-dimensional model providing initial conditions for energy and net-baryon density distributions in heavy ion collisions at arbitrary collision energy. The model includes the dynamical deceleration of participating nucleons or valence quarks, depending on the implementation. The duration of the deceleration continues until the string spanned between colliding participants is assumed to thermalize, which is either after a fixed proper time, or a fluctuating time depending on sampled final rapidities. Energy is deposited in space-time along the string, which in general will span a range of space-time rapidities and proper times. We study various observables obtained directly from the initial state model, including net-baryon rapidity distributions, 2-particle rapidity correlations, as well as the rapidity decorrelation of the transverse geometry. Their dependence on the model implementation and parameter values is investigated. Here, we also present the implementation of the model with 3+1 dimensional hydrodynamics, which involves the addition of source terms that deposit energy and net-baryon densities produced by the initial state model at proper times greater than the initial time for the hydrodynamic simulation.

  1. Can a simple lumped parameter model simulate complex transit time distributions? Benchmarking experiments in a virtual watershed.

    NASA Astrophysics Data System (ADS)

    Wilusz, D. C.; Maxwell, R. M.; Buda, A. R.; Ball, W. P.; Harman, C. J.

    2016-12-01

    The catchment transit-time distribution (TTD) is the time-varying, probabilistic distribution of water travel times through a watershed. The TTD is increasingly recognized as a useful descriptor of a catchment's flow and transport processes. However, TTDs are temporally complex and cannot be observed directly at watershed scale. Estimates of TTDs depend on available environmental tracers (such as stable water isotopes) and an assumed model whose parameters can be inverted from tracer data. All tracers have limitations though, such as (typically) short periods of observation or non-conservative behavior. As a result, models that faithfully simulate tracer observations may nonetheless yield TTD estimates with significant errors at certain times and water ages, conditioned on the tracer data available and the model structure. Recent advances have shown that time-varying catchment TTDs can be parsimoniously modeled by the lumped parameter rank StorAge Selection (rSAS) model, in which an rSAS function relates the distribution of water ages in outflows to the composition of age-ranked water in storage. Like other TTD models, rSAS is calibrated and evaluated against environmental tracer data, and the relative influence of tracer-dependent and model-dependent error on its TTD estimates is poorly understood. The purpose of this study is to benchmark the ability of different rSAS formulations to simulate TTDs in a complex, synthetic watershed where the lumped model can be calibrated and directly compared to a virtually "true" TTD. This experimental design allows for isolation of model-dependent error from tracer-dependent error. The integrated hydrologic model ParFlow with SLIM-FAST particle tracking code is used to simulate the watershed and its true TTD. To add field intelligence, the ParFlow model is populated with over forty years of hydrometric and physiographic data from the WE-38 subwatershed of the USDA's Mahantango Creek experimental catchment in PA, USA. The results are intended to give practical insight into tradeoffs between rSAS model structure and skill, and define a new performance benchmark to which other transit time models can be compared.

  2. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost effective design values. However, previous cost-benefit based extreme flood estimation is based on stationary assumptions and analyze dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore time changing dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54-year observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between maximum probability of exceedance calculated from copula functions and marginal distributions. This study for the first time provides a new approach towards a better understanding of influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.

  3. Multiscale statistics of trajectories with applications to fluid particles in turbulence and football players

    NASA Astrophysics Data System (ADS)

    Schneider, Kai; Kadoch, Benjamin; Bos, Wouter

    2017-11-01

    The angle between two subsequent particle displacement increments is evaluated as a function of the time lag. The directional change of particles can thus be quantified at different scales and multiscale statistics can be performed. Flow-dependent and geometry-dependent features can be distinguished. The mean angle satisfies scaling behaviors for short time lags based on the smoothness of the trajectories. For intermediate time lags a power-law behavior can be observed for some turbulent flows, which can be related to Kolmogorov scaling. The long-time behavior depends on the confinement geometry of the flow. We show that the shape of the probability distribution function of the directional change can be well described by a Fisher distribution. Results for two-dimensional (direct and inverse cascade) and three-dimensional turbulence, with and without confinement, illustrate the properties of the proposed multiscale statistics. The presented Monte Carlo simulations allow disentangling of geometry-dependent and flow-independent features. Finally, we also analyze trajectories of football players, which are, in general, not randomly spaced on a field.
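
    The basic statistic is easy to compute for any sampled trajectory: for each time lag, form successive displacement increments and average the angle between them. A minimal sketch follows, with a synthetic smooth trajectory standing in for a tracked fluid particle or football player; it is not the authors' code.

      import numpy as np

      rng = np.random.default_rng(9)

      def mean_directional_change(traj, lags):
          """Mean angle between successive displacement increments of a 2D trajectory,
          evaluated as a function of the time lag (multiscale directional-change statistic)."""
          out = []
          for lag in lags:
              d = traj[lag:] - traj[:-lag]                  # displacement increments at this lag
              a, b = d[:-lag], d[lag:]                      # successive increments
              cosang = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
              out.append(np.mean(np.arccos(np.clip(cosang, -1.0, 1.0))))
          return np.array(out)

      # Illustrative trajectory: a smooth, slowly meandering path
      n = 20000
      velocity = np.cumsum(0.01 * rng.normal(size=(n, 2)), axis=0) + np.array([1.0, 0.0])
      traj = np.cumsum(velocity * 0.01, axis=0)

      lags = np.array([1, 2, 5, 10, 20, 50, 100, 200])
      for lag, angle in zip(lags, mean_directional_change(traj, lags)):
          print("lag = %4d   mean angle = %.3f rad" % (lag, angle))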

  4. Analysis of temporal decay of diffuse broadband sound fields in enclosures by decomposition in powers of an absorption parameter

    NASA Astrophysics Data System (ADS)

    Bliss, Donald; Franzoni, Linda; Rouse, Jerry; Manning, Ben

    2005-09-01

    An analysis method for time-dependent broadband diffuse sound fields in enclosures is described. Beginning with a formulation utilizing time-dependent broadband intensity boundary sources, the strength of these wall sources is expanded in a series in powers of an absorption parameter, thereby giving a separate boundary integral problem for each power. The temporal behavior is characterized by a Taylor expansion in the delay time for a source to influence an evaluation point. The lowest-order problem has a uniform interior field proportional to the reciprocal of the absorption parameter, as expected, and exhibits relatively slow exponential decay. The next-order problem gives a mean-square pressure distribution that is independent of the absorption parameter and is primarily responsible for the spatial variation of the reverberant field. This problem, which is driven by input sources and the lowest-order reverberant field, depends on source location and the spatial distribution of absorption. Additional problems proceed at integer powers of the absorption parameter, but are essentially higher-order corrections to the spatial variation. Temporal behavior is expressed in terms of an eigenvalue problem, with boundary source strength distributions expressed as eigenmodes. Solutions exhibit rapid short-time spatial redistribution followed by long-time decay of a predominant spatial mode.

  5. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  6. 3D glasma initial state for relativistic heavy ion collisions

    DOE PAGES

    Schenke, Björn; Schlichting, Sören

    2016-10-13

    We extend the impact-parameter-dependent Glasma model to three dimensions using explicit small-x evolution of the two incoming nuclear gluon distributions. We compute rapidity distributions of produced gluons and the early-time energy momentum tensor as a function of space-time rapidity and transverse coordinates. Finally, we study rapidity correlations and fluctuations of the initial geometry and multiplicity distributions and make comparisons to existing models for the three-dimensional initial state.

  7. Rotating field mass and velocity analyzer

    NASA Technical Reports Server (NTRS)

    Smith, Steven Joel (Inventor); Chutjian, Ara (Inventor)

    1998-01-01

    A rotating field mass and velocity analyzer having a cell with four walls, time dependent RF potentials that are applied to each wall, and a detector. The time dependent RF potentials create an RF field in the cell which effectively rotates within the cell. An ion beam is accelerated into the cell and the rotating RF field disperses the incident ion beam according to the mass-to-charge (m/e) ratio and velocity distribution present in the ion beam. The ions of the beam either collide with the ion detector or deflect away from the ion detector, depending on the m/e, RF amplitude, and RF frequency. The detector counts the incident ions to determine the m/e and velocity distribution in the ion beam.

  8. Equilibration in the time-dependent Hartree-Fock approach probed with the Wigner distribution function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loebl, N.; Maruhn, J. A.; Reinhard, P.-G.

    2011-09-15

    By calculating the Wigner distribution function in the reaction plane, we are able to probe the phase-space behavior in the time-dependent Hartree-Fock scheme during a heavy-ion collision in a consistent framework. Various expectation values of operators are calculated by evaluating the corresponding integrals over the Wigner function. In this approach, it is straightforward to define and analyze quantities even locally. We compare the Wigner distribution function with the smoothed Husimi distribution function. Different reaction scenarios are presented by analyzing central and noncentral ^{16}O + ^{16}O and ^{96}Zr + ^{132}Sn collisions. Although we observe strong dissipation in the time evolution of global observables, there is no evidence for complete equilibration in the local analysis of the Wigner function. Because the initial phase-space volumes of the fragments barely merge and mean values of the observables are conserved in fusion reactions over thousands of fm/c, we conclude that the time-dependent Hartree-Fock method provides a good description of the early stage of a heavy-ion collision but does not provide a mechanism to change the phase-space structure in a dramatic way necessary to obtain complete equilibration.
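
    For orientation, the sketch below computes the Wigner distribution function of a one-dimensional wave packet. The state, grid, and units (hbar = 1) are assumptions for illustration; the paper's calculation is carried out in the TDHF reaction plane, not for this toy state.

```python
import numpy as np

# Minimal 1D Wigner-function calculation: W(x, p) is the Fourier transform,
# over the shift y, of psi*(x + y) psi(x - y). Illustrative toy example only.

N = 256
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]
psi = np.exp(-(x - 1.0) ** 2 / 2.0 + 2.0j * x)   # displaced Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

W = np.zeros((N, N))
for i in range(N):
    corr = np.zeros(N, dtype=complex)
    for j in range(-N // 2, N // 2):
        if 0 <= i + j < N and 0 <= i - j < N:
            corr[j % N] = np.conj(psi[i + j]) * psi[i - j]   # psi*(x+y) psi(x-y)
    # FFT over the shift y yields the momentum dependence (FFT frequency order)
    W[i] = np.real(np.fft.fft(corr)) * dx / np.pi

dp = np.pi / (N * dx)
print("position marginal recovered:",
      np.allclose(W.sum(axis=1) * dp, np.abs(psi) ** 2, atol=1e-3))
```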

  9. Double ionization of neon in elliptically polarized femtosecond laser fields

    NASA Astrophysics Data System (ADS)

    Kang, HuiPeng; Henrichs, Kevin; Wang, YanLan; Hao, XiaoLei; Eckart, Sebastian; Kunitski, Maksim; Schöffler, Markus; Jahnke, Till; Liu, XiaoJun; Dörner, Reinhard

    2018-06-01

    We present a joint experimental and theoretical investigation of the correlated electron momentum spectra from strong-field double ionization of neon induced by elliptically polarized laser pulses. A significant asymmetry of the electron momentum distributions along the major polarization axis is reported. This asymmetry depends sensitively on the laser ellipticity. Using a three-dimensional semiclassical model, we attribute this asymmetry pattern to the ellipticity-dependent probability distributions of recollision time. Our work demonstrates that, by simply varying the ellipticity, the correlated electron emission can be two-dimensionally controlled and the recolliding electron trajectories can be steered on a subcycle time scale.

  10. Integrated Logistics Support Analysis of the International Space Station Alpha, Background and Summary of Mathematical Modeling and Failure Density Distributions Pertaining to Maintenance Time Dependent Parameters

    NASA Technical Reports Server (NTRS)

    Sepehry-Fard, F.; Coulthard, Maurice H.

    1995-01-01

    The process of predicting the values of maintenance time dependent variable parameters, such as mean time between failures (MTBF), over time must be one that will not in turn introduce uncontrolled deviation into the results of the ILS analysis, such as life cycle costs and spares calculations. A minor deviation in the values of the maintenance time dependent variable parameters such as MTBF over time will have a significant impact on the logistics resources demands, International Space Station availability, and maintenance support costs. There are two types of parameters in the logistics and maintenance world: (a) fixed and (b) variable. Fixed parameters, such as cost per man hour, are relatively easy to predict and forecast. These parameters normally follow a linear path and they do not change randomly. However, the variable parameters subject to the study in this report, such as MTBF, do not follow a linear path and they normally fall within the distribution curves which are discussed in this publication. The very challenging task then becomes the utilization of statistical techniques to accurately forecast the future non-linear time dependent variable arisings and events with a high confidence level. This, in turn, shall translate into tremendous cost savings and improved availability all around.

  11. The Evolution of the Seismic-Aseismic Transition During the Earthquake Cycle: Constraints from the Time-Dependent Depth Distribution of Aftershocks

    NASA Astrophysics Data System (ADS)

    Rolandone, F.; Bürgmann, R.; Nadeau, R.; Freed, A.

    2003-12-01

    We have demonstrated that in the aftermath of large earthquakes, the depth extent of aftershocks shows an immediate deepening from pre-earthquake levels, followed by a time-dependent postseismic shallowing. We use these seismic data to constrain the variation of the depth of the seismic-aseismic transition with time throughout the earthquake cycle. Most studies of the seismic-aseismic transition have focussed on the effect of temperature and/or lithology on the transition either from brittle faulting to viscous flow or from unstable to stable sliding. They have shown that the maximum depth of seismic activity is well correlated with the spatial variations of these two parameters. However, little has been done to examine how the maximum depth of seismogenic faulting varies locally, at the scale of a fault segment, during the course of the earthquake cycle. Geologic and laboratory observations indicate that the depth of the seismic-aseismic transition should vary with strain rate and thus change with time throughout the earthquake cycle. We quantify the time-dependent variations in the depth of seismicity on various strike-slip faults in California before and after large earthquakes. We specifically investigate (1) the deepening of the aftershocks relative to the background seismicity, (2) the time constant of the postseismic shallowing of the deepest earthquakes, and (3) the correlation of the time-dependent pattern with the coseismic slip distribution and the expected stress increase. Together with geodetic measurements, these seismological observations form the basis for developing more sophisticated models for the mechanical evolution of strike-slip shear zones during the earthquake cycle. We develop non-linear viscoelastic models, for which the brittle-ductile transition is not fixed, but varies with assumed temperature and calculated stress gradients. We use them to place constraints on strain rate at depth, on time-dependent rheology, and on the partitioning of deformation between brittle faulting and distributed viscous flow associated with the earthquake cycle.

  12. What Are the Shapes of Response Time Distributions in Visual Search?

    ERIC Educational Resources Information Center

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays…

  13. Three-dimensional particle-particle simulations: Dependence of relaxation time on plasma parameter

    NASA Astrophysics Data System (ADS)

    Zhao, Yinjian

    2018-05-01

    A particle-particle simulation model is applied to investigate the dependence of the relaxation time on the plasma parameter in a three-dimensional unmagnetized plasma. It is found that the relaxation time increases linearly as the plasma parameter increases within the range of the plasma parameter from 2 to 10; when the plasma parameter equals 2, the relaxation time is independent of the total number of particles, but when the plasma parameter equals 10, the relaxation time slightly increases as the total number of particles increases, which indicates the transition of a plasma from collisional to collisionless. In addition, ions with initial Maxwell-Boltzmann (MB) distribution are found to stay in the MB distribution during the whole simulation time, and the mass of ions does not significantly affect the relaxation time of electrons. This work also shows the feasibility of the particle-particle model when using GPU parallel computing techniques.

  14. Time-dependent Hartree-Fock approach to nuclear ``pasta'' at finite temperature

    NASA Astrophysics Data System (ADS)

    Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.

    2013-05-01

    We present simulations of neutron-rich matter at subnuclear densities, like supernova matter, with the time-dependent Hartree-Fock approximation at temperatures of several MeV. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi distributed plane waves the calculations reflect a reasonable approximation of astrophysical matter. This matter evolves into spherical, rod-like, and slab-like shapes and mixtures thereof. The simulations employ a full Skyrme interaction in a periodic three-dimensional grid. By an improved morphological analysis based on Minkowski functionals, all eight pasta shapes can be uniquely identified by the sign of only two valuations, namely the Euler characteristic and the integral mean curvature. In addition, we propose the variance in the cell density distribution as a measure to distinguish pasta matter from uniform matter.

  15. M≥7 Earthquake rupture forecast and time-dependent probability for the Sea of Marmara region, Turkey

    USGS Publications Warehouse

    Murru, Maura; Akinci, Aybige; Falcone, Guiseppe; Pucci, Stefano; Console, Rodolfo; Parsons, Thomas E.

    2016-01-01

    We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault-segmentation model. We also augment time-dependent Brownian Passage Time (BPT) probability with static Coulomb stress changes (ΔCFF) from interacting faults. We calculate Mw > 6.5 probability from 26 individual fault sources in the Marmara region. We also consider a multisegment rupture model that allows higher-magnitude ruptures over some segments of the Northern branch of the North Anatolian Fault Zone (NNAF) beneath the Marmara Sea. A total of 10 different Mw=7.0 to Mw=8.0 multisegment ruptures are combined with the other regional faults at rates that balance the overall moment accumulation. We use Gaussian random distributions to treat parameter uncertainties (e.g., aperiodicity, maximum expected magnitude, slip rate, and consequently mean recurrence time) of the statistical distributions associated with each fault source. We then estimate uncertainties of the 30-year probability values for the next characteristic event obtained from three different models (Poisson, BPT, and BPT+ΔCFF) using a Monte Carlo procedure. The Gerede fault segment located at the eastern end of the Marmara region shows the highest 30-yr probability, with a Poisson value of 29%, and a time-dependent interaction probability of 48%. We find an aggregated 30-yr Poisson probability of M >7.3 earthquakes at Istanbul of 35%, which increases to 47% if time dependence and stress transfer are considered. We calculate a 2-fold probability gain (ratio time-dependent to time-independent) on the southern strands of the North Anatolian Fault Zone.
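
    The Brownian Passage Time (inverse Gaussian) renewal ingredient of such time-dependent probabilities can be sketched as follows. The mean recurrence time, aperiodicity, and elapsed time below are assumed example values, not the Marmara fault parameters of the study, and the Coulomb stress interaction term is not included.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative 30-year conditional rupture probability from a BPT renewal model.

def bpt_pdf(t, mu, alpha):
    """BPT (inverse Gaussian) density with mean recurrence mu and aperiodicity alpha."""
    return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
        np.exp(-((t - mu) ** 2) / (2.0 * mu * alpha**2 * t))

def conditional_prob(mu, alpha, elapsed, horizon=30.0):
    """P(rupture within `horizon` yr | no rupture during the `elapsed` yr so far)."""
    num, _ = quad(bpt_pdf, elapsed, elapsed + horizon, args=(mu, alpha))
    den, _ = quad(bpt_pdf, elapsed, np.inf, args=(mu, alpha))
    return num / den

# e.g. mean recurrence 250 yr, aperiodicity 0.5, 200 yr since the last event
print(conditional_prob(mu=250.0, alpha=0.5, elapsed=200.0))
```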

  16. Photoassociation dynamics driven by a modulated two-color laser field

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Zhao, Ze-Yu; Xie, Ting; Wang, Gao-Ren; Huang, Yin; Cong, Shu-Lin

    2011-11-01

    Photoassociation (PA) dynamics of ultracold cesium atoms steered by a modulated two-color laser field E(t) = E0 f(t) cos(2πt/Tp - φ) cos(ωL t) is investigated theoretically by numerically solving the time-dependent Schrödinger equation. The PA dynamics is sensitive to the phase of the envelope (POE) φ and the period of the envelope Tp, which indicates that it can be controlled by varying the POE φ and the period Tp. Moreover, we introduce the time- and frequency-resolved spectrum to illustrate how the POE φ and the period Tp influence the intensity distribution of the modulated laser pulse and hence change the time-dependent population distribution of photoassociated molecules. When the Gaussian envelope contains only a few oscillations, the PA efficiency also depends on the POE φ. The modulated two-color laser field is achievable in current experiments based on laser mode-locking technology.

  17. Exact probability distribution functions for Parrondo's games

    NASA Astrophysics Data System (ADS)

    Zadourian, Rubina; Saakian, David B.; Klümper, Andreas

    2016-12-01

    We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.

  18. Exact probability distribution functions for Parrondo's games.

    PubMed

    Zadourian, Rubina; Saakian, David B; Klümper, Andreas

    2016-12-01

    We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
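
    The paper computes the exact distributions via Fourier transforms; the Monte Carlo sketch below only illustrates the capital-dependent Parrondo games with the standard textbook parameters (an assumption), showing that the individual games lose on average while the random mixture gains.

```python
import numpy as np

# Monte Carlo sketch of the capital-dependent Parrondo games.
# Game A: win with probability 1/2 - eps.
# Game B: win with 1/10 - eps if the capital is divisible by 3, else 3/4 - eps.

rng = np.random.default_rng(42)
eps = 0.005

def play(strategy, n_rounds=100, n_runs=20000):
    capital = np.zeros(n_runs, dtype=int)
    for _ in range(n_rounds):
        if strategy == "A":
            p = np.full(n_runs, 0.5 - eps)
        elif strategy == "B":                      # capital-dependent branch
            p = np.where(capital % 3 == 0, 0.10 - eps, 0.75 - eps)
        else:                                      # random mixture of A and B
            use_a = rng.random(n_runs) < 0.5
            p_b = np.where(capital % 3 == 0, 0.10 - eps, 0.75 - eps)
            p = np.where(use_a, 0.5 - eps, p_b)
        capital += np.where(rng.random(n_runs) < p, 1, -1)
    return capital

for s in ("A", "B", "AB"):
    print(s, "mean capital after 100 rounds:", play(s).mean())   # A, B lose; AB gains
```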

  19. Exploring super-Gaussianity toward robust information-theoretical time delay estimation.

    PubMed

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee

    2013-03-01

    Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most of the existing systems are based on the generalized cross-correlation method assuming gaussianity of the source. It has been shown that the distribution of speech, captured with far-field microphones, is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competitive background noise. This paper investigates the effect upon TDE when modeling the source signal with different speech-based distributions. An information theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of Gaussian distributed source has been replaced by that of generalized Gaussian distribution that allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
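
    The closed-form differential entropy of the standard univariate generalized Gaussian can be written down directly and checked against scipy.stats.gennorm, as in the sketch below. The multivariate expressions derived in the paper are not reproduced here, and the shape values are arbitrary examples.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import gennorm

# The univariate generalized Gaussian p(x) ∝ exp(-(|x|/alpha)^beta) has the
# closed-form differential entropy h = 1/beta + ln(2*alpha*Gamma(1/beta)/beta).

def gg_entropy(beta, alpha=1.0):
    """Differential entropy (nats) of the generalized Gaussian distribution."""
    return 1.0 / beta + np.log(2.0 * alpha / beta) + gammaln(1.0 / beta)

for beta in (0.5, 1.0, 2.0):   # heavy-tailed, Laplacian, and Gaussian shapes
    print(beta, gg_entropy(beta), gennorm(beta).entropy())
```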

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K. S.; Nakae, L. F.; Prasad, M. K.

    Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities for time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and from these data the counting distributions.

  1. The statistics of relativistic electron pitch angle distribution in the Earth's radiation belt based on the Van Allen Probes measurements

    NASA Astrophysics Data System (ADS)

    Zhao, H.; Freidel, R. H. W.; Chen, Y.; Henderson, M. G.; Kanekal, S. G.; Baker, D. N.; Spence, H. E.; Reeves, G. D.

    2015-12-01

    The relativistic electron pitch angle distribution (PAD) is an important characteristic of radiation belt electrons, which can give information on source or loss processes in a specific region. Using data from the MagEIS and REPT instruments onboard the Van Allen Probes, a statistical survey of relativistic electron PADs is performed. By fitting relativistic electron PADs to Legendre polynomials, an empirical model of PADs as a function of L (from 1.4 to 6), MLT, electron energy (~100 keV - 5 MeV), and geomagnetic activity is developed and many intriguing features are found. In the outer radiation belt, an unexpected dawn/dusk asymmetry of ultra-relativistic electrons is found during quiet times, with the asymmetry becoming stronger at higher energies and at higher L shells. This may indicate the existence of physical processes acting on the relativistic electrons on timescales of the order of the drift period, or be a signature of the partial ring current. In the inner belt and slot region, hundreds-of-keV pitch angle distributions with minima at 90° are shown to be persistent in the inner belt and to appear in the slot region during storm times. The model also shows a clear energy dependence and L shell dependence of the 90°-minimum pitch angle distribution. On the other hand, head-and-shoulder pitch angle distributions are found during quiet times in the slot region, and the energy, L shell, and geomagnetic activity dependence of those PADs is consistent with wave-particle interactions caused by hiss waves.

  2. How to Characterize the Reliability of Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    NASA Technical Reports Server (NTRS)

    Liu, David (Donhang)

    2015-01-01

    The reliability of an MLCC device is the product of a time-dependent part and a time-independent part: 1) the time-dependent part is a statistical distribution; 2) the time-independent part is the reliability at t=0, the initial reliability. Initial reliability depends only on how a BME MLCC is designed and processed. Similar to the way the minimum dielectric thickness ensured the long-term reliability of a PME MLCC, the initial reliability also ensures the long-term reliability of a BME MLCC. This presentation shows new discoveries regarding commonalities and differences between PME and BME capacitor technologies.

  3. How to Characterize the Reliability of Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2015-01-01

    The reliability of an MLCC device is the product of a time-dependent part and a time-independent part: 1) the time-dependent part is a statistical distribution; 2) the time-independent part is the reliability at t=0, the initial reliability. Initial reliability depends only on how a BME MLCC is designed and processed. Similar to the way the minimum dielectric thickness ensured the long-term reliability of a PME MLCC, the initial reliability also ensures the long-term reliability of a BME MLCC. This presentation shows new discoveries regarding commonalities and differences between PME and BME capacitor technologies.

  4. Directional solidification of a planar interface in the presence of a time-dependent electric current

    NASA Technical Reports Server (NTRS)

    Brush, L. N.; Coriell, S. R.; Mcfadden, G. B.

    1990-01-01

    Directional solidification of pure materials and binary alloys with a planar crystal-metal interface in the presence of a time-dependent electric current is considered. For a variety of time-dependent currents, the temperature fields and the interface velocity as functions of time are presented for indium antimonide and bismuth and for the binary alloys germanium-gallium and tin-bismuth. For the alloys, the solid composition is calculated as a function of position. Quantitative predictions are made of the effect of an electrical pulse on the solute distribution in the solidified material.

  5. Time-Dependent Hartree-Fock Approach to Nuclear Pasta at Finite Temperature

    NASA Astrophysics Data System (ADS)

    Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.

    2013-03-01

    We present simulations of neutron-rich matter at subnuclear densities, like supernova matter, with the time-dependent Hartree-Fock approximation at temperatures of several MeV. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi distributed plane waves the calculations reflect a reasonable approximation of astrophysical matter. This matter evolves into spherical, rod-like, and slab-like shapes and mixtures thereof. The simulations employ a full Skyrme interaction in a periodic three-dimensional grid. By an improved morphological analysis based on Minkowski functionals, all eight pasta shapes can be uniquely identified by the sign of only two valuations, namely the Euler characteristic and the integral mean curvature.

  6. Distribution of tunnelling times for quantum electron transport.

    PubMed

    Rudge, Samuel L; Kosov, Daniel S

    2016-03-28

    In electron transport, the tunnelling time is the time taken for an electron to tunnel out of a system after it has tunnelled in. We define the tunnelling time distribution for quantum processes in a dissipative environment and develop a practical approach for calculating it, where the environment is described by the general Markovian master equation. We illustrate the theory by using the rate equation to compute the tunnelling time distribution for electron transport through a molecular junction. The tunnelling time distribution is exponential, which indicates that Markovian quantum tunnelling is a Poissonian statistical process. The tunnelling time distribution is used not only to study the quantum statistics of tunnelling along the average electric current but also to analyse extreme quantum events where an electron jumps against the applied voltage bias. The average tunnelling time shows distinctly different temperature dependence for p- and n-type molecular junctions and therefore provides a sensitive tool to probe the alignment of molecular orbitals relative to the electrode Fermi energy.

  7. Spatially-Dependent Modelling of Pulsar Wind Nebula G0.9+0.1

    NASA Astrophysics Data System (ADS)

    van Rensburg, C.; Krüger, P. P.; Venter, C.

    2018-03-01

    We present results from a leptonic emission code that models the spectral energy distribution of a pulsar wind nebula by solving a Fokker-Planck-type transport equation and calculating inverse Compton and synchrotron emissivities. We have created this time-dependent, multi-zone model to investigate changes in the particle spectrum as they traverse the pulsar wind nebula, by considering a time and spatially-dependent B-field, spatially-dependent bulk particle speed implying convection and adiabatic losses, diffusion, as well as radiative losses. Our code predicts the radiation spectrum at different positions in the nebula, yielding the surface brightness versus radius and the nebular size as function of energy. We compare our new model against more basic models using the observed spectrum of pulsar wind nebula G0.9+0.1, incorporating data from H.E.S.S. as well as radio and X-ray experiments. We show that simultaneously fitting the spectral energy distribution and the energy-dependent source size leads to more stringent constraints on several model parameters.

  8. Spatially dependent modelling of pulsar wind nebula G0.9+0.1

    NASA Astrophysics Data System (ADS)

    van Rensburg, C.; Krüger, P. P.; Venter, C.

    2018-07-01

    We present results from a leptonic emission code that models the spectral energy distribution of a pulsar wind nebula by solving a Fokker-Planck-type transport equation and calculating inverse Compton and synchrotron emissivities. We have created this time-dependent, multizone model to investigate changes in the particle spectrum as they traverse the pulsar wind nebula, by considering a time and spatially dependent B-field, spatially dependent bulk particle speed implying convection and adiabatic losses, diffusion, as well as radiative losses. Our code predicts the radiation spectrum at different positions in the nebula, yielding the surface brightness versus radius and the nebular size as function of energy. We compare our new model against more basic models using the observed spectrum of pulsar wind nebula G0.9+0.1, incorporating data from H.E.S.S. as well as radio and X-ray experiments. We show that simultaneously fitting the spectral energy distribution and the energy-dependent source size leads to more stringent constraints on several model parameters.

  9. Does Data Distribution Change as a Function of Motor Skill Practice?

    ERIC Educational Resources Information Center

    Yan, Jin H.; Rodriguez, Ward A.; Thomas, Jerry R.

    2005-01-01

    The purpose of this study was to determine whether data distribution changes as a result of motor skill practice or learning. The data on three dependent measures (movement time; MT), percentage of movement time in primary submovement (PSB), and movement jerk (JEK) were collected at baseline and practice Blocks 1 to 5. Sixty 6-year-olds,…

  10. 3DFEMWATER: A three-dimensional finite element model of water flow through saturated-unsaturated media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, G.T.

    1987-08-01

    The 3DFEMWATER model is designed to treat heterogeneous and anisotropic media consisting of as many geologic formations as desired, consider both distributed and point sources/sinks that are spatially and temporally dependent, accept the prescribed initial conditions or obtain them by simulating a steady state version of the system under consideration, deal with a transient head distributed over the Dirichlet boundary, handle time-dependent fluxes due to pressure gradient varying along the Neumann boundary, treat time-dependent total fluxes distributed over the Cauchy boundary, automatically determine variable boundary conditions of evaporation, infiltration, or seepage on the soil-air interface, include the off-diagonal hydraulic conductivity components in the modified Richards equation for dealing with cases when the coordinate system does not coincide with the principal directions of the hydraulic conductivity tensor, give three options for estimating the nonlinear matrix, include two options (successive subregion block iterations and successive point iterations) for solving the linearized matrix equations, automatically reset time step size when boundary conditions or source/sinks change abruptly, and check the mass balance computation over the entire region for every time step. The model is verified with analytical solutions or other numerical models for three examples.

  11. Global distribution of neutral wind shear associated with sporadic E layers derived from GAIA

    NASA Astrophysics Data System (ADS)

    Shinagawa, H.; Miyoshi, Y.; Jin, H.; Fujiwara, H.

    2017-04-01

    There have been a number of papers reporting that the statistical occurrence rate of the sporadic E (Es) layer depends not only on the local time and season but also on the geographical location, implying that geographical and seasonal dependence in vertical neutral wind shear is one of the factors responsible for the geographical and seasonal dependence in the Es layer occurrence rate. To study the role of neutral wind shear in the global distribution of the Es layer occurrence rate, we employ a self-consistent atmosphere-ionosphere coupled model called GAIA (Ground-to-topside model of Atmosphere and Ionosphere for Aeronomy), which incorporates meteorological reanalysis data in the lower atmosphere. The average distribution of neutral wind shear in the lower thermosphere is derived for the June-August and December-February periods, and the global distribution of vertical ion convergence is obtained to estimate the Es layer occurrence rate. It is found that the local and seasonal dependence of neutral wind shear is an important factor in determining the dependence of the Es layer occurrence rate on geographical distribution and seasonal variation. However, there are uncertainties in the simulated vertical neutral wind shears, which have larger scales than the observed wind shear scales. Furthermore, other processes such as localization of the magnetic field distribution, the background metallic ion distribution, ionospheric electric fields, and chemical processes of metallic ions are also likely to make an important contribution to the geographical distribution and seasonal variation of the Es occurrence rate.

  12. Coercivity mechanisms and thermal stability of thin film magnetic recording media

    NASA Astrophysics Data System (ADS)

    Yang, Cheng

    1999-09-01

    Coercivity mechanisms and thermal stability of magnetic recording media were studied. It was found that magnetization reversal mainly occurs by a nucleation mechanism. A correlation was established between the c/a ratio of the Co HCP structure and other process parameters that are thought to be the dominant factors in determining the anisotropy, and therefore the coercivity, of Co-based thin film magnetic recording media. Time decay and switching of the magnetization in thin film magnetic recording media depend on the grain size distribution and the easy-axis orientation distribution according to the proposed two-energy-level model. Relaxation time is the most fundamental parameter that determines the time decay performance of the magnetic recording media. An algorithm was proposed to calculate its distribution directly from the experimental data without any presumption. It was found for the first time that the distribution of relaxation time takes the form of a Weibull distribution.

  13. A complete analytical solution of the Fokker-Planck and balance equations for nucleation and growth of crystals

    NASA Astrophysics Data System (ADS)

    Makoveeva, Eugenya V.; Alexandrov, Dmitri V.

    2018-01-01

    This article is concerned with a new analytical description of the nucleation and growth of crystals in a metastable mushy layer (supercooled liquid or supersaturated solution) at the intermediate stage of a phase transition. The model under consideration, consisting of a non-stationary integro-differential system of governing equations for the distribution function and the metastability level, is solved analytically by means of the saddle-point technique for the Laplace-type integral in the case of arbitrary nucleation kinetics and time-dependent heat or mass sources in the balance equation. We demonstrate that the time-dependent distribution function approaches the stationary profile in the course of time. This article is part of the theme issue 'From atomistic interfaces to dendritic patterns'.

  14. Analytical solution of the transient temperature profile in gain medium of passively Q-switched microchip laser.

    PubMed

    Han, Xiahui; Li, Jianlang

    2014-11-01

    The transient temperature evolution in the gain medium of a continuous wave (CW) end-pumped passively Q-switched microchip (PQSM) laser is analyzed. By approximating the time-dependent population inversion density as a sawtooth function of time and treating the time-dependent pump absorption of a CW end-pumped PQSM laser as the superposition of an infinite series of short pumping pulses, the analytical expressions of transient temperature evolution and distribution in the gain medium for four- and three-level laser systems, respectively, are given. These analytical solutions are applied to evaluate the transient temperature evolution and distribution in the gain medium of CW end-pumped PQSM Nd:YAG and Yb:YAG lasers.

  15. Experimental Quantum-Walk Revival with a Time-Dependent Coin

    NASA Astrophysics Data System (ADS)

    Xue, P.; Zhang, R.; Qin, H.; Zhan, X.; Bian, Z. H.; Li, J.; Sanders, Barry C.

    2015-04-01

    We demonstrate a quantum walk with time-dependent coin bias. With this technique we realize an experimental single-photon one-dimensional quantum walk with a linearly ramped time-dependent coin flip operation and thereby demonstrate two periodic revivals of the walker distribution. In our beam-displacer interferometer, the walk corresponds to movement between discretely separated transverse modes of the field serving as lattice sites, and the time-dependent coin flip is effected by implementing a different angle between the optical axis of half-wave plate and the light propagation at each step. Each of the quantum-walk steps required to realize a revival comprises two sequential orthogonal coin-flip operators, with one coin having constant bias and the other coin having a time-dependent ramped coin bias, followed by a conditional translation of the walker.
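
    A minimal simulation of a one-dimensional discrete-time quantum walk with a time-dependent coin is sketched below. The specific linear ramp of the coin angle, the initial state, and the number of steps are assumptions made only to show the simulation structure; they are not the experimental settings that produce the reported revivals.

```python
import numpy as np

# 1D discrete-time quantum walk with a linearly ramped, time-dependent coin.

n_steps = 50
n_sites = 2 * n_steps + 1
psi = np.zeros((n_sites, 2), dtype=complex)      # psi[x, c]: site x, coin state c
psi[n_steps, 0] = 1.0 / np.sqrt(2)
psi[n_steps, 1] = 1.0j / np.sqrt(2)              # symmetric initial coin state

def coin(theta):
    """Unitary (real orthogonal) coin-flip operator with bias angle theta."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

for t in range(n_steps):
    theta_t = np.pi / 4 + t * np.pi / (2 * n_steps)   # linearly ramped coin bias
    psi = psi @ coin(theta_t).T                       # apply the coin at every site
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]                      # coin state 0 steps right
    shifted[:-1, 1] = psi[1:, 1]                      # coin state 1 steps left
    psi = shifted

prob = (np.abs(psi) ** 2).sum(axis=1)                 # walker position distribution
pos = np.arange(n_sites) - n_steps
spread = np.sqrt((prob * pos**2).sum() - (prob * pos).sum() ** 2)
print("norm:", prob.sum(), " position spread:", spread)
```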

  16. Local spectrum analysis of field propagation in an anisotropic medium. Part II. Time-dependent fields.

    PubMed

    Tinkelman, Igor; Melamed, Timor

    2005-06-01

    In Part I of this two-part investigation [J. Opt. Soc. Am. A 22, 1200 (2005)], we presented a theory for phase-space propagation of time-harmonic electromagnetic fields in an anisotropic medium characterized by a generic wave-number profile. In this Part II, these investigations are extended to transient fields, setting a general analytical framework for local analysis and modeling of radiation from time-dependent extended-source distributions. In this formulation the field is expressed as a superposition of pulsed-beam propagators that emanate from all space-time points in the source domain and in all directions. Using time-dependent quadratic-Lorentzian windows, we represent the field by a phase-space spectral distribution in which the propagating elements are pulsed beams, which are formulated by a transient plane-wave spectrum over the extended-source plane. By applying saddle-point asymptotics, we extract the beam phenomenology in the anisotropic environment resulting from short-pulsed processing. Finally, the general results are applied to the special case of uniaxial crystal and compared with a reference solution.

  17. Exploration properties of biased evanescent random walkers on a one-dimensional lattice

    NASA Astrophysics Data System (ADS)

    Esguerra, Jose Perico; Reyes, Jelian

    2017-08-01

    We investigate the combined effects of bias and evanescence on the characteristics of random walks on a one-dimensional lattice. We calculate the time-dependent return probability, eventual return probability, conditional mean return time, and the time-dependent mean number of visited sites of biased immortal and evanescent discrete-time random walkers on a one-dimensional lattice. We then extend the calculations to the case of a continuous-time step-coupled biased evanescent random walk on a one-dimensional lattice with an exponential waiting time distribution.
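
    A Monte Carlo sketch of such a biased, evanescent walker is given below: at each step the walker survives with probability q, then steps right with probability p and left with probability 1 - p. The estimated quantities (eventual return probability and mean number of visited sites) mirror those studied analytically; the values of p, q, the time horizon, and the sample size are assumptions for illustration.

```python
import numpy as np

# Biased evanescent (mortal) discrete-time random walker on a 1D lattice.

rng = np.random.default_rng(3)
p, q, t_max, n_walkers = 0.55, 0.98, 100, 10000

returned = 0                              # walkers that ever revisit the origin
visited_sum = np.zeros(t_max + 1)         # accumulated count of distinct visited sites

for _ in range(n_walkers):
    pos, visited, alive, came_back = 0, {0}, True, False
    for t in range(1, t_max + 1):
        if alive and rng.random() > q:    # walker evanesces at this step
            alive = False
        if alive:
            pos += 1 if rng.random() < p else -1
            visited.add(pos)
            if pos == 0:
                came_back = True
        visited_sum[t] += len(visited)    # count is frozen after evanescence
    returned += came_back

print("estimated eventual return probability:", returned / n_walkers)
print("mean number of visited sites by t =", t_max, ":", visited_sum[t_max] / n_walkers)
```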

  18. Survival curve estimation with dependent left truncated data using Cox's model.

    PubMed

    Mackenzie, Todd

    2012-10-19

    The Kaplan-Meier and closely related Lynden-Bell estimators are used to provide nonparametric estimation of the distribution of a left-truncated random variable. These estimators assume that the left-truncation variable is independent of the time-to-event. This paper proposes a semiparametric method for estimating the marginal distribution of the time-to-event that does not require independence. It models the conditional distribution of the time-to-event given the truncation variable using Cox's model for left truncated data, and uses inverse probability weighting. We report the results of simulations and illustrate the method using a survival study.

  19. Understanding the amplitudes of noise correlation measurements

    USGS Publications Warehouse

    Tsai, Victor C.

    2011-01-01

    Cross correlation of ambient seismic noise is known to result in time series from which station-station travel-time measurements can be made. Part of the reason that these cross-correlation travel-time measurements are reliable is that there exists a theoretical framework that quantifies how these travel times depend on the features of the ambient noise. However, corresponding theoretical results do not currently exist to describe how the amplitudes of the cross correlation depend on such features. For example, currently it is not possible to take a given distribution of noise sources and calculate the cross correlation amplitudes one would expect from such a distribution. Here, we provide a ray-theoretical framework for calculating cross correlations. This framework differs from previous work in that it explicitly accounts for attenuation as well as the spatial distribution of sources and therefore can address the issue of quantifying amplitudes in noise correlation measurements. After introducing the general framework, we apply it to two specific problems. First, we show that we can quantify the amplitudes of coherency measurements, and find that the decay of coherency with station-station spacing depends crucially on the distribution of noise sources. We suggest that researchers interested in performing attenuation measurements from noise coherency should first determine how the dominant sources of noise are distributed. Second, we show that we can quantify the signal-to-noise ratio of noise correlations more precisely than previous work, and that these signal-to-noise ratios can be estimated for given situations prior to the deployment of seismometers. It is expected that there are applications of the theoretical framework beyond the two specific cases considered, but these applications await future work.

  20. Fast simulation of reconstructed phylogenies under global time-dependent birth-death processes.

    PubMed

    Höhna, Sebastian

    2013-06-01

    Diversification rates and patterns may be inferred from reconstructed phylogenies. Both the time-dependent and the diversity-dependent birth-death process can produce the same observed patterns of diversity over time. To develop and test new models describing the macro-evolutionary process of diversification, generic and fast algorithms to simulate under these models are necessary. Simulations are not only important for testing and developing models but play an influential role in the assessment of model fit. In the present article, I consider as the model a global time-dependent birth-death process where each species has the same rates but rates may vary over time. For this model, I derive the likelihood of the speciation times from a reconstructed phylogenetic tree and show that each speciation event is independent and identically distributed. This fact can be used to simulate efficiently reconstructed phylogenetic trees when conditioning on the number of species, the time of the process or both. I show the usability of the simulation by approximating the posterior predictive distribution of a birth-death process with decreasing diversification rates applied on a published bird phylogeny (family Cettiidae). The methods described in this manuscript are implemented in the R package TESS, available from the repository CRAN (http://cran.r-project.org/web/packages/TESS/). Supplementary data are available at Bioinformatics online.

  1. Quantitative assessment of building fire risk to life safety.

    PubMed

    Guanquan, Chu; Jinhua, Sun

    2008-06-01

    This article presents a quantitative risk assessment framework for evaluating fire risk to life safety. Fire risk is divided into two parts: probability and corresponding consequence of every fire scenario. The time-dependent event tree technique is used to analyze probable fire scenarios based on the effect of fire protection systems on fire spread and smoke movement. To obtain the variation of occurrence probability with time, Markov chain is combined with a time-dependent event tree for stochastic analysis on the occurrence probability of fire scenarios. To obtain consequences of every fire scenario, some uncertainties are considered in the risk analysis process. When calculating the onset time to untenable conditions, a range of fires are designed based on different fire growth rates, after which uncertainty of onset time to untenable conditions can be characterized by probability distribution. When calculating occupant evacuation time, occupant premovement time is considered as a probability distribution. Consequences of a fire scenario can be evaluated according to probability distribution of evacuation time and onset time of untenable conditions. Then, fire risk to life safety can be evaluated based on occurrence probability and consequences of every fire scenario. To express the risk assessment method in detail, a commercial building is presented as a case study. A discussion compares the assessment result of the case study with fire statistics.
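
    The sketch below propagates fire-scenario occurrence probabilities in time with a Markov chain, in the spirit of combining a time-dependent event tree with Markov transitions. The three states and the per-minute transition matrix are assumed values for illustration only, not calibrated to the case study.

```python
import numpy as np

# Time evolution of scenario occurrence probabilities with a Markov chain.

states = ["fire controlled", "fire spreading", "untenable conditions"]
P = np.array([[0.96, 0.04, 0.00],      # row: from-state, column: to-state
              [0.05, 0.85, 0.10],
              [0.00, 0.00, 1.00]])     # untenable conditions are absorbing

prob = np.array([1.0, 0.0, 0.0])       # fire starts in the "controlled" state
for minute in range(1, 31):
    prob = prob @ P                    # occurrence probabilities after each minute
    if minute % 10 == 0:
        print(minute, dict(zip(states, np.round(prob, 3))))
```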

  2. Theory of electromagnetic cyclotron wave growth in a time-varying magnetoplasma

    NASA Technical Reports Server (NTRS)

    Gail, William B.

    1990-01-01

    The effect of a time-dependent perturbation in the magnetoplasma on the wave and particle populations is investigated using the Kennel-Petchek (1966) approach. Perturbations in the cold plasma density, energetic particle distribution, and resonance condition are calculated on the basis of the ideal MHD assumption given an arbitrary compressional magnetic field perturbation. An equation is derived describing the time-dependent growth rate for parallel propagating electromagnetic cyclotron waves in a time-varying magnetoplasma with perturbations superimposed on an equilibrium configuration.

  3. Waiting time distribution revealing the internal spin dynamics in a double quantum dot

    NASA Astrophysics Data System (ADS)

    Ptaszyński, Krzysztof

    2017-07-01

    Waiting time distribution and the zero-frequency full counting statistics of unidirectional electron transport through a double quantum dot molecule attached to spin-polarized leads are analyzed using the quantum master equation. The waiting time distribution exhibits a nontrivial dependence on the value of the exchange coupling between the dots and the gradient of the applied magnetic field, which reveals the oscillations between the spin states of the molecule. The zero-frequency full counting statistics, on the other hand, is independent of the aforementioned quantities, thus giving no insight into the internal dynamics. The fact that the waiting time distribution and the zero-frequency full counting statistics give a nonequivalent information is associated with two factors. Firstly, it can be explained by the sensitivity to different timescales of the dynamics of the system. Secondly, it is associated with the presence of the correlation between subsequent waiting times, which makes the renewal theory, relating the full counting statistics and the waiting time distribution, no longer applicable. The study highlights the particular usefulness of the waiting time distribution for the analysis of the internal dynamics of mesoscopic systems.

  4. Delay-time distribution in the scattering of time-narrow wave packets (II)—quantum graphs

    NASA Astrophysics Data System (ADS)

    Smilansky, Uzy; Schanz, Holger

    2018-02-01

    We apply the framework developed in the preceding paper in this series (Smilansky 2017 J. Phys. A: Math. Theor. 50 215301) to compute the time-delay distribution in the scattering of ultra short radio frequency pulses on complex networks of transmission lines which are modeled by metric (quantum) graphs. We consider wave packets which are centered at high wave number and comprise many energy levels. In the limit of pulses of very short duration we compute upper and lower bounds to the actual time-delay distribution of the radiation emerging from the network using a simplified problem where time is replaced by the discrete count of vertex-scattering events. The classical limit of the time-delay distribution is also discussed and we show that for finite networks it decays exponentially, with a decay constant which depends on the graph connectivity and the distribution of its edge lengths. We illustrate and apply our theory to a simple model graph where an algebraic decay of the quantum time-delay distribution is established.

  5. Dependency distance distribution - from the perspective of genre variation. Comment on "Dependency distance: a new perspective on syntactic patterns in natural languages" by Haitao Liu et al.

    NASA Astrophysics Data System (ADS)

    Wang, Yaqin

    2017-07-01

    Language can be regarded as a system where different components fit together, according to Saussure [1]. Likewise, the central axiom of synergetic linguistics is that language is a self-organized and self-adapting system. One of its main concerns is to view language as "a psycho-social phenomenon and a biological-cognitive one at the same time" [2, p. 760]. Based on this assumption, Liu, Xu and Liang propose a novel approach, i.e., dependency distance, to study the general tendency hidden beneath diverse human languages [3]. As the authors describe in sections 1-3, variations within and between human languages all show a similar tendency towards dependency distance minimization (DDM). In sections 4-5, they introduce certain syntactic patterns related to both short and long dependency distances. However, the effect of genre seems to have been given insufficient attention by the authors. One study suggests that different distributions of closeness and degree centralities across genres can broaden the understanding of dependency distance distribution [4]. Another shows that different genres have different parameters when modeling the dependency distance distribution [5]. Further research on genre variation, therefore, can provide additional support for this issue.

  6. Time-sliced perturbation theory for large scale structure I: general formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blas, Diego; Garny, Mathias; Sibiryakov, Sergey

    2016-07-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.

  7. Time Variations in Forecasts and Occurrences of Large Solar Energetic Particle Events

    NASA Astrophysics Data System (ADS)

    Kahler, S. W.

    2015-12-01

    The onsets and development of large solar energetic (E > 10 MeV) particle (SEP) events have been characterized in many studies. The statistics of SEP event onset delay times from associated solar flares and coronal mass ejections (CMEs), which depend on solar source longitudes, can be used to provide better predictions of whether a SEP event will occur following a large flare or fast CME. In addition, size distributions of peak SEP event intensities provide a means for a probabilistic forecast of peak intensities attained in observed SEP increases. SEP event peak intensities have been compared with their rise and decay times for insight into the acceleration and transport processes. These two time scales are generally treated as independent parameters describing the development of a SEP event, but we can invoke an alternative two-parameter description based on the assumption that decay times exceed rise times for all events. These two parameters, from the well known Weibull distribution, provide an event description in terms of its basic shape and duration. We apply this distribution to several large SEP events and ask what the characteristic parameters and their dependence on source longitudes can tell us about the origins of these important events.
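
    As a rough illustration of the two-parameter description mentioned above, the sketch below uses a Weibull-shaped intensity-time profile and reads off rise and decay times from it. The shape and scale values and the half-maximum decay metric are assumptions for illustration, not fits to observed SEP events.

```python
import numpy as np

# Weibull-shaped SEP event time profile described by two parameters
# (dimensionless shape and a duration-like scale).

def weibull_profile(t, shape, scale):
    """Weibull density used as a normalized intensity-time profile."""
    return (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)

shape, scale = 1.8, 12.0                        # assumed shape and duration (hours)
t = np.linspace(0.01, 96.0, 4000)               # hours after onset
profile = weibull_profile(t, shape, scale)

t_peak = t[np.argmax(profile)]                  # rise time (onset to maximum)
half = 0.5 * profile.max()
t_half_decay = t[np.where(profile >= half)[0][-1]] - t_peak
print(f"rise time ~ {t_peak:.1f} h, decay to half of maximum ~ {t_half_decay:.1f} h")
```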

  8. Single-diffractive production of dijets within the kt-factorization approach

    NASA Astrophysics Data System (ADS)

    Łuszczak, Marta; Maciuła, Rafał; Szczurek, Antoni; Babiarz, Izabela

    2017-09-01

    We discuss single-diffractive production of dijets. The cross section is calculated within the resolved Pomeron picture, for the first time in the kt-factorization approach, neglecting the transverse momentum of the Pomeron. We use Kimber-Martin-Ryskin unintegrated parton (gluon, quark, antiquark) distributions in both the proton and in the Pomeron or subleading Reggeon. The unintegrated parton distributions are calculated from the conventional mmht2014nlo parton distribution functions in the proton and from the H1 Collaboration diffractive parton distribution functions used previously in the analysis of the diffractive structure function and dijets at HERA. For comparison, we present results of calculations performed within the collinear-factorization approach. Our results resemble those obtained in the next-to-leading-order approach. The calculation is (must be) supplemented by the so-called gap survival factor, which may, in general, depend on kinematical variables. We try to describe the existing data from the Tevatron and make detailed predictions for possible LHC measurements. Several differential distributions are calculated. The Ē_T, η̄ and x_p̄ distributions are compared with the Tevatron data. A reasonable agreement is obtained for the first two distributions. The last one requires introducing a gap survival factor which depends on kinematical variables. We discuss how this phenomenological dependence on one kinematical variable may influence the dependence on other variables such as Ē_T and η̄. Several distributions for the LHC are shown.

  9. Uncertainty analysis in seismic tomography

    NASA Astrophysics Data System (ADS)

    Owoc, Bartosz; Majdański, Mariusz

    2017-04-01

    The velocity field obtained from seismic travel-time tomography depends on several factors, such as regularization, the inversion path, and model parameterization. The result also strongly depends on the initial velocity model and on the precision of travel-time picking. In this research we test the dependence on the starting model in layered tomography and compare it with the effect of picking precision. Moreover, in our analysis the uncertainty distribution for manual travel-time picking is asymmetric, which shifts the results toward faster velocities. For the calculations we use the JIVE3D travel-time tomographic code. We use data from geo-engineering and industrial-scale investigations collected by our team from IG PAS.

  10. Effects of Assuming Independent Component Failure Times, if They Are Actually Dependent, in a Series System.

    DTIC Science & Technology

    1984-10-26

    test for independence; product life estimator; dependent risks; ... the failure times associated with different failure modes when we really should use a bivariate (or multivariate) distribution, then what is the ... dependencies may be present, then what is the magnitude of the estimation error? The third specific aim will attempt to obtain bounds on the ...

  11. Unified halo-independent formalism from convex hulls for direct dark matter searches

    NASA Astrophysics Data System (ADS)

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2017-12-01

    Using the Fenchel-Eggleston theorem for convex hulls (an extension of the Caratheodory theorem), we prove that any likelihood can be maximized by either (1) a dark matter speed distribution F(v) in Earth's frame or (2) a Galactic velocity distribution f_gal(u), consisting of a sum of delta functions. The former case applies only to time-averaged rate measurements and the maximum number of delta functions is (N − 1), where N is the total number of data entries. The second case applies to any harmonic expansion coefficient of the time-dependent rate and the maximum number of terms is N. Using time-averaged rates, the aforementioned form of F(v) results in a piecewise constant unmodulated halo function η̃0^BF(vmin) (which is an integral of the speed distribution) with at most (N − 1) downward steps. The authors had previously proven this result for likelihoods comprised of at least one extended likelihood, and found the best-fit halo function to be unique. This uniqueness, however, cannot be guaranteed in the more general analysis applied to arbitrary likelihoods. Thus we introduce a method for determining whether there exists a unique best-fit halo function, and provide a procedure for constructing either a pointwise confidence band, if the best-fit halo function is unique, or a degeneracy band, if it is not. Using measurements of modulation amplitudes, the aforementioned form of f_gal(u), which is a sum of Galactic streams, yields a periodic time-dependent halo function η̃^BF(vmin, t) which at any fixed time is a piecewise constant function of vmin with at most N downward steps. In this case, we explain how to construct pointwise confidence and degeneracy bands from the time-averaged halo function. Finally, we show that requiring an isotropic Galactic velocity distribution leads to a Galactic speed distribution F(u) that is once again a sum of delta functions, and produces a time-dependent η̃^BF(vmin, t) function (and a time-averaged η̃0^BF(vmin)) that is piecewise linear, differing significantly from best-fit halo functions obtained without the assumption of isotropy.

  12. A double hit model for the distribution of time to AIDS onset

    NASA Astrophysics Data System (ADS)

    Chillale, Nagaraja Rao

    2013-09-01

    Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection it is a random variable, and probably the longest such incubation period. The probability distribution of incubation time is the major determinant of the relation between the incidence of HIV infection and its manifestation as AIDS. It is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article (i) briefly reviews the work done, points out uncertainties in the estimation of AIDS onset time and stresses the need for its precise estimation, (ii) highlights some of the modelling features of the onset distribution, including the immune failure mechanism, and (iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers, and examines the applicability of a few standard probability models.

  13. Tests of nonuniversality of the stock return distributions in an emerging market

    NASA Astrophysics Data System (ADS)

    Mu, Guo-Hua; Zhou, Wei-Xing

    2010-12-01

    There is convincing evidence showing that the probability distributions of stock returns in mature markets exhibit power-law tails and both the positive and negative tails conform to the inverse cubic law. It supports the possibility that the tail exponents are universal at least for mature markets in the sense that they do not depend on stock market, industry sector, and market capitalization. We investigate the distributions of intraday returns at different time scales (Δt = 1, 5, 15, and 30 min) of all the A-share stocks traded in the Chinese stock market, which is the largest emerging market in the world. We find that the returns can be well fitted by the q-Gaussian distribution and the tails have power-law relaxations with the exponents increasing with Δt and being well outside the Lévy stable regime for individual stocks. We provide statistically significant evidence showing that, at small time scales Δt < 15 min, the exponents logarithmically decrease with the turnover rate and increase with the market capitalization. When Δt > 15 min, no conclusive evidence is found for a possible dependence of the tail exponent on the turnover rate or the market capitalization. Our findings indicate that the intraday return distributions at small time scales are not universal in emerging stock markets but might be universal at large time scales.

  14. Symmetric co-movement between Malaysia and Japan stock markets

    NASA Astrophysics Data System (ADS)

    Razak, Ruzanna Ab; Ismail, Noriszura

    2017-04-01

    The copula approach is a flexible tool known to capture linear, nonlinear, symmetric and asymmetric dependence between two or more random variables. It is often used as a co-movement measure between stock market returns. The information obtained from copulas, such as the level of association between financial markets during normal, bullish and bearish market phases, is useful for investment strategies and risk management. However, studies of the co-movement between the Malaysia and Japan markets are limited, especially studies using copulas. Hence, we aim to investigate the dependence structure between the Malaysia and Japan capital markets for the period spanning from 2000 to 2012. In this study, we show that the bivariate normal distribution is not suitable for representing the dependence between the Malaysia and Japan markets. Instead, a Gaussian (normal) copula was found to be a good fit for the dependence. From our findings, it can be concluded that simple distribution fitting, such as the bivariate normal distribution, does not suit financial time series data, whose characteristics are often leptokurtic. The nature of the data is instead treated by ARMA-GARCH models with heavy-tailed distributions, and these can then be associated with copula functions. Regarding the dependence structure between the Malaysia and Japan markets, the findings suggest that both markets co-move concurrently during normal periods.
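
    As a minimal, hedged illustration of the kind of copula fit described above (not the authors' code or data), the sketch below estimates the correlation parameter of a Gaussian copula from two synthetic return series by transforming each margin to rank-based pseudo-observations and then to normal scores; the series and parameters are invented placeholders.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)

# Synthetic daily returns for two markets (stand-ins for the Malaysia and Japan series).
n = 2000
z = rng.multivariate_normal([0, 0], [[1.0, 0.45], [0.45, 1.0]], size=n)
returns_my = 0.010 * z[:, 0]
returns_jp = 0.012 * z[:, 1]

def to_normal_scores(x):
    """Rank-based pseudo-observations mapped through the standard normal quantile."""
    u = rankdata(x) / (len(x) + 1.0)      # pseudo-observations in (0, 1)
    return norm.ppf(u)

s1, s2 = to_normal_scores(returns_my), to_normal_scores(returns_jp)

# The Gaussian-copula dependence parameter is the correlation of the normal scores.
rho_hat = np.corrcoef(s1, s2)[0, 1]
print(f"estimated Gaussian copula correlation: {rho_hat:.3f}")
```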

  15. Newton's second law and the multiplication of distributions

    NASA Astrophysics Data System (ADS)

    Sarrico, C. O. R.; Paiva, A.

    2018-01-01

    Newton's second law is applied to study the motion of a particle subjected to a time dependent impulsive force containing a Dirac delta distribution. Within this setting, we prove that this problem can be rigorously solved neither by limit processes nor by using the theory of distributions (limited to the classical Schwartz products). However, using a distributional multiplication, not defined by a limit process, a rigorous solution emerges.

  16. A STUDY ON TEMPORAL DISTRIBUTION OF FREIGHT TRANSPORTATION IN CONSIDERATION OF DAILY WORK-LIFE CYCLE

    NASA Astrophysics Data System (ADS)

    Kitaoka, Daiki; Hara, Hidetaka; Oeda, Yoshinao; Sumi, Tomonori

    As advanced freight services are demanded, the time-related requirements for freight transportation become more and more significant. This study, focusing on the temporal distribution of freight transportation in response to travel time, develops a shipment departure time decision model for each item, aiming to quantitatively grasp the social requirements in the time domain. The model takes account of the daily work cycles of both shippers and carriers, along with the travel time. The proposed model has a structure similar to that derived in previous studies that account for the daily living cycles of individuals. The model properly reproduces the temporal distribution of shipment departure times, which changes depending on the length of the necessary lead time for each item.

  17. Ultrafast carrier thermalization and cooling dynamics in few-layer MoS2.

    PubMed

    Nie, Zhaogang; Long, Run; Sun, Linfeng; Huang, Chung-Che; Zhang, Jun; Xiong, Qihua; Hewak, Daniel W; Shen, Zexiang; Prezhdo, Oleg V; Loh, Zhi-Heng

    2014-10-28

    Femtosecond optical pump-probe spectroscopy with 10 fs visible pulses is employed to elucidate the ultrafast carrier dynamics of few-layer MoS2. A nonthermal carrier distribution is observed immediately following the photoexcitation of the A and B excitonic transitions by the ultrashort, broadband laser pulse. Carrier thermalization occurs within 20 fs and proceeds via both carrier-carrier and carrier-phonon scattering, as evidenced by the observed dependence of the thermalization time on the carrier density and the sample temperature. The n^(-0.37 ± 0.03) scaling of the thermalization time with carrier density n suggests that equilibration of the nonthermal carrier distribution occurs via non-Markovian quantum kinetics. Subsequent cooling of the hot Fermi-Dirac carrier distribution occurs on the ∼0.6 ps time scale via carrier-phonon scattering. Temperature- and fluence-dependence studies reveal the involvement of hot phonons in the carrier cooling process. Nonadiabatic ab initio molecular dynamics simulations, which predict carrier-carrier and carrier-phonon scattering time scales of 40 fs and 0.5 ps, respectively, lend support to the assignment of the observed carrier dynamics.

  18. Charge state distributions of oxygen and carbon in the energy range 1 to 300 keV/e observed with AMPTE/CCE in the magnetosphere

    NASA Technical Reports Server (NTRS)

    Kremser, G.; Stuedemann, W.; Wilken, B.; Gloeckler, G.; Hamilton, D. C.

    1985-01-01

    Observations of charge state distributions of oxygen and carbon are presented that were obtained with the charge-energy-mass spectrometer onboard the AMPTE/CCE spacecraft. Data were selected for two different local time sectors (apogee at 1300 LT and 0300 LT, respectively), three L-ranges (4-6, 6-8, and greater than 8), and quiet to moderately disturbed days (Kp less than or equal to 4). The charge state distributions reveal the existence of all charge states of oxygen and carbon in the magnetosphere. The relative importance of the different charge states strongly depends on L and much less on local time. The observations confirm that the solar wind and the ionosphere contribute to the oxygen population, whereas carbon only originates from the solar wind. The L-dependence of the charge state distributions can be interpreted in terms of these different ion sources and of charge exchange and diffusion processes that largely influence the distribution of oxygen and carbon in the magnetosphere.

  19. Time-frequency analysis of backscattered signals from diffuse radar targets

    NASA Astrophysics Data System (ADS)

    Kenny, O. P.; Boashash, B.

    1993-06-01

    The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. The authors discuss time-frequency representation of the backscattered signal from a diffuse radar target. It is then shown that for point scatterers which are statistically dependent or for which the reflectivity coefficient has a nonzero mean value, reconstruction using time of flight positron emission tomography on time-frequency images is effective for estimating the scattering function of the target.

  20. The effects of plasma inhomogeneity on the nanoparticle coating in a low pressure plasma reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pourali, N.; Foroutan, G.

    2015-10-15

    A self-consistent model is used to study the surface coating of a collection of charged nanoparticles trapped in the sheath region of a low pressure plasma reactor. The model consists of a multi-fluid plasma sheath module, including nanoparticle dynamics, as well as surface deposition and particle heating modules. The simulation results show that the mean particle radius increases with time and the nanoparticle size distribution is broadened. The mean radius is a linear function of time, while the variance exhibits a quadratic dependence. The broadening in size distribution is attributed to the spatial inhomogeneity of the deposition rate, which in turn depends on the plasma inhomogeneity. The spatial inhomogeneity of the ions has a strong impact on the broadening of the size distribution, as the ions contribute both to the nanoparticle charging and to direct film deposition. The distribution width also increases with increasing pressure, gas temperature, and ambient temperature gradient.

  1. Time-dependent transport of energetic particles in magnetic turbulence: computer simulations versus analytical theory

    NASA Astrophysics Data System (ADS)

    Arendt, V.; Shalchi, A.

    2018-06-01

    We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
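
    By way of illustration only (not the authors' test-particle code), the sketch below computes a running diffusion coefficient d(t) = ⟨(Δx)²⟩/(2t) from a toy ensemble of particles; the Ornstein-Uhlenbeck velocity model and all parameter values are assumptions chosen to show the usual ballistic-to-diffusive transition.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy test-particle ensemble with exponentially correlated (OU) velocities.
n_particles, n_steps, dt, tau, sigma = 4000, 4000, 0.01, 0.5, 1.0
a = np.exp(-dt / tau)
b = sigma * np.sqrt(1.0 - a**2)

v = np.zeros(n_particles)
x = np.zeros(n_particles)
d_run = []

for step in range(1, n_steps + 1):
    v = a * v + b * rng.normal(size=n_particles)   # velocity update with correlation time tau
    x += v * dt                                    # displacement across the mean field
    t = step * dt
    d_run.append(np.mean(x**2) / (2.0 * t))        # running diffusion coefficient d(t)

print(f"late-time d(t) ≈ {d_run[-1]:.3f}  (OU theory sigma^2*tau = {sigma**2 * tau:.3f})")
```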

  2. Diffusive real-time dynamics of a particle with Berry curvature

    NASA Astrophysics Data System (ADS)

    Misaki, Kou; Miyashita, Seiji; Nagaosa, Naoto

    2018-02-01

    We study theoretically the influence of the Berry phase on the real-time dynamics of a single particle, focusing on the diffusive dynamics, i.e., the time dependence of the distribution function. Our model can be applied to the real-time dynamics of intraband relaxation and diffusion of optically excited excitons, trions, or particle-hole pairs. We found that the dynamics at the early stage is deeply influenced by the Berry curvature in real space (B), momentum space (Ω), and also the crossed space between these two (C). For example, it is found that Ω induces the rotation of the wave packet and causes the time dependence of the mean square displacement of the particle to be linear in time t at the initial stage; this is qualitatively different from the t³ dependence in the absence of the Berry curvature. It is also found that Ω and C modify the characteristic time scale of the thermal equilibration of the momentum distribution. Moreover, the dynamics under various combinations of B, Ω, and C shows singular behaviors such as the critical slowing down or speeding up of the momentum equilibration and reversals of the direction of rotation. The relevance of our model for time-resolved experiments in transition metal dichalcogenides is also discussed.

  3. Geometrical effects on the electron residence time in semiconductor nano-particles.

    PubMed

    Koochi, Hakimeh; Ebrahimi, Fatemeh

    2014-09-07

    We have used random walk (RW) numerical simulations to investigate the influence of geometry on the statistics of the electron residence time τ_r in a trap-limited diffusion process through semiconductor nano-particles. This is an important parameter in coarse-grained modeling of charge carrier transport in nano-structured semiconductor films. The traps have been distributed randomly on the surface (r² model) or through the whole particle (r³ model) with a specified density. The trap energies have been taken from an exponential distribution and the trap release time is assumed to be a stochastic variable. We have carried out RW simulations to study the effect of the coordination number, the spatial arrangement of the neighbors and the size of the nano-particles on the statistics of τ_r. It has been observed that by increasing the coordination number n, the average value of the electron residence time, τ̅_r, rapidly decreases to an asymptotic value. For a fixed coordination number n, the electron's mean residence time does not depend on the neighbors' spatial arrangement. In other words, τ̅_r is a porosity-dependent, local parameter which generally varies remarkably from site to site, unless we are dealing with highly ordered structures. We have also examined the effect of the nano-particle size d on the statistical behavior of τ̅_r. Our simulations indicate that for a volume distribution of traps, τ̅_r scales as d². For a surface distribution of traps τ̅_r increases almost linearly with d. This leads to the prediction of a linear dependence of the diffusion coefficient D on the particle size d in ordered structures or random structures above the critical concentration, which is in accordance with experimental observations.
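
    A minimal random-walk sketch in the spirit of the multiple-trapping picture described above (not the authors' simulation): a walker visits randomly chosen traps whose depths are exponentially distributed, and the residence time is accumulated from stochastic, thermally activated release times. The number of hops before the electron exits is held fixed for simplicity, kT is kept above the mean trap depth so the mean stays finite, and every parameter value is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def residence_time(n_traps=500, n_hops=200, E0=0.02, kT=0.025, nu0=1e12):
    """One realization of the electron residence time in a nano-particle.

    Trap depths E are drawn from an exponential distribution with mean E0 (eV);
    each visit adds a stochastic release time with mean exp(E/kT)/nu0 (seconds).
    """
    depths = rng.exponential(E0, n_traps)
    total = 0.0
    for _ in range(n_hops):                      # hops before exiting the particle
        E = depths[rng.integers(n_traps)]        # trap visited on this hop
        mean_release = np.exp(E / kT) / nu0
        total += rng.exponential(mean_release)   # stochastic release time
    return total

samples = np.array([residence_time() for _ in range(2000)])
print(f"mean residence time ≈ {samples.mean():.3e} s, "
      f"relative spread = {samples.std() / samples.mean():.2f}")
```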

  4. Fortran programs for the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap

    NASA Astrophysics Data System (ADS)

    Muruganandam, P.; Adhikari, S. K.

    2009-10-01

    Here we develop simple numerical algorithms for both stationary and non-stationary solutions of the time-dependent Gross-Pitaevskii (GP) equation describing the properties of Bose-Einstein condensates at ultra low temperatures. In particular, we consider algorithms involving real- and imaginary-time propagation based on a split-step Crank-Nicolson method. In a one-space-variable form of the GP equation we consider the one-dimensional, two-dimensional circularly-symmetric, and the three-dimensional spherically-symmetric harmonic-oscillator traps. In the two-space-variable form we consider the GP equation in two-dimensional anisotropic and three-dimensional axially-symmetric traps. The fully-anisotropic three-dimensional GP equation is also considered. Numerical results for the chemical potential and root-mean-square size of stationary states are reported using imaginary-time propagation programs for all the cases and compared with previously obtained results. Also presented are numerical results of non-stationary oscillation for different trap symmetries using real-time propagation programs. A set of convenient working codes developed in Fortran 77 are also provided for all these cases (twelve programs in all). In the case of two or three space variables, Fortran 90/95 versions provide some simplification over the Fortran 77 programs, and these programs are also included (six programs in all). Program summaryProgram title: (i) imagetime1d, (ii) imagetime2d, (iii) imagetime3d, (iv) imagetimecir, (v) imagetimesph, (vi) imagetimeaxial, (vii) realtime1d, (viii) realtime2d, (ix) realtime3d, (x) realtimecir, (xi) realtimesph, (xii) realtimeaxial Catalogue identifier: AEDU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 122 907 No. of bytes in distributed program, including test data, etc.: 609 662 Distribution format: tar.gz Programming language: FORTRAN 77 and Fortran 90/95 Computer: PC Operating system: Linux, Unix RAM: 1 GByte (i, iv, v), 2 GByte (ii, vi, vii, x, xi), 4 GByte (iii, viii, xii), 8 GByte (ix) Classification: 2.9, 4.3, 4.12 Nature of problem: These programs are designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in one-, two- or three-space dimensions with a harmonic, circularly-symmetric, spherically-symmetric, axially-symmetric or anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Solution method: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation, in either imaginary or real time, over small time steps. The method yields the solution of stationary and/or non-stationary problems. Additional comments: This package consists of 12 programs, see "Program title", above. FORTRAN77 versions are provided for each of the 12 and, in addition, Fortran 90/95 versions are included for ii, iii, vi, viii, ix, xii. For the particular purpose of each program please see the below. Running time: Minutes on a medium PC (i, iv, v, vii, x, xi), a few hours on a medium PC (ii, vi, viii, xii), days on a medium PC (iii, ix). 
Program summary (1)Title of program: imagtime1d.F Title of electronic file: imagtime1d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 1 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in one-space dimension with a harmonic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (2)Title of program: imagtimecir.F Title of electronic file: imagtimecir.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 1 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in two-space dimensions with a circularly-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (3)Title of program: imagtimesph.F Title of electronic file: imagtimesph.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 1 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with a spherically-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (4)Title of program: realtime1d.F Title of electronic file: realtime1d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. 
Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in one-space dimension with a harmonic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems. Program summary (5)Title of program: realtimecir.F Title of electronic file: realtimecir.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in two-space dimensions with a circularly-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems. Program summary (6)Title of program: realtimesph.F Title of electronic file: realtimesph.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with a spherically-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems. Program summary (7)Title of programs: imagtimeaxial.F and imagtimeaxial.f90 Title of electronic file: imagtimeaxial.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Few hours on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with an axially-symmetric trap. 
The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (8)Title of program: imagtime2d.F and imagtime2d.f90 Title of electronic file: imagtime2d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 2 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Few hours on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in two-space dimensions with an anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (9)Title of program: realtimeaxial.F and realtimeaxial.f90 Title of electronic file: realtimeaxial.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 4 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time Hours on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with an axially-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems. Program summary (10)Title of program: realtime2d.F and realtime2d.f90 Title of electronic file: realtime2d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 4 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Hours on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in two-space dimensions with an anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. 
The method yields the solution of stationary and non-stationary problems. Program summary (11)Title of program: imagtime3d.F and imagtime3d.f90 Title of electronic file: imagtime3d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 4 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Few days on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with an anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (12)Title of program: realtime3d.F and realtime3d.f90 Title of electronic file: realtime3d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum Ram Memory: 8 GByte Programming language used: Fortran 77 and Fortran 90 Typical running time: Days on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with an anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in real time over small time steps. The method yields the solution of stationary and non-stationary problems.
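
    To make the split-step Crank-Nicolson solution method concrete, here is a compact, hedged re-implementation of the same idea in Python rather than the distributed Fortran codes: an imaginary-time evolution of the 1D Gross-Pitaevskii equation in a harmonic trap, where the trap-plus-nonlinearity part is advanced by exact exponential half-steps and the kinetic part by a Crank-Nicolson tridiagonal solve. The grid size, nonlinearity g, and time step are illustrative choices, not the published defaults.

```python
import numpy as np
from scipy.linalg import solve_banded

# Grid and physical parameters (harmonic-oscillator units); illustrative values.
N, L, g, dt, n_iter = 512, 20.0, 10.0, 1e-3, 4000
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2

# Banded matrix for (I + dt/2 * T), with T = -(1/2) d^2/dx^2 (imaginary time).
r = dt / (4.0 * dx**2)
ab = np.zeros((3, N))
ab[0, 1:] = -r          # superdiagonal
ab[1, :] = 1.0 + 2.0 * r  # diagonal
ab[2, :-1] = -r         # subdiagonal

def cn_kinetic_step(psi):
    """One Crank-Nicolson step for the kinetic term: solve (I + dt/2 T) psi' = (I - dt/2 T) psi."""
    rhs = psi.copy()
    rhs[1:-1] += r * (psi[2:] - 2.0 * psi[1:-1] + psi[:-2])
    return solve_banded((1, 1), ab, rhs)

psi = np.exp(-x**2 / 2.0)                        # Gaussian initial guess
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

for _ in range(n_iter):
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi)**2))   # half-step: trap + nonlinearity
    psi = cn_kinetic_step(psi)                            # full kinetic step
    psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi)**2))   # second half-step
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)           # renormalize (imaginary time)

mu = np.sum(0.5 * np.gradient(psi, dx)**2 + (V + g * np.abs(psi)**2) * np.abs(psi)**2) * dx
rms = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
print(f"chemical potential ≈ {mu:.4f}, rms size ≈ {rms:.4f}")
```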

  5. Analytical thermal model for end-pumped solid-state lasers

    NASA Astrophysics Data System (ADS)

    Cini, L.; Mackenzie, J. I.

    2017-12-01

    Because end-pumped "bulk" solid-state lasers are fundamentally power-limited by thermal effects, their design challenge depends upon knowledge of the temperature gradients within the gain medium. We have developed analytical expressions that can be used to model the temperature distribution and thermal-lens power in end-pumped solid-state lasers. Enabled by the inclusion of a temperature-dependent thermal conductivity, applicable from cryogenic to elevated temperatures, typical pumping distributions are explored and the results compared with accepted models. Key insights are gained through these analytical expressions, such as the dependence of the peak temperature rise on the boundary thermal conductance to the heat sink. Our generalized expressions provide simple and time-efficient tools for parametric optimization of the heat distribution in the gain medium based upon the material and pumping constraints.
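
    As a rough numerical counterpart to this kind of model (a sketch under stated assumptions, not the authors' analytical expressions), the snippet below integrates the steady-state radial heat-flow equation for an end-pumped rod with a top-hat pump, a 1/T-type thermal conductivity, and a surface temperature set by an assumed boundary conductance to the heat sink; all values and the uniform axial heat load are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the paper): YAG-like rod with a top-hat pump.
R, L_rod, a = 2e-3, 10e-3, 0.5e-3       # rod radius, length, pump radius (m)
P_heat = 10.0                            # total heat load (W), deposited uniformly in z
h = 1e4                                  # boundary conductance to heat sink (W m^-2 K^-1)
T_coolant, T_ref, k_ref = 300.0, 300.0, 10.0   # K, K, W m^-1 K^-1

def k(T):
    """Assumed temperature-dependent conductivity, k ~ 1/T (a common crystal approximation)."""
    return k_ref * T_ref / T

def dTdr(r, T):
    # Heat generated per unit length inside radius r for a top-hat pump of radius a.
    q_inside = (P_heat / L_rod) * min(r**2, a**2) / a**2
    return [-q_inside / (2.0 * np.pi * r * k(T[0]))]

# Surface temperature fixed by the heat-sink boundary conductance.
T_edge = T_coolant + (P_heat / L_rod) / (h * 2.0 * np.pi * R)

# Integrate inward from the rod surface towards the axis.
r_eval = np.linspace(R, 1e-6, 400)
sol = solve_ivp(dTdr, (R, 1e-6), [T_edge], t_eval=r_eval, max_step=1e-5)

print(f"edge temperature  : {T_edge:.1f} K")
print(f"peak (axis) temp. : {sol.y[0, -1]:.1f} K")
```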

  6. Photoassociation dynamics driven by a modulated two-color laser field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Wei; Zhao Zeyu; Xie Ting

    2011-11-15

    Photoassociation (PA) dynamics of ultracold cesium atoms steered by a modulated two-color laser field E(t) = E₀ f(t) cos(2πt/T_p − φ) cos(ω_L t) is investigated theoretically by numerically solving the time-dependent Schrödinger equation. The PA dynamics is sensitive to the phase of the envelope (POE) φ and the period of the envelope T_p, which indicates that it can be controlled by varying the POE φ and the period T_p. Moreover, we introduce the time- and frequency-resolved spectrum to illustrate how the POE φ and the period T_p influence the intensity distribution of the modulated laser pulse and hence change the time-dependent population distribution of photoassociated molecules. When the Gaussian envelope contains only a few oscillations, the PA efficiency also depends on the POE φ. The modulated two-color laser field is available in current experiments based on laser mode-locking technology.

  7. Determination of efficiencies, loss mechanisms, and performance degradation factors in chopper controlled dc vehicle motors. Section 2: The time dependent finite element modeling of the electromagnetic field in electrical machines: Methods and applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hamilton, H. B.; Strangas, E.

    1980-01-01

    The time dependent solution of the magnetic field is introduced as a method for accounting for the variation, in time, of the machine parameters in predicting and analyzing the performance of electrical machines. The time-dependent finite element method was used in combination with a likewise time-dependent construction of a grid for the air-gap region. The Maxwell stress tensor was used to calculate the air-gap torque from the magnetic vector potential distribution. Incremental inductances were defined and calculated as functions of time, depending on eddy currents and saturation. The currents in all the machine circuits were calculated in the time domain based on these inductances, which were continuously updated. The method was applied to a chopper controlled DC series motor used for electric vehicle drive, and to a salient pole synchronous motor with damper bars. Simulation results were compared to experimentally obtained ones.

  8. Log-normal distribution of the trace element data results from a mixture of stochastic input and deterministic internal dynamics.

    PubMed

    Usuda, Kan; Kono, Koichi; Dote, Tomotaro; Shimizu, Hiroyasu; Tominaga, Mika; Koizumi, Chisato; Nakase, Emiko; Toshina, Yumi; Iwai, Junko; Kawasaki, Takashi; Akashi, Mitsuya

    2002-04-01

    In a previous article, we showed a log-normal distribution of boron and lithium in human urine. This type of distribution is common in both biological and nonbiological applications. It can be observed when the effects of many independent variables are combined, each of which may have any underlying distribution. Although elemental excretion depends on many variables, the one-compartment open model following a first-order process can be used to explain the elimination of elements. The rate of excretion is proportional to the amount of any given element present; that is, the same percentage of an existing element is eliminated per unit time, and the element concentration is represented by a deterministic negative power function of time in the elimination time-course. Sampling is of a stochastic nature, so the dataset of time variables in the elimination phase at which the samples were obtained is expected to show a Normal distribution. The time variable appears as an exponent of the power function, so a concentration histogram is that of an exponential transformation of Normally distributed time. This is the reason why the element concentration shows a log-normal distribution. The distribution is determined not by the element concentration itself, but by the time variable that defines the pharmacokinetic equation.
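
    The mechanism described above is easy to reproduce numerically. The sketch below (an illustration with arbitrary parameter values, not data from the study) draws Normally distributed sampling times, applies first-order elimination, and checks that the logarithm of the resulting concentrations is itself Normally distributed, i.e. that the concentrations are log-normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# First-order elimination: C(t) = C0 * exp(-k_el * t), sampled at Normally distributed times.
C0, k_el = 100.0, 0.2                              # arbitrary units, 1/h
t = rng.normal(loc=8.0, scale=2.0, size=5000)      # sampling times (h), Normal by assumption
conc = C0 * np.exp(-k_el * t)

# log(C) = log(C0) - k_el * t is linear in a Normal variable, hence Normal;
# the concentrations themselves are therefore log-normally distributed.
_, p_log = stats.normaltest(np.log(conc))
_, p_raw = stats.normaltest(conc)
print(f"normality of log(C): p = {p_log:.3f} (expected large)")
print(f"normality of C     : p = {p_raw:.3g} (expected very small)")
```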

  9. Dynamical Aspects of Quasifission Process in Heavy-Ion Reactions

    NASA Astrophysics Data System (ADS)

    Knyazheva, G. N.; Itkis, I. M.; Kozulin, E. M.

    2015-06-01

    We present a study of the mass-energy distributions of binary fragments obtained in the reactions of 36S, 48Ca, 58Fe and 64Ni ions with 232Th, 238U, 244Pu and 248Cm targets at energies below and above the Coulomb barrier. For all the reactions the main component of the distributions corresponds to an asymmetric mass division typical of the asymmetric quasifission process. To describe the quasifission mass distribution, a simple method has been proposed. This method is based on the driving potential of the system and a time-dependent mass drift. This procedure allows the QF time scale to be estimated from the measured mass distributions. It has been found that the QF time decreases exponentially as the reaction Coulomb factor Z1Z2 increases.

  10. Time distributions of solar energetic particle events: Are SEPEs really random?

    NASA Astrophysics Data System (ADS)

    Jiggens, P. T. A.; Gabriel, S. B.

    2009-10-01

    Solar energetic particle events (SEPEs) can exhibit flux increases of several orders of magnitude over background levels and have always been considered to be random in nature in statistical models, with no dependence of any one event on the occurrence of previous events. We examine whether this assumption of randomness in time is correct. Engineering modeling of SEPEs is important to enable reliable and efficient design of both Earth-orbiting and interplanetary spacecraft and future manned missions to Mars and the Moon. All existing engineering models assume that the frequency of SEPEs follows a Poisson process. We present an analysis of the event waiting times using alternative distributions described by Lévy and time-dependent Poisson processes and compare these with the usual Poisson distribution. The results show significant deviation from a Poisson process and indicate that the underlying physical processes might be more closely related to a Lévy-type process, suggesting that there is some inherent “memory” in the system. Inherent Poisson assumptions of stationarity and event independence are investigated, and it appears that they do not hold and can be dependent upon the event definition used. SEPEs appear to have some memory, indicating that events are not completely random, with activity levels varying even during solar active periods and characterized by clusters of events. This could have significant ramifications for engineering models of the SEP environment, and it is recommended that current statistical engineering models of the SEP environment should be modified to incorporate long-term event dependency and short-term system memory.
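
    As an illustration of the kind of waiting-time test discussed here (not the authors' analysis or event catalog), the sketch below builds a toy clustered event list, computes the waiting times between events, and compares them against the exponential distribution implied by a homogeneous Poisson process using a Kolmogorov-Smirnov test; the catalog construction and its parameters are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Toy "clustered" catalog: main events followed by short bursts of related events.
mains = np.cumsum(rng.exponential(30.0, size=200))                      # event times (days)
bursts = [m + np.cumsum(rng.exponential(1.0, size=rng.integers(0, 4))) for m in mains]
events = np.sort(np.concatenate([mains] + bursts))

waits = np.diff(events)

# Under a homogeneous Poisson process, waiting times are exponential with the observed mean.
ks_stat, p_value = stats.kstest(waits, "expon", args=(0.0, waits.mean()))
print(f"mean waiting time = {waits.mean():.2f} d, KS p-value vs. exponential = {p_value:.3g}")
```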

  11. [Three-dimensional stress analysis of periodontal ligament of mandible incisors fixed bridge abutments under dynamic loads by finite element method].

    PubMed

    Ma, Da; Tang, Liang; Pan, Yan-Huan

    2007-12-01

    A three-dimensional finite element method was used to analyze the stress and strain distributions of the periodontal ligament of abutments under dynamic loads. Finite element analysis was performed on the model under dynamic loads in vertical and oblique directions. The stress and strain distributions and stress-time curves were analyzed to study the biomechanical behavior of the periodontal ligament of the abutments. The stress and strain distributions of the periodontal ligament under dynamic load were the same as those under static load, but the maximum stress and strain decreased markedly; the rate of change was between 60% and 75%. The periodontal ligament had time-dependent mechanical behavior. Some level of residual stress was left in the periodontal ligament after one mastication period. The stress-free time under oblique load was shorter than that under vertical load. The maximum stress and strain decrease markedly under dynamic loads. The periodontal ligament has time-dependent mechanical behavior during one mastication. There is some level of residual stress left after one mastication period. The level of residual stress is related to the magnitude and the direction of the loads. The direction of the applied loads is one important factor that affects the stress distribution and the accumulation and release of stress in the abutment periodontal ligament.

  12. Skewness in large-scale structure and non-Gaussian initial conditions

    NASA Technical Reports Server (NTRS)

    Fry, J. N.; Scherrer, Robert J.

    1994-01-01

    We compute the skewness of the galaxy distribution arising from the nonlinear evolution of arbitrary non-Gaussian initial conditions to second order in perturbation theory, including the effects of nonlinear biasing. The result contains a term identical to that for a Gaussian initial distribution plus terms which depend on the skewness and kurtosis of the initial conditions. The results are model dependent; we present calculations for several toy models. At late times, the leading contribution from the initial skewness decays away relative to the other terms and becomes increasingly unimportant, but the contribution from initial kurtosis, previously overlooked, has the same time dependence as the Gaussian terms. Observations of a linear dependence of the normalized skewness on the rms density fluctuation therefore do not necessarily rule out initially non-Gaussian models. We also show that with non-Gaussian initial conditions the first correction to linear theory for the mean square density fluctuation is larger than for Gaussian models.

  13. Time-dependent landslide probability mapping

    USGS Publications Warehouse

    Campbell, Russell H.; Bernknopf, Richard L.; ,

    1993-01-01

    Case studies where the time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.

  14. On the dynamic dependence and asymmetric co-movement between the US and Central and Eastern European transition markets

    NASA Astrophysics Data System (ADS)

    Boubaker, Heni; Raza, Syed Ali

    2016-10-01

    In this paper, we attempt to evaluate the time-varying and asymmetric co-movement of CEE equity markets with the US stock markets around the subprime crisis and the resulting global financial crisis. The econometric approach adopted is based on recent developments in time-varying copulas. For that, we propose a new class of time-varying copulas that allows for long memory behavior in both the marginal and joint distributions. Our empirical approach relies on the flexibility and usefulness of bivariate copulas, which allow us not only to model the dynamic co-movement through time but also to account for any extreme interaction, nonlinearity and asymmetry in the co-movement patterns. The time-varying dependence structure can also be modeled conditionally on the economic policy uncertainty index of the crisis country. Empirical results show strong evidence of co-movement between the US and CEE equity markets and find that the co-movement exhibits large time variations and asymmetry in the tails of the return distributions.

  15. Optimal distribution of integration time for intensity measurements in Stokes polarimetry.

    PubMed

    Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng

    2015-10-19

    We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time among the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution for the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, it is shown that the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetric system.
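
    The flavor of such a constrained optimization can be sketched numerically. Purely for illustration (the paper's noise model and Stokes reconstruction matrix are not reproduced here), assume each intensity measurement is shot-noise limited with variance proportional to I_i/t_i; the snippet then allocates a fixed total integration time across the four measurements and compares the result with a uniform split and with the sqrt-rule implied by the Lagrange condition.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed mean intensities at the four analyzer settings and a fixed total time.
I = np.array([1.0, 0.6, 0.3, 0.9])
T_total = 4.0

def total_variance(t):
    """Sum of per-measurement variances under an assumed shot-noise model Var_i = I_i / t_i."""
    return np.sum(I / t)

# Minimize subject to sum(t) = T_total with t_i > 0 (SLSQP handles the equality constraint).
res = minimize(
    total_variance,
    x0=np.full(4, T_total / 4),
    constraints=[{"type": "eq", "fun": lambda t: t.sum() - T_total}],
    bounds=[(1e-6, None)] * 4,
)

uniform = total_variance(np.full(4, T_total / 4))
print("optimal split :", np.round(res.x, 3))
print(f"variance: optimal = {total_variance(res.x):.3f}, uniform = {uniform:.3f}")
# For this noise model the Lagrange condition gives t_i proportional to sqrt(I_i).
print("sqrt(I) rule  :", np.round(T_total * np.sqrt(I) / np.sqrt(I).sum(), 3))
```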

  16. Optimal distribution of integration time for intensity measurements in degree of linear polarization polarimetry.

    PubMed

    Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie

    2016-04-04

    We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate the DOLP. We show that if the total integration time of the intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain the closed-form solution for the optimal distribution of integration time in an approximate way by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.

  17. Precipitation of energetic neutral atoms and induced non-thermal escape fluxes from the Martian atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewkow, N. R.; Kharchenko, V.

    2014-08-01

    The precipitation of energetic neutral atoms, produced through charge exchange collisions between solar wind ions and thermal atmospheric gases, is investigated for the Martian atmosphere. Connections between parameters of precipitating fast ions and resulting escape fluxes, altitude-dependent energy distributions of fast atoms and their coefficients of reflection from the Mars atmosphere, are established using accurate cross sections in Monte Carlo (MC) simulations. Distributions of secondary hot (SH) atoms and molecules, induced by precipitating particles, have been obtained and applied for computations of the non-thermal escape fluxes. A new collisional database on accurate energy-angular-dependent cross sections, required for description of the energy-momentum transfer in collisions of precipitating particles and production of non-thermal atmospheric atoms and molecules, is reported with analytic fitting equations. Three-dimensional MC simulations with accurate energy-angular-dependent cross sections have been carried out to track large ensembles of energetic atoms in a time-dependent manner as they propagate into the Martian atmosphere and transfer their energy to the ambient atoms and molecules. Results of the MC simulations on the energy-deposition altitude profiles, reflection coefficients, and time-dependent atmospheric heating, obtained for the isotropic hard sphere and anisotropic quantum cross sections, are compared. Atmospheric heating rates, thermalization depths, altitude profiles of production rates, energy distributions of SH atoms and molecules, and induced escape fluxes have been determined.

  18. Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjorn; Steinsland, Ingelin

    2014-05-01

    This study introduces a methodology for the construction of probabilistic inflow forecasts for multiple catchments and lead times, and investigates criteria for the evaluation of multivariate forecasts. A post-processing approach is used, and a Gaussian model is applied for transformed variables. The post-processing model has two main components: the mean model and the dependency model. The mean model is used to estimate the marginal distributions of forecasted inflow for each catchment and lead time, whereas the dependency model is used to estimate the full multivariate distribution of forecasts, i.e. the covariances between catchments and lead times. In operational situations, it is a straightforward task to use the models to sample inflow ensembles which inherit the dependencies between catchments and lead times. The methodology was tested and demonstrated in the river systems linked to the Ulla-Førre hydropower complex in southern Norway, where simultaneous probabilistic forecasts for five catchments and ten lead times were constructed. The methodology exhibits sufficient flexibility to utilize deterministic flow forecasts from a numerical hydrological model as well as statistical forecasts such as persistent forecasts and sliding window climatology forecasts. It also deals with variation in the relative weights of these forecasts with both catchment and lead time. When evaluating predictive performance in original space using cross validation, the case study found that it is important to include the persistent forecast for the initial lead times and the hydrological forecast for medium-term lead times. Sliding window climatology forecasts become more important for the latest lead times. Furthermore, operationally important features in this case study, such as heteroscedasticity, lead-time-varying between-lead-time dependency and lead-time-varying between-catchment dependency, are captured. Two criteria were used for evaluating the added value of the dependency model. The first was the energy score (ES), a multi-dimensional generalization of the continuous ranked probability score (CRPS). ES was calculated for all lead times and catchments together, for each catchment across all lead times, and for each lead time across all catchments. The second criterion was to use the CRPS for forecasted inflows accumulated over several lead times and catchments. The results showed that ES was not very sensitive to a correct covariance structure, whereas the CRPS for accumulated flows was more suitable for evaluating the dependency model. This indicates that it is more appropriate to evaluate relevant univariate variables that depend on the dependency structure than to evaluate the multivariate forecast directly.
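
    A minimal sketch of the sampling step described above, under the assumption of a Gaussian post-processing model in a log-transformed space (the dimensions, mean values, and covariance structure below are invented placeholders, not the study's fitted model): draw inflow ensembles that inherit between-catchment and between-lead-time dependencies from a joint covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(6)

n_catchments, n_lead_times, n_members = 5, 10, 100
dim = n_catchments * n_lead_times

# Placeholder marginal means (e.g. from deterministic/statistical forecasts, log space).
mu = rng.normal(2.0, 0.3, dim)

# Placeholder dependency model: correlation decays with lead-time lag, weaker across catchments.
idx = np.arange(dim)
catch, lead = idx // n_lead_times, idx % n_lead_times
corr = 0.9 ** np.abs(lead[:, None] - lead[None, :]) * np.where(
    catch[:, None] == catch[None, :], 1.0, 0.6
)
sigma = 0.25
cov = sigma**2 * corr

# Sample joint (catchment x lead-time) ensembles and back-transform from log space.
samples = rng.multivariate_normal(mu, cov, size=n_members)
inflow_ens = np.exp(samples).reshape(n_members, n_catchments, n_lead_times)

print("ensemble shape:", inflow_ens.shape)          # (members, catchments, lead times)
print("lead-time 1 vs 2 correlation (catchment 0):",
      np.corrcoef(samples[:, 0], samples[:, 1])[0, 1].round(2))
```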

  19. Bayesian explorations of fault slip evolution over the earthquake cycle

    NASA Astrophysics Data System (ADS)

    Duputel, Z.; Jolivet, R.; Benoit, A.; Gombert, B.

    2017-12-01

    The ever-increasing amount of geophysical data continuously opens new perspectives on fundamental aspects of the seismogenic behavior of active faults. In this context, the recent fleet of SAR satellites including Sentinel-1 and COSMO-SkyMED permits the use of InSAR for time-dependent slip modeling with unprecedented resolution in time and space. However, existing time-dependent slip models rely on spatial smoothing regularization schemes, which can produce unrealistically smooth slip distributions. In addition, these models usually do not include uncertainty estimates thereby reducing the utility of such estimates. Here, we develop an entirely new approach to derive probabilistic time-dependent slip models. This Markov-Chain Monte Carlo method involves a series of transitional steps to predict and update posterior Probability Density Functions (PDFs) of slip as a function of time. We assess the viability of our approach using various slow-slip event scenarios. Using a dense set of SAR images, we also use this method to quantify the spatial distribution and temporal evolution of slip along a creeping segment of the North Anatolian Fault. This allows us to track a shallow aseismic slip transient lasting for about a month with a maximum slip of about 2 cm.

  20. Measures of dependence for multivariate Lévy distributions

    NASA Astrophysics Data System (ADS)

    Boland, J.; Hurd, T. R.; Pivato, M.; Seco, L.

    2001-02-01

    Recent statistical analysis of a number of financial databases is summarized. Increasing agreement is found that logarithmic equity returns show a certain type of asymptotic behavior of the largest events, namely that the probability density functions have power law tails with an exponent α≈3.0. This behavior does not vary much over different stock exchanges or over time, despite large variations in trading environments. The present paper proposes a class of multivariate distributions which generalizes the observed qualities of univariate time series. A new consequence of the proposed class is the "spectral measure" which completely characterizes the multivariate dependences of the extreme tails of the distribution. This measure on the unit sphere in M-dimensions, in principle completely general, can be determined empirically by looking at extreme events. If it can be observed and determined, it will prove to be of importance for scenario generation in portfolio risk management.
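
    A minimal sketch of how such a tail exponent is commonly estimated from data is given below, using the Hill estimator on a synthetic heavy-tailed return series; the sample size, cutoff k, and Student-t surrogate data are illustrative assumptions, not part of the paper.

      import numpy as np

      def hill_alpha(returns, k=200):
          """Hill estimate of the tail exponent from the k largest absolute returns."""
          x = np.sort(np.abs(np.asarray(returns)))[::-1]      # descending order statistics
          return k / np.sum(np.log(x[:k] / x[k]))

      rng = np.random.default_rng(1)
      sample = rng.standard_t(df=3, size=100_000)             # Student-t(3) has alpha = 3 tails
      print("estimated tail exponent alpha:", hill_alpha(sample))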

  1. Spatial distribution on high-order-harmonic generation of an H2+ molecule in intense laser fields

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Ge, Xin-Lei; Wang, Tian; Xu, Tong-Tong; Guo, Jing; Liu, Xue-Shen

    2015-07-01

    High-order-harmonic generation (HHG) for the H2+ molecule in a 3-fs, 800-nm few-cycle Gaussian laser pulse combined with a static field is investigated by solving the one-dimensional electronic and one-dimensional nuclear time-dependent Schrödinger equation within the non-Born-Oppenheimer approximation. The spatial distribution in HHG is demonstrated and the results present the recombination of the electron with each of the two nuclei. The spatial distribution of the HHG spectra shows that there is little possibility of recombination of the electron with the nuclei around the origin z = 0 a.u. and the equilibrium internuclear positions z = ±1.3 a.u. This characteristic is independent of the laser parameters and is attributed solely to the molecular structure. Furthermore, we investigate the time-dependent electron-nuclear wave packet and ionization probability to further explain the underlying physical mechanism.

  2. Frequency distributions and correlations of solar X-ray flare parameters

    NASA Technical Reports Server (NTRS)

    Crosby, Norma B.; Aschwanden, Markus J.; Dennis, Brian R.

    1993-01-01

    Frequency distributions of flare parameters are determined from over 12,000 solar flares. The flare duration, the peak counting rate, the peak hard X-ray flux, the total energy in electrons, and the peak energy flux in electrons are among the parameters studied. Linear regression fits, as well as the slopes of the frequency distributions, are used to determine the correlations between these parameters. The relationship between the variations of the frequency distributions and the solar activity cycle is also investigated. Theoretical models for the frequency distribution of flare parameters are dependent on the probability of flaring and the temporal evolution of the flare energy build-up. The results of this study are consistent with stochastic flaring and exponential energy build-up. The average build-up time constant is found to be 0.5 times the mean time between flares.
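
    As a hedged illustration of the slope-fitting step (using synthetic data, not the flare catalog analysed above), the sketch below builds a differential frequency distribution of "peak fluxes" and fits its power-law index by linear regression in log-log space.

      import numpy as np

      rng = np.random.default_rng(2)
      # synthetic peak fluxes drawn from a Pareto law with differential index -1.8
      peak_flux = (1.0 - rng.random(12_000)) ** (-1.0 / 0.8)

      counts, edges = np.histogram(peak_flux, bins=np.logspace(0, 3, 40))
      density = counts / np.diff(edges)                 # differential distribution dN/dS
      centers = np.sqrt(edges[:-1] * edges[1:])         # geometric bin centres
      ok = density > 0
      slope, _ = np.polyfit(np.log10(centers[ok]), np.log10(density[ok]), 1)
      print("fitted power-law index (expected near -1.8):", slope)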

  3. Chapman Enskog-maximum entropy method on time-dependent neutron transport equation

    NASA Astrophysics Data System (ADS)

    Abdou, M. A.

    2006-09-01

    The time-dependent neutron transport equation in semi and infinite medium with linear anisotropic and Rayleigh scattering is proposed. The problem is solved by means of the flux-limited, Chapman Enskog-maximum entropy for obtaining the solution of the time-dependent neutron transport. The solution gives the neutron distribution density function which is used to compute numerically the radiant energy density E(x,t), net flux F(x,t) and reflectivity Rf. The behaviour of the approximate flux-limited maximum entropy neutron density function are compared with those found by other theories. Numerical calculations for the radiant energy, net flux and reflectivity of the proposed medium are calculated at different time and space.

  4. Gamma time-dependency in Blaxter's compartmental model.

    NASA Technical Reports Server (NTRS)

    Matis, J. H.

    1972-01-01

    A new two-compartment model for the passage of particles through the gastro-intestinal tract of ruminants is proposed. In this model, a gamma distribution of lifetimes is introduced in the first compartment; thereby, passage from that compartment becomes time-dependent. This modification is strongly suggested by the physical alteration which certain substances, e.g. hay particles, undergo in the digestive process. The proposed model is applied to experimental data.
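
    A minimal Monte Carlo sketch of such a model is given below: residence time in the first compartment is gamma distributed (so the exit rate is time-dependent) while the second compartment keeps an exponential lifetime. The shape, scale and rate values are illustrative assumptions, not the parameters fitted in the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000
      t1 = rng.gamma(shape=2.0, scale=5.0, size=n)    # gamma lifetimes in compartment 1 (h)
      t2 = rng.exponential(scale=12.0, size=n)        # exponential lifetimes in compartment 2 (h)
      total = t1 + t2                                 # total passage time through the tract

      print("mean passage time (h):", total.mean())
      print("5th-95th percentile excretion window (h):", np.percentile(total, [5, 95]))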

  5. Turbidity as a probe of tubulin polymerization kinetics: a theoretical and experimental re-examination.

    PubMed

    Hall, Damien; Minton, Allen P

    2005-10-15

    We report here an examination of the validity of the experimental practice of using solution turbidity to study the polymerization kinetics of microtubule formation. The investigative approach proceeds via numerical solution of model rate equations to yield the time dependence of each microtubule species, followed by the calculation of the time- and wavelength-dependent turbidity generated by the calculated distribution of rod lengths. The wavelength dependence of the turbidity along the time course is analyzed to search for generalized kinetic regimes that satisfy a constant proportionality relationship between the observed turbidity and the weight concentration of polymerized tubulin. An empirical analysis, which permits valid interpretation of turbidity data for distributions of microtubules that are not long relative to the wavelength of incident light, is proposed. The basic correctness of the simulation work is shown by the analysis of the experimental time dependence of the turbidity wavelength exponent for microtubule formation in taxol-supplemented 0.1 M Pipes buffer (1 mM GTP, 1 mM EGTA, 1 mM MgSO4, pH 6.4). We believe that the general findings and principles outlined here are applicable to studies of other fibril-forming systems that use turbidity as a marker of polymerization progress.

  6. A New Technique for Measuring Concentration Dependence of Self and Collective Diffusivity by using a Single Sample

    NASA Astrophysics Data System (ADS)

    Sirorattanakul, Krittanon; Shen, Chong; Ou-Yang, Daniel

    Diffusivity governs the dynamics of interacting particles suspended in a solvent. At high particle concentration, the interactions between particles become non-negligible, causing the self and collective diffusivities to diverge from each other and to become concentration-dependent. Conventional methods for measuring this dependency, such as forced Rayleigh scattering, fluorescence correlation spectroscopy (FCS), and dynamic light scattering (DLS), require the preparation of multiple samples. We present a new technique to measure this dependency using only a single sample. Dielectrophoresis (DEP) is used to create a concentration gradient in the solution. Across this concentration distribution, we use FCS to measure the concentration-dependent self diffusivity. Then, we switch off DEP to allow the particles to diffuse back to equilibrium. We obtain the time series of the concentration distribution from fluorescence microscopy and use them to determine the concentration-dependent collective diffusivity. We compare the experimental results with computer simulations to verify the validity of this technique. Time and spatial resolution limits of FCS and imaging are also analyzed to estimate the limitations of the proposed technique. NSF DMR-0923299, Lehigh College of Arts and Sciences Undergraduate Research Grant, Lehigh Department of Physics, Emulsion Polymers Institute.

  7. Laminar microvascular transit time distribution in the mouse somatosensory cortex revealed by Dynamic Contrast Optical Coherence Tomography

    PubMed Central

    Merkle, Conrad W.; Srinivasan, Vivek J.

    2015-01-01

    The transit time distribution of blood through the cerebral microvasculature both constrains oxygen delivery and governs the kinetics of neuroimaging signals such as blood-oxygen-level-dependent functional Magnetic Resonance Imaging (BOLD fMRI). However, in spite of its importance, capillary transit time distribution has been challenging to quantify comprehensively and efficiently at the microscopic level. Here, we introduce a method, called Dynamic Contrast Optical Coherence Tomography (DyC-OCT), based on dynamic cross-sectional OCT imaging of an intravascular tracer as it passes through the field-of-view. Quantitative transit time metrics are derived from temporal analysis of the dynamic scattering signal, closely related to tracer concentration. Since DyC-OCT does not require calibration of the optical focus, quantitative accuracy is achieved even deep in highly scattering brain tissue where the focal spot degrades. After direct validation of DyC-OCT against dilution curves measured using a fluorescent plasma label in surface pial vessels, we used DyC-OCT to investigate the transit time distribution in microvasculature across the entire depth of the mouse somatosensory cortex. Laminar trends were identified, with earlier transit times and less heterogeneity in the middle cortical layers. The early transit times in the middle cortical layers may explain, at least in part, the early BOLD fMRI onset times observed in these layers. The layer-dependencies in heterogeneity may help explain how a single vascular supply manages to deliver oxygen to individual cortical layers with diverse metabolic needs. PMID:26477654

  8. Laminar microvascular transit time distribution in the mouse somatosensory cortex revealed by Dynamic Contrast Optical Coherence Tomography.

    PubMed

    Merkle, Conrad W; Srinivasan, Vivek J

    2016-01-15

    The transit time distribution of blood through the cerebral microvasculature both constrains oxygen delivery and governs the kinetics of neuroimaging signals such as blood-oxygen-level-dependent functional Magnetic Resonance Imaging (BOLD fMRI). However, in spite of its importance, capillary transit time distribution has been challenging to quantify comprehensively and efficiently at the microscopic level. Here, we introduce a method, called Dynamic Contrast Optical Coherence Tomography (DyC-OCT), based on dynamic cross-sectional OCT imaging of an intravascular tracer as it passes through the field-of-view. Quantitative transit time metrics are derived from temporal analysis of the dynamic scattering signal, closely related to tracer concentration. Since DyC-OCT does not require calibration of the optical focus, quantitative accuracy is achieved even deep in highly scattering brain tissue where the focal spot degrades. After direct validation of DyC-OCT against dilution curves measured using a fluorescent plasma label in surface pial vessels, we used DyC-OCT to investigate the transit time distribution in microvasculature across the entire depth of the mouse somatosensory cortex. Laminar trends were identified, with earlier transit times and less heterogeneity in the middle cortical layers. The early transit times in the middle cortical layers may explain, at least in part, the early BOLD fMRI onset times observed in these layers. The layer-dependencies in heterogeneity may help explain how a single vascular supply manages to deliver oxygen to individual cortical layers with diverse metabolic needs. Copyright © 2015 Elsevier Inc. All rights reserved.
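
    The sketch below illustrates, with a synthetic gamma-variate bolus rather than DyC-OCT data, how transit-time metrics such as the mean transit time and its spread can be extracted from a tracer dilution curve by taking moments of the normalized curve.

      import numpy as np

      t = np.linspace(0.0, 20.0, 2001)                # time (s)
      dt = t[1] - t[0]
      c = t ** 2 * np.exp(-t / 1.5)                   # synthetic tracer concentration curve
      c /= c.sum() * dt                               # normalize to a transit-time density

      mtt = np.sum(t * c) * dt                        # mean transit time (first moment)
      spread = np.sqrt(np.sum((t - mtt) ** 2 * c) * dt)   # heterogeneity of transit times
      print(f"mean transit time = {mtt:.2f} s, spread = {spread:.2f} s")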

  9. A GIS-based time-dependent seismic source modeling of Northern Iran

    NASA Astrophysics Data System (ADS)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 by 400 kilometers around Tehran. Previous studies and reports were reviewed to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. The completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
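
    For illustration only (with made-up recurrence parameters, not those derived for Northern Iran), the sketch below contrasts the conditional event probability from a renewal model with a lognormal recurrence distribution against the time-independent Poisson estimate.

      import numpy as np
      from scipy import stats

      mean_recurrence = 300.0          # mean recurrence interval (years), illustrative
      cov = 0.5                        # aperiodicity (coefficient of variation)
      elapsed, window = 250.0, 50.0    # years since last event, forecast window

      sigma = np.sqrt(np.log(1.0 + cov ** 2))              # lognormal matched to mean and cov
      mu = np.log(mean_recurrence) - 0.5 * sigma ** 2
      F = lambda t: stats.lognorm.cdf(t, s=sigma, scale=np.exp(mu))

      p_renewal = (F(elapsed + window) - F(elapsed)) / (1.0 - F(elapsed))
      p_poisson = 1.0 - np.exp(-window / mean_recurrence)
      print(f"time-dependent (renewal): {p_renewal:.3f}   time-independent (Poisson): {p_poisson:.3f}")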

  10. On the electrostatic deceleration of argon atoms in high Rydberg states by time-dependent inhomogeneous electric fields

    NASA Astrophysics Data System (ADS)

    Vliegen, E.; Merkt, F.

    2005-06-01

    Argon atoms in a pulsed supersonic expansion are prepared in selected Stark components of Rydberg states with effective principal quantum number in the range n* = 15-25. When traversing regions of inhomogeneous electric fields, these atoms get accelerated or decelerated depending on whether the Stark states are low- or high-field seeking states. Using a compact electrode design, which enables the application of highly inhomogeneous and time-dependent electric fields, the Rydberg atoms experience kinetic energy changes of up to 1.2 × 10^-21 J (i.e. 60 cm^-1 in spectroscopic units) in a single acceleration/deceleration stage of 3 mm length. The resulting differences in the velocities of the low- and high-field seeking states are large enough that the corresponding distributions of times of flight to the Rydberg particle detector are fully separated. As a result, efficient spectral searches of the Rydberg states best suited for acceleration/deceleration experiments are possible. Numerical simulations of the particle trajectories are used to analyse the time-of-flight distributions and to optimize the time dependence of the inhomogeneous electric fields. The decay of the Rydberg states by fluorescence, collisions and transitions induced by black-body radiation takes place on a timescale long enough not to interfere significantly with the deceleration during the first ~5 µs.

  11. Joint distribution of temperature and precipitation in the Mediterranean, using the Copula method

    NASA Astrophysics Data System (ADS)

    Lazoglou, Georgia; Anagnostopoulou, Christina

    2018-03-01

    This study analyses the temperature and precipitation dependence among stations in the Mediterranean. The first station group is located in the eastern Mediterranean (EM) and includes two stations, Athens and Thessaloniki, while the western (WM) group includes Malaga and Barcelona. The data were organized into two time periods, the hot-dry period and the cold-wet period, each comprising five months. The analysis is based on a new statistical technique in climatology: the Copula method. Firstly, the calculation of the Kendall tau correlation index showed that temperatures among stations are dependent during both time periods, whereas precipitation presents dependency only between the stations located in the EM or the WM and only during the cold-wet period. Accordingly, the marginal distributions were calculated for each studied station, as they are further used by the copula method. Finally, several copula families, both Archimedean and Elliptical, were tested in order to choose the most appropriate one to model the relation of the studied data sets. Consequently, this study succeeds in modelling the dependence of the main climate parameters (temperature and precipitation) with the Copula method. The Frank copula was identified as the best family to describe the joint distribution of temperature for the majority of station groups. For precipitation, the best copula families are BB1 and Survival Gumbel. Using the probability distribution diagrams, the probability of a combination of temperature and precipitation values between stations is estimated.
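
    The sketch below reproduces the two basic steps in miniature on synthetic station data: a Kendall tau test of dependence, followed by a copula-based joint probability estimate. A Gaussian copula is used here only because it is simple to sample; the study itself selected the Frank, BB1 and Survival Gumbel families.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      temp_a = rng.normal(25.0, 3.0, 500)                    # synthetic temperatures, station A
      temp_b = 0.8 * temp_a + rng.normal(0.0, 2.0, 500)      # dependent synthetic station B

      tau, p_value = stats.kendalltau(temp_a, temp_b)
      print(f"Kendall tau = {tau:.2f} (p = {p_value:.1e})")

      # Gaussian copula: empirical ranks -> uniforms -> normal scores -> correlation
      u = stats.rankdata(temp_a) / (len(temp_a) + 1)
      v = stats.rankdata(temp_b) / (len(temp_b) + 1)
      z = stats.norm.ppf(np.column_stack([u, v]))
      rho = np.corrcoef(z.T)[0, 1]

      # joint probability that both stations exceed their own 90th percentiles
      sims = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=100_000)
      q90 = stats.norm.ppf(0.9)
      print("P(both above 90th percentile):", np.mean((sims[:, 0] > q90) & (sims[:, 1] > q90)))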

  12. A Dislocation Model of Seismic Wave Attenuation and Micro-creep in the Earth: Harold Jeffreys and the Rheology of the Solid Earth

    NASA Astrophysics Data System (ADS)

    Karato, S.

    A microphysical model of seismic wave attenuation is developed to provide a physical basis for interpreting the temperature and frequency dependence of seismic wave attenuation. The model is based on the dynamics of dislocation motion in minerals with a high Peierls stress. It is proposed that most seismic wave attenuation occurs through the migration of geometrical kinks (micro-glide) and/or the nucleation/migration of an isolated pair of kinks (Bordoni peak), whereas long-term plastic deformation involves the continuing nucleation and migration of kinks (macro-glide). Kink migration is much easier than kink nucleation, and this provides a natural explanation for the vast difference in dislocation mobility between seismic and geological time scales. The frequency and temperature dependences of attenuation depend on the geometry and dynamics of dislocation motion, both of which affect the distribution of relaxation times. The distribution of relaxation times is largely controlled by the distribution of the distance between pinning points of dislocations, L, and the observed frequency dependence of Q, Q ∝ ω^α, is shown to require a distribution function P(L) ∝ L^-m with m = 4 - 2α. The activation energy of Q^-1 in minerals with a high Peierls stress corresponds to that for kink nucleation and is similar to that of long-term creep. The observed large lateral variation in Q^-1 strongly suggests that Q^-1 in the mantle is frequency dependent. Micro-deformation with high dislocation mobility will (temporarily) cease when all the geometrical kinks are exhausted. For a typical dislocation density of 10^8 m^-2, transient creep with small viscosity related to seismic wave attenuation will persist up to a strain of 10^-6; thus even a small-strain (~10^-6 to 10^-4) process such as post-glacial rebound is only marginally affected by this type of anelastic relaxation. At longer time scales the continuing nucleation of kinks becomes important and enables indefinitely large strain, i.e. steady-state creep, causing viscous behavior.

  13. Spatial distribution of Cherenkov light from cascade showers in water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khomyakov, V. A., E-mail: VAKhomyakov@mephi.ru; Bogdanov, A. G.; Kindin, V. V.

    2016-12-15

    The spatial distribution of the Cherenkov light generated by cascade showers is analyzed using the NEVOD Cherenkov water detector. The dependence of the Cherenkov light intensity on the depth of shower development at various distances from the shower axis is investigated for the first time. The experimental data are compared with the Cherenkov light distributions predicted by various models for the scattering of cascade particles.

  14. Teachers' Perceptions of the Relationship between Inclusive Education and Distributed Leadership in Two Primary Schools in Slovakia and New South Wales (Australia)

    ERIC Educational Resources Information Center

    Miškolci, Jozef; Armstrong, Derrick; Spandagou, Ilektra

    2016-01-01

    The academic literature on the practice of inclusive education presents diverse and at times contradictory perspectives in how it is connected to practices of distributed leadership. Depending on the approach, on the one hand, inclusive educational practice may enable distributed school leadership, while on the other hand, it may allow for…

  15. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    PubMed Central

    Albers, D. J.; Hripcsak, George

    2012-01-01

    A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database. PMID:22536009
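
    A minimal histogram-based sketch of the estimator and of the bias approximation described above is given below; the AR(1) surrogate series, bin count and lags are assumptions for illustration and merely stand in for the Lorenz and glucose data.

      import numpy as np

      def delayed_mi(x, lag, bins=32):
          """Mutual information (nats) between x(t) and x(t + lag), histogram estimator."""
          a, b = x[:-lag], x[lag:]
          pxy, _, _ = np.histogram2d(a, b, bins=bins)
          pxy /= pxy.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

      rng = np.random.default_rng(8)
      x = np.zeros(50_000)                          # autocorrelated AR(1) surrogate series
      for i in range(1, x.size):
          x[i] = 0.95 * x[i - 1] + rng.normal()

      mi_short = delayed_mi(x, lag=10)
      bias = delayed_mi(x, lag=20_000)              # points effectively independent at this lag
      print(f"MI(lag=10) = {mi_short:.3f} nats, bias estimate = {bias:.3f} nats")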

  16. On the similarity of theories of anelastic and scattering attenuation

    USGS Publications Warehouse

    Wennerberg, Leif; Frankel, Arthur D.

    1989-01-01

    We point out basic parallels between theories of anelastic and scattering attenuation. We consider approximations to scattering effects presented by O'Doherty and Anstey (1971), Sato (1982), and Wu (1982). We use the linear theory of anelasticity. We note that the frequency dependence of Q can be related to a distribution of scales of physical properties of the medium. The frequency dependence of anelastic Q is related to the distribution of relaxation times in exactly the same manner as the frequency dependence of scattering Q is related to the distribution of scatterer sizes. Thus, the well-known difficulty of separating scattering from intrinsic attenuation is seen from this point of view as a consequence of the fact that certain observables can be interpreted by identical equations resulting from either of two credible physical theories describing fundamentally different processes. -from Authors

  17. Transcriptional dynamics with time-dependent reaction rates

    NASA Astrophysics Data System (ADS)

    Nandi, Shubhendu; Ghosh, Anandamohan

    2015-02-01

    Transcription is the first step in the process of gene regulation that controls cell response to varying environmental conditions. Transcription is a stochastic process, involving synthesis and degradation of mRNAs, that can be modeled as a birth-death process. We consider a generic stochastic model, where the fluctuating environment is encoded in the time-dependent reaction rates. We obtain an exact analytical expression for the mRNA probability distribution and are able to analyze the response for arbitrary time-dependent protocols. Our analytical results and stochastic simulations confirm that the transcriptional machinery primarily acts as a low-pass filter. We also show that, depending on the system parameters, the mRNA levels in a cell population can show synchronous/asynchronous fluctuations and can deviate from Poisson statistics.
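
    A small sketch of the low-pass behaviour is given below, under the standard assumption (not stated in the abstract) that for a birth-death model with synthesis rate k(t) and degradation rate g(t), an initially empty or Poisson mRNA copy number stays Poisson with mean m(t) obeying dm/dt = k(t) - g(t) m; slow modulations of k(t) are transmitted while fast ones are attenuated. All rates are illustrative.

      import numpy as np

      def mean_mrna(k, g, t, m0=0.0):
          """Euler integration of dm/dt = k(t) - g(t) * m."""
          m = np.empty_like(t)
          m[0] = m0
          for i in range(1, t.size):
              dt = t[i] - t[i - 1]
              m[i] = m[i - 1] + dt * (k(t[i - 1]) - g(t[i - 1]) * m[i - 1])
          return m

      t = np.linspace(0.0, 50.0, 5001)
      g = lambda s: 1.0                                       # degradation rate (1/time)
      slow = mean_mrna(lambda s: 10.0 * (1.0 + 0.5 * np.sin(0.2 * s)), g, t)
      fast = mean_mrna(lambda s: 10.0 * (1.0 + 0.5 * np.sin(5.0 * s)), g, t)

      # relative amplitude of the transmitted oscillation (input modulation is 0.5)
      print("slow drive:", (slow[2000:].max() - slow[2000:].min()) / 20.0)
      print("fast drive:", (fast[2000:].max() - fast[2000:].min()) / 20.0)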

  18. Cryptographic robustness of a quantum cryptography system using phase-time coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molotkov, S. N.

    2008-01-15

    A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.

  19. Persistent-random-walk approach to anomalous transport of self-propelled particles

    NASA Astrophysics Data System (ADS)

    Sadjadi, Zeinab; Shaebani, M. Reza; Rieger, Heiko; Santen, Ludger

    2015-06-01

    The motion of self-propelled particles is modeled as a persistent random walk. An analytical framework is developed that allows the derivation of exact expressions for the time evolution of arbitrary moments of the persistent walk's displacement. It is shown that the interplay of step length and turning angle distributions and self-propulsion produces various signs of anomalous diffusion at short time scales and asymptotically a normal diffusion behavior with a broad range of diffusion coefficients. The crossover from the anomalous short-time behavior to the asymptotic diffusion regime is studied and the parameter dependencies of the crossover time are discussed. Higher moments of the displacement distribution are calculated and analytical expressions for the time evolution of the skewness and the kurtosis of the distribution are presented.
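
    The sketch below simulates a two-dimensional persistent random walk with illustrative step-length and turning-angle distributions (not those analysed in the paper) and estimates the local scaling exponent of the mean-squared displacement, showing the crossover from the nearly ballistic short-time regime to asymptotic normal diffusion.

      import numpy as np

      rng = np.random.default_rng(10)
      walkers, steps = 2000, 1000
      theta = rng.uniform(0.0, 2.0 * np.pi, walkers)      # initial headings
      pos = np.zeros((walkers, 2))
      msd = np.empty(steps)

      for n in range(steps):
          theta += rng.normal(0.0, 0.3, walkers)          # narrow turning angles -> persistence
          step = rng.exponential(1.0, walkers)            # step-length distribution
          pos[:, 0] += step * np.cos(theta)
          pos[:, 1] += step * np.sin(theta)
          msd[n] = np.mean(np.sum(pos ** 2, axis=1))

      t = np.arange(1, steps + 1)
      alpha_early = np.polyfit(np.log(t[1:20]), np.log(msd[1:20]), 1)[0]
      alpha_late = np.polyfit(np.log(t[-200:]), np.log(msd[-200:]), 1)[0]
      print(f"MSD exponent: early = {alpha_early:.2f}, late = {alpha_late:.2f}")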

  20. Time dependence of 222Rn, 220Rn and their progenies' distributions in a diffusion chamber

    NASA Astrophysics Data System (ADS)

    Stevanovic, N.; Markovic, V. M.; Nikezic, D.

    2017-11-01

    Diffusion chamber with SSNTD (Solid State Nuclear Track Detector) placed inside is a passive detector for measuring the activity of 222Rn and 220Rn (radon and thoron) and their progenies. Calibration from detected alpha particle tracks to progeny activity is often acquired from theoretical models. One common assumption related to these models found in literature is that concentrations of 222Rn and 220Rn at the entrance of a chamber are constant during the exposure. In this paper, concentrations of 222Rn and 220Rn at the entrance of the chamber are taken to be variable with time, which is actually the case in reality. Therefore, spatial distributions of 222Rn and 220Rn and their progenies inside the diffusion chamber should be time dependent. Variation of 222Rn and 220Rn concentrations on the entrance of the chamber was modeled on the basis of true measurements. Diffusion equations in cylindrical coordinates were solved using FDM (Finite Difference Method) to obtain spatial distributions as functions of time. It was shown that concentrations of 222Rn, 220Rn and their progenies were not homogeneously distributed in the chamber. Due to variable 222Rn and 220Rn concentrations at the entrance of the chamber, steady state (the case when concentration of 222Rn, 220Rn and their progenies inside the chamber remains unchanged with time) could not be reached. Deposition of progenies on the chamber walls was considered and it was shown that distributions of deposited progenies were not uniform over walls' surface.

  1. Quantifying the distribution of tracer discharge from boreal catchments under transient flow using the kinematic pathway approach

    NASA Astrophysics Data System (ADS)

    Soltani, S. S.; Cvetkovic, V.

    2017-07-01

    This study focuses on solute discharge from boreal catchments with a relatively shallow groundwater table and topography-driven groundwater flow. We explore whether a simplified semianalytical approach can be used for predictive modeling of the statistical distribution of tracer discharge. The approach is referred to as the "kinematic pathways approach" (KPA). This approach uses hydrological and tracer inputs as well as topographical and hydrogeological information; the latter concerns the average aquifer depth to the less permeable bedrock. A characteristic velocity of water flow through the catchment is further obtained from the overall water balance in the catchment. For the waterborne tracer transport through the catchment, morphological dispersion is accounted for by topographical analysis of the distribution of pathway lengths to the catchment outlet. Macrodispersion is accounted for heuristically by assuming an effective Péclet number. Distributions of water travel times through the catchment reflect the dispersion on both levels and are derived in both a forward mode (transit time from input to outlet) and a backward mode (water age upon arrival at the outlet). The forward distribution of water travel times is further used for tracer discharge modeling by convolution. The approach is applied to modeling of a 23 year long chloride data series for a specific catchment, Kringlan (Sweden), and to generic modeling to better understand the dependence of the tracer discharge distribution on different dispersion aspects. The KPA is found to provide reasonable estimates of the tracer discharge distribution, and particularly of extreme values, depending on the method used to determine the pathway length distribution. As a possible alternative analytical model of tracer transport through a catchment, the reservoir approach generally results in large tracer dispersion. This implies that tracer discharge distributions obtained from a mixed reservoir approach and from KPA are only compatible under large dispersion conditions.
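
    The convolution step of the approach can be sketched as below, where the forward travel-time distribution is replaced by an assumed gamma density and the chloride input by white noise around a mean; both are stand-ins chosen only to show how tracer discharge follows from the convolution.

      import numpy as np
      from scipy import stats

      dt = 1.0                                    # time step (e.g. weeks)
      t = np.arange(0, 520) * dt                  # travel-time axis, ten years

      mean_tt = 52.0                              # assumed mean travel time
      g = stats.gamma.pdf(t, a=4.0, scale=mean_tt / 4.0)   # assumed travel-time density
      g /= g.sum() * dt                           # normalize on the discrete grid

      rng = np.random.default_rng(11)
      cl_input = 5.0 + rng.normal(0.0, 1.0, t.size)        # synthetic chloride input series

      discharge = np.convolve(cl_input, g * dt)[: t.size]  # convolved tracer discharge
      print("input mean:", cl_input.mean(), " discharge mean (after spin-up):", discharge[200:].mean())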

  2. Estimation of value at risk in currency exchange rate portfolio using asymmetric GJR-GARCH Copula

    NASA Astrophysics Data System (ADS)

    Nurrahmat, Mohamad Husein; Noviyanti, Lienda; Bachrudin, Achmad

    2017-03-01

    In this study, we discuss the problem of measuring the risk in a portfolio based on value at risk (VaR) using an asymmetric GJR-GARCH copula. The approach is based on the consideration that the assumption of normality of the returns over time cannot be fulfilled, and that non-linear correlation in the dependence structure among the variables leads to inaccurate VaR estimates. Moreover, the leverage effect causes an asymmetric response of the dynamic variance and exposes a weakness of standard GARCH models, whose effect on the conditional variance is symmetric. Asymmetric GJR-GARCH models are used to filter the margins, while copulas are used to link them together into a multivariate distribution. We then use copulas to construct flexible multivariate distributions with different marginal and dependence structures, so that the joint portfolio distribution does not depend on the assumptions of normality and linear correlation. The VaR obtained by the analysis at the 95% confidence level is 0.005586. This VaR is derived from the best copula model, the Student's t copula with Student's t marginal distributions.

  3. Age, Dose, and Time-Dependency of Plasma and Tissue Distribution of Deltamethrin in Immature Rats

    EPA Science Inventory

    The major objective of this project was to characterize the systemic disposition of the pyrethroid, deltamethrin (DLT), in immature rats, with emphasis on the age-dependence of target organ (brain) dosimetry. Postnatal day (PND) 10, 21, and 40 male Sprague-Dawley rats received 0...

  4. On the synchrotron emission in kinetic simulations of runaway electrons in magnetic confinement fusion plasmas

    NASA Astrophysics Data System (ADS)

    Carbajal, L.; del-Castillo-Negrete, D.

    2017-12-01

    Developing avoidance or mitigation strategies for runaway electrons (REs) in magnetic confinement fusion (MCF) plasmas is of crucial importance for the safe operation of ITER. In order to develop these strategies, an accurate diagnostic capability that allows good estimates of the RE distribution function in these plasmas is needed. Synchrotron radiation (SR) of REs in MCF plasmas, besides being one of the main damping mechanisms for REs in the high-energy relativistic regime, is routinely used in current MCF experiments to infer the parameters of the RE energy and pitch-angle distribution functions. In the present paper we address the long-standing question of the relationships between different RE distribution functions and their corresponding synchrotron emission, simultaneously including full-orbit effects, information on the spectral and angular distribution of the SR of each electron, and the basic geometric optics of a camera. We study the spatial distribution of the SR on the poloidal plane, and the statistical properties of the expected value of the synchrotron spectra of REs. We observe a strong dependence of the synchrotron emission measured by the camera on the pitch-angle distribution of runaways, namely we find that crescent shapes of the spatial distribution of the SR as measured by the camera relate to RE distributions with small pitch angles, while ellipse shapes relate to distributions of runaways with larger pitch angles. A weak dependence of the synchrotron emission measured by the camera on the RE energy, the value of the q-profile at the edge, and the chosen range of wavelengths is observed. Furthermore, we find that oversimplifying the angular dependence of the SR changes the shape of the synchrotron spectra and overestimates its amplitude by approximately 20 times for avalanching runaways and by approximately 60 times for mono-energetic distributions of runaways.

  5. Enterprise Risk Management: The Way Ahead for DRDC within the DND Enterprise

    DTIC Science & Technology

    2010-03-01

    Taleb Distributions, the Hurst Exponent (to deal with long time events), Life Extinction Events, Zero-Infinity Dilemmas (which characterize the...Time dependent Hurst exponent in financial time series”, Physica A 344 (2004) 267-271 35. Yoav Ben-Shlomo and Diana Koh, “A Life Course Approach to

  6. Frequency distributions from birth, death, and creation processes.

    PubMed

    Bartley, David L; Ogden, Trevor; Song, Ruiguang

    2002-01-01

    The time-dependent frequency distribution of groups of individuals versus group size was investigated within a continuum approximation, assuming a simplified individual growth, death and creation model. The analogy of the system to a physical fluid exhibiting both convection and diffusion was exploited in obtaining various solutions to the distribution equation. A general solution was approximated through the application of a Green's function. More specific exact solutions were also found to be useful. The solutions were continually checked against the continuum approximation through extensive simulation of the discrete system. Over limited ranges of group size, the frequency distributions were shown to closely exhibit a power-law dependence on group size, as found in many realizations of this type of system, ranging from colonies of mutated bacteria to the distribution of surnames in a given population. As an example, the modeled distributions were successfully fit to the distribution of surnames in several countries by adjusting the parameters specifying growth, death and creation rates.
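
    A discrete toy simulation of the growth/death/creation process (with arbitrary rates, and far cruder than the continuum treatment above) is sketched below; over a limited range of sizes the resulting group-size frequencies decay in a roughly power-law fashion.

      import numpy as np

      rng = np.random.default_rng(6)
      growth, death, creation = 0.02, 0.015, 3.0   # per-capita rates and new groups per step
      groups = []
      for _ in range(2000):
          if groups:
              sizes = np.array(groups)
              sizes = sizes + rng.binomial(sizes, growth) - rng.binomial(sizes, death)
              groups = sizes[sizes > 0].tolist()
          groups += [1] * rng.poisson(creation)    # newly created groups start at size 1

      sizes = np.array(groups)
      for s in (1, 10, 100, 1000):
          print(f"groups with size >= {s}: {(sizes >= s).sum()}")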

  7. Astrophysical uncertainties on the local dark matter distribution and direct detection experiments

    NASA Astrophysics Data System (ADS)

    Green, Anne M.

    2017-08-01

    The differential event rate in weakly interacting massive particle (WIMP) direct detection experiments depends on the local dark matter density and velocity distribution. Accurate modelling of the local dark matter distribution is therefore required to obtain reliable constraints on the WIMP particle physics properties. Data analyses typically use a simple standard halo model which might not be a good approximation to the real Milky Way (MW) halo. We review observational determinations of the local dark matter density, circular speed and escape speed and also studies of the local dark matter distribution in simulated MW-like galaxies. We discuss the effects of the uncertainties in these quantities on the energy spectrum and its time and direction dependence. Finally, we conclude with an overview of various methods for handling these astrophysical uncertainties.

  8. On a two-phase Hele-Shaw problem with a time-dependent gap and distributions of sinks and sources

    NASA Astrophysics Data System (ADS)

    Savina, Tatiana; Akinyemi, Lanre; Savin, Avital

    2018-01-01

    A two-phase Hele-Shaw problem with a time-dependent gap describes the evolution of the interface, which separates two fluids sandwiched between two plates. The fluids have different viscosities. In addition to the change in the gap width of the Hele-Shaw cell, the interface is driven by the presence of some special distributions of sinks and sources located in both the interior and exterior domains. The effect of surface tension is neglected. Using the Schwarz function approach, we give examples of exact solutions when the interface belongs to a certain family of algebraic curves and the curves do not form cusps. The family of curves are defined by the initial shape of the free boundary.

  9. Modeling solar wind with boundary conditions from interplanetary scintillations

    DOE PAGES

    Manoharan, P.; Kim, T.; Pogorelov, N. V.; ...

    2015-09-30

    Interplanetary scintillations make it possible to create three-dimensional, time-dependent distributions of the solar wind velocity. Combined with the magnetic field observations in the solar photosphere, they help perform solar wind simulations in a genuinely time-dependent way. Interplanetary scintillation measurements from the Ooty Radio Astronomical Observatory in India provide directions to multiple stars and may assure better resolution of transient processes in the solar wind. In this paper, we present velocity distributions derived from Ooty observations and compare them with those obtained with the Wang-Sheeley-Arge (WSA) model. We also present our simulations of the solar wind flow from 0.1 AU to 1 AU with the boundary conditions based on both Ooty and WSA data.

  10. Time-dependent density functional theory (TD-DFT) coupled with reference interaction site model self-consistent field explicitly including spatial electron density distribution (RISM-SCF-SEDD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yokogawa, D., E-mail: d.yokogawa@chem.nagoya-u.ac.jp; Institute of Transformative Bio-Molecules

    2016-09-07

    The theoretical design of bright bio-imaging molecules is one of the most rapidly progressing approaches. However, because of the system size and the required computational accuracy, the number of theoretical studies is, to our knowledge, limited. To overcome these difficulties, we developed a new method based on the reference interaction site model self-consistent field explicitly including spatial electron density distribution and time-dependent density functional theory. We applied it to the calculation of indole and 5-cyanoindole in the ground and excited states in gas and solution phases. The changes in the optimized geometries were clearly explained with resonance structures and the Stokes shift was correctly reproduced.

  11. Power-law Exponent in Multiplicative Langevin Equation with Temporally Correlated Noise

    NASA Astrophysics Data System (ADS)

    Morita, Satoru

    2018-05-01

    Power-law distributions are ubiquitous in nature. Random multiplicative processes are a basic model for the generation of power-law distributions. For discrete-time systems, the power-law exponent is known to decrease as the autocorrelation time of the multiplier increases. However, for continuous-time systems, it is not yet clear how the temporal correlation affects the power-law behavior. Herein, we analytically investigated a multiplicative Langevin equation with colored noise. We show that the power-law exponent depends on the details of the multiplicative noise, in contrast to the case of discrete-time systems.

  12. Light propagation from fluorescent probes in biological tissues by coupled time-dependent parabolic simplified spherical harmonics equations

    PubMed Central

    Domínguez, Jorge Bouza; Bérubé-Lauzière, Yves

    2011-01-01

    We introduce a system of coupled time-dependent parabolic simplified spherical harmonic equations to model the propagation of both excitation and fluorescence light in biological tissues. We resort to a finite element approach to obtain the time-dependent profile of the excitation and the fluorescence light fields in the medium. We present results for cases involving two geometries in three dimensions: a homogeneous cylinder with an embedded fluorescent inclusion and a realistically shaped rodent with an embedded inclusion resembling an organ filled with a fluorescent probe. For the cylindrical geometry, we show the differences in the time-dependent fluorescence response for a point-like, a spherical, and a spherically Gaussian distributed fluorescent inclusion. From our results, we conclude that the model is able to describe the time-dependent excitation and fluorescent light transfer in small geometries with high absorption coefficients and in nondiffusive domains, as may be found in small animal diffuse optical tomography (DOT) and fluorescence DOT imaging. PMID:21483606

  13. Scaling and clustering effects of extreme precipitation distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Li, Jianfeng

    2012-08-01

    One of the impacts of climate change and human activities on the hydrological cycle is the change in the precipitation structure. Closely related to the precipitation structure are two characteristics: the volume (m) of wet periods (WPs) and the time interval between WPs or waiting time (t). Using daily precipitation data for a period of 1960-2005 from 590 rain gauge stations in China, these two characteristics are analyzed, involving scaling and clustering of precipitation episodes. Our findings indicate that m and t follow similar probability distribution curves, implying that precipitation processes are controlled by similar underlying thermo-dynamics. Analysis of conditional probability distributions shows a significant dependence of m and t on their previous values of similar volumes, and the dependence tends to be stronger when m is larger or t is longer. It indicates that a higher probability can be expected when high-intensity precipitation is followed by precipitation episodes with similar precipitation intensity and longer waiting time between WPs is followed by the waiting time of similar duration. This result indicates the clustering of extreme precipitation episodes and severe droughts or floods are apt to occur in groups.

  14. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems which are placed on test at time zero, function for a period and die at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subject to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte-Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.

  15. Bayesian analysis of the kinetics of quantal transmitter secretion at the neuromuscular junction.

    PubMed

    Saveliev, Anatoly; Khuzakhmetova, Venera; Samigullin, Dmitry; Skorinkin, Andrey; Kovyazina, Irina; Nikolsky, Eugeny; Bukharaeva, Ellya

    2015-10-01

    The timing of transmitter release from nerve endings is considered nowadays as one of the factors determining the plasticity and efficacy of synaptic transmission. In the neuromuscular junction, the moments of release of individual acetylcholine quanta are related to the synaptic delays of uniquantal endplate currents recorded under conditions of lowered extracellular calcium. Using Bayesian modelling, we performed a statistical analysis of synaptic delays in mouse neuromuscular junction with different patterns of rhythmic nerve stimulation and when the entry of calcium ions into the nerve terminal was modified. We have obtained a statistical model of the release timing which is represented as the summation of two independent statistical distributions. The first of these is the exponentially modified Gaussian distribution. The mixture of normal and exponential components in this distribution can be interpreted as a two-stage mechanism of early and late periods of phasic synchronous secretion. The parameters of this distribution depend on both the stimulation frequency of the motor nerve and the calcium ions' entry conditions. The second distribution was modelled as quasi-uniform, with parameters independent of nerve stimulation frequency and calcium entry. Two different probability density functions for the distribution of synaptic delays suggest at least two independent processes controlling the time course of secretion, one of them potentially involving two stages. The relative contribution of these processes to the total number of mediator quanta released depends differently on the motor nerve stimulation pattern and on calcium ion entry into nerve endings.
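
    The exponentially modified Gaussian component of the model can be handled directly with scipy, as in the hedged sketch below; the synthetic "delays" merely mimic an early Gaussian stage plus a late exponential stage and are not the recorded endplate-current data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(12)
      delays = rng.normal(0.5, 0.1, 5000) + rng.exponential(0.3, 5000)   # ms, illustrative

      K, loc, scale = stats.exponnorm.fit(delays)     # ex-Gaussian fit
      print(f"Gaussian stage: mean ~ {loc:.2f} ms, sd ~ {scale:.2f} ms")
      print(f"exponential stage: tau ~ {K * scale:.2f} ms")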

  16. Distributed synchronization of networked drive-response systems: A nonlinear fixed-time protocol.

    PubMed

    Zhao, Wen; Liu, Gang; Ma, Xi; He, Bing; Dong, Yunfeng

    2017-11-01

    The distributed synchronization of networked drive-response systems is investigated in this paper. A novel nonlinear protocol is proposed to ensure that the tracking errors converge to zero in a fixed time. In comparison with previous synchronization methods, the present method considers more practical conditions, and the synchronization time does not depend on arbitrary initial conditions but can be pre-assigned offline according to the task assignment. Finally, the feasibility and validity of the presented protocol have been illustrated by a numerical simulation. Copyright © 2017. Published by Elsevier Ltd.

  17. Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, Chen; Sichitiu, Mihail L.

    Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous research on the theoretical analysis of the contact time distribution for random walk (RW) models assumes that contact events can be modeled as either consecutive random walks or direct traversals, which are two extreme cases of random walk, and thus reaches two different conclusions. In this paper we conduct a comprehensive study of this topic in the hope of bridging the gap between the two extremes. The conclusions from the two extreme cases result in a power-law or an exponential tail in the contact time distribution, respectively. However, we show that the actual distribution varies between the two extremes: a power-law-sub-exponential dichotomy, whose transition point depends on the average flight duration. Through simulation results we show that this conclusion also applies to random waypoint.

  18. X-ray variability of Pleiades late-type stars as observed with the ROSAT-PSPC

    NASA Astrophysics Data System (ADS)

    Marino, A.; Micela, G.; Peres, G.; Sciortino, S.

    2003-08-01

    We present a comprehensive analysis of the X-ray variability of the late-type (dF7-dM) Pleiades stars detected in all ROSAT-PSPC observations; X-ray variations on short (hours) and medium (months) time scales have been explored. We have grouped the stars in two samples: 89 observations of 42 distinct dF7-dK2 stars and 108 observations of 61 dK3-dM stars. The Kolmogorov-Smirnov test applied to all X-ray photon time series shows that the percentage of cases of significant variability is quite similar in both samples, suggesting that the presence of variability does not depend on mass for the time scales and mass range explored. The comparison between the Time X-ray Amplitude Distribution functions (XAD) of the set of dF7-dK2 and of the dK3-dM stars shows that, on short time scales, dK3-dM stars show larger variations than dF7-dK2 stars. A subsample of eleven dF7-dK2 and eleven dK3-dM Pleiades stars allows the study of variability on longer time scales: we found that variability on medium-to-long time scales is relatively more common among dF7-dK2 stars than among dK3-dM ones. For both dF7-dK2 Pleiades stars and dF7-dK2 field stars, the variability on short time scales depends on Lx, while this dependence has not been observed among dK3-dM stars. It may be that the variability among dK3-dM stars is dominated by flares that have a similar luminosity distribution for stars of different Lx, while the flare distribution in dF7-dK2 stars may depend on X-ray luminosity. The lowest-mass stars show significant rapid variability (flares?) and no evidence of rotational modulation or cycles. On the contrary, dF7-dK2 Pleiades stars show both rapid variability and variations on longer time scales, likely associated with rotational modulation or cycles.
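
    The variability test itself is simple to reproduce; the sketch below applies a Kolmogorov-Smirnov test to a synthetic photon arrival-time series (a constant count rate plus an injected burst), checking consistency with uniform arrival over the exposure. The photon numbers and burst are invented for illustration.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(18)
      exposure = 10_000.0                               # exposure length (s)
      arrivals = np.sort(np.concatenate([
          rng.uniform(0.0, exposure, 400),              # quiescent, constant-rate photons
          rng.uniform(4000.0, 4500.0, 80),              # flare-like burst of extra photons
      ]))

      stat, p = stats.kstest(arrivals / exposure, "uniform")
      print(f"KS statistic = {stat:.3f}, p-value = {p:.2e} (small p => variable)")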

  19. Coherent distributions for the rigid rotator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigorescu, Marius

    2016-06-15

    Coherent solutions of the classical Liouville equation for the rigid rotator are presented as positive phase-space distributions localized on the Lagrangian submanifolds of Hamilton-Jacobi theory. These solutions become Wigner-type quasiprobability distributions by a formal discretization of the left-invariant vector fields from their Fourier transform in angular momentum. The results are consistent with the usual quantization of the anisotropic rotator, but the expected value of the Hamiltonian contains a finite "zero point" energy term. It is shown that during the time when a quasiprobability distribution evolves according to the Liouville equation, the related quantum wave function should satisfy the time-dependent Schrödinger equation.

  20. Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.

    PubMed

    Thiébaut, Anne C M; Bénichou, Jacques

    2004-12-30

    Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data. 2004 John Wiley & Sons, Ltd.

  1. ON CONTINUOUS-REVIEW (S-1,S) INVENTORY POLICIES WITH STATE-DEPENDENT LEADTIMES,

    DTIC Science & Technology

    INVENTORY CONTROL, *REPLACEMENT THEORY), MATHEMATICAL MODELS, LEAD TIME , MANAGEMENT ENGINEERING, DISTRIBUTION FUNCTIONS, PROBABILITY, QUEUEING THEORY, COSTS, OPTIMIZATION, STATISTICAL PROCESSES, DIFFERENCE EQUATIONS

  2. Comparison of a hybrid medication distribution system to simulated decentralized distribution models.

    PubMed

    Gray, John P; Ludwig, Brad; Temple, Jack; Melby, Michael; Rough, Steve

    2013-08-01

    The results of a study to estimate the human resource and cost implications of changing the medication distribution model at a large medical center are presented. A two-part study was conducted to evaluate alternatives to the hospital's existing hybrid distribution model (64% of doses dispensed via cart fill and 36% via automated dispensing cabinets [ADCs]). An assessment of nurse, pharmacist, and pharmacy technician workloads within the hybrid system was performed through direct observation, with time standards calculated for each dispensing task; similar time studies were conducted at a comparator hospital with a decentralized medication distribution system involving greater use of ADCs. The time study data were then used in simulation modeling of alternative distribution scenarios: one involving no use of cart fill, one involving no use of ADCs, and one heavily dependent on ADC dispensing (89% via ADC and 11% via cart fill). Simulation of the base-case and alternative scenarios indicated that as the modeled percentage of doses dispensed from ADCs rose, the calculated pharmacy technician labor requirements decreased, with a proportionately greater increase in the nursing staff workload. Given that nurses are a higher-cost resource than pharmacy technicians, the projected human resource opportunity cost of transitioning from the hybrid system to a decentralized system similar to the comparator facility's was estimated at $229,691 per annum. Based on the simulation results, it was decided that a transition from the existing hybrid medication distribution system to a more ADC-dependent model would result in an unfavorable shift in staff skill mix and corresponding human resource costs at the medical center.

  3. Mixed and Mixture Regression Models for Continuous Bounded Responses Using the Beta Distribution

    ERIC Educational Resources Information Center

    Verkuilen, Jay; Smithson, Michael

    2012-01-01

    Doubly bounded continuous data are common in the social and behavioral sciences. Examples include judged probabilities, confidence ratings, derived proportions such as percent time on task, and bounded scale scores. Dependent variables of this kind are often difficult to analyze using normal theory models because their distributions may be quite…

  4. Quasar Astrophysics with the Space Interferometry Mission

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen; Wehrle, Ann; Meier, David; Jones, Dayton; Piner, Glenn

    2007-01-01

    Optical astrometry of quasars and active galaxies can provide key information on the spatial distribution and variability of emission in compact nuclei. The Space Interferometry Mission (SIM PlanetQuest) will have the sensitivity to measure a significant number of quasar positions at the microarcsecond level. SIM will be very sensitive to astrometric shifts for objects as faint as V = 19. A variety of AGN phenomena are expected to be visible to SIM on these scales, including time and spectral dependence in position offsets between accretion disk and jet emission. These represent unique data on the spatial distribution and time dependence of quasar emission. It will also probe the use of quasar nuclei as fundamental astrometric references. Comparisons between the time-dependent optical photocenter position and VLBI radio images will provide further insight into the jet emission mechanism. Observations will be tailored to each specific target and science question. SIM will be able to distinguish spatially between jet and accretion disk emission; and it can observe the cores of galaxies potentially harboring binary supermassive black holes resulting from mergers.

  5. Path-integral formalism for stochastic resetting: Exactly solved examples and shortcuts to confinement

    NASA Astrophysics Data System (ADS)

    Roldán, Édgar; Gupta, Shamik

    2017-08-01

    We study the dynamics of overdamped Brownian particles diffusing in conservative force fields and undergoing stochastic resetting to a given location at a generic space-dependent rate of resetting. We present a systematic approach involving path integrals and elements of renewal theory that allows us to derive analytical expressions for a variety of statistics of the dynamics such as (i) the propagator prior to first reset, (ii) the distribution of the first-reset time, and (iii) the spatial distribution of the particle at long times. We apply our approach to several representative and hitherto unexplored examples of resetting dynamics. A particularly interesting example for which we find analytical expressions for the statistics of resetting is that of a Brownian particle trapped in a harmonic potential with a rate of resetting that depends on the instantaneous energy of the particle. We find that using energy-dependent resetting processes is more effective in achieving spatial confinement of Brownian particles on a faster time scale than performing quenches of parameters of the harmonic potential.
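
    The energy-dependent case can be explored numerically as in the sketch below: an overdamped particle in a harmonic trap is reset to the origin at a rate proportional to its instantaneous potential energy. All parameter values are illustrative; the paper's results themselves are analytical.

      import numpy as np

      rng = np.random.default_rng(13)
      D, k, r0 = 1.0, 1.0, 0.5          # diffusion constant, trap stiffness, resetting prefactor
      dt, n_steps, n_particles = 1e-3, 10_000, 2000

      x = np.full(n_particles, 2.0)     # start away from the trap centre
      for _ in range(n_steps):
          rate = r0 * 0.5 * k * x ** 2                          # energy-dependent resetting rate
          reset = rng.random(n_particles) < rate * dt
          drift = x - k * x * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=n_particles)
          x = np.where(reset, 0.0, drift)

      # without resetting the stationary spread would be sqrt(D/k) = 1; resetting confines it further
      print("stationary spread (std of x):", x.std())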

  6. Multielectron effects in the photoelectron momentum distribution of noble-gas atoms driven by visible-to-infrared-frequency laser pulses: A time-dependent density-functional-theory approach

    NASA Astrophysics Data System (ADS)

    Murakami, Mitsuko; Zhang, G. P.; Chu, Shih-I.

    2017-05-01

    We present the photoelectron momentum distributions (PMDs) of helium, neon, and argon atoms driven by a linearly polarized, visible (527-nm) or near-infrared (800-nm) laser pulse (20 optical cycles in duration) based on the time-dependent density-functional theory (TDDFT) under the local-density approximation with a self-interaction correction. A set of time-dependent Kohn-Sham equations for all electrons in an atom is numerically solved using the generalized pseudospectral method. An effect of the electron-electron interaction driven by a visible laser field is not recognizable in the helium and neon PMDs except for a reduction of the overall photoelectron yield, but there is a clear difference between the PMDs of an argon atom calculated with the frozen-core approximation and TDDFT, indicating an interference of its M -shell wave functions during the ionization. Furthermore, we find that the PMDs of degenerate p states are well separated in intensity when driven by a near-infrared laser field, so that the single-active-electron approximation can be adopted safely.

  7. [Unplanned extubation in the ICU, and the relevance of non-patient-dependent variables to the quality of care].

    PubMed

    González-Castro, A; Peñasco, Y; Blanco, C; González-Fernández, C; Domínguez, M J; Rodríguez-Borregán, J C

    2014-01-01

    To evaluate, over one full year, the magnitude of unplanned extubation, looking for variables not dependent on the patient. Prospective, observational case-control study in a mixed intensive care unit of a tertiary hospital. Patients on mechanical ventilation for more than 24 hours who had an episode of unplanned extubation were considered cases. Case variables were collected prospectively: the time of the unplanned extubation (collection time), the box where the patient was admitted, the presence and type of physical restraint, development of ventilator-associated pneumonia (VAP), and death. There were 17 unplanned extubations in 15 patients, or 1.21 unplanned extubations per 100 days of mechanical ventilation (MV). The unplanned extubations had an inhomogeneous spatial distribution across boxes. The time distribution of cases differed significantly from that of controls (P=.02). The comparative analysis between cases and controls showed increased mortality, longer ICU stay, longer hospital stay, and increased risk of VAP when patients suffered an episode of unplanned extubation. Unplanned extubation occurs most frequently within a given time slot of the day, the spatial location of the patient may play a role, and it occurs most often in patients who are being weaned from mechanical ventilation and who more often develop VAP. Copyright © 2014 SECA. Published by Elsevier Espana. All rights reserved.

  8. Inverse statistics in the foreign exchange market

    NASA Astrophysics Data System (ADS)

    Jensen, M. H.; Johansen, A.; Petroni, F.; Simonsen, I.

    2004-09-01

    We investigate intra-day foreign exchange (FX) time series using the inverse statistic analysis developed by Simonsen et al. (Eur. Phys. J. 27 (2002) 583) and Jensen et al. (Physica A 324 (2003) 338). Specifically, we study the time-averaged distributions of waiting times needed to obtain a certain increase (decrease) ρ in the price of an investment. The analysis is performed for the Deutsche Mark (DM) against the US dollar for the full year of 1998, but similar results are obtained for the Japanese Yen against the US dollar. With high statistical significance, the presence of “resonance peaks” in the waiting time distributions is established. Such peaks are a consequence of the trading habits of the market participants, as they are not present in the corresponding tick (business) waiting time distributions. Furthermore, a new stylized fact is observed for the (normalized) waiting time distribution in the form of a power-law pdf. This result is achieved by rescaling the physical waiting time by the corresponding tick time, thereby partially removing scale-dependent features of the market activity.
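
    The inverse-statistics quantity studied here, the waiting time until an investment first gains a prescribed return ρ, is straightforward to compute from a price series. The sketch below applies it to a synthetic Gaussian log-price series standing in for the intra-day FX data; the series, the value of ρ, and the censoring horizon are illustrative assumptions, not the paper's data or parameters.

      import numpy as np

      rng = np.random.default_rng(1)

      # synthetic log-price series (a plain Gaussian random walk) standing in for the FX data
      log_price = np.cumsum(rng.normal(0.0, 1e-4, 20_000))

      def inverse_statistics(logp, rho, max_wait=2000):
          """Waiting time (in ticks) until the log-price first rises by rho, censored at max_wait."""
          waits = []
          n = len(logp)
          for i in range(n - 1):
              window = logp[i + 1:min(n, i + 1 + max_wait)]
              hit = np.argmax(window >= logp[i] + rho)     # first index reaching the target, 0 if never
              if window[hit] >= logp[i] + rho:
                  waits.append(hit + 1)
          return np.asarray(waits)

      waits = inverse_statistics(log_price, rho=5e-4)
      hist, edges = np.histogram(waits, bins=np.logspace(0, np.log10(2000), 30), density=True)
      print("fraction of start points that reach the gain:", len(waits) / len(log_price))
      print("most probable waiting time (bin center, ticks):",
            np.sqrt(edges[hist.argmax()] * edges[hist.argmax() + 1]))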

  9. Direction dependence of displacement time for two-fluid electroosmotic flow.

    PubMed

    Lim, Chun Yee; Lam, Yee Cheong

    2012-03-01

    Electroosmotic flow in which one fluid displaces another is commonly encountered in various microfluidic applications and experiments, for example, in the current monitoring technique used to determine the zeta potential of a microchannel. There is an experimentally observed anomaly in such flow, namely that the displacement time is flow-direction dependent, i.e., it depends on whether a high concentration fluid displaces a low concentration fluid, or vice versa. Thus, this investigation focuses on the displacement flow of two fluids with various concentration differences. The displacement time was determined experimentally with the current monitoring method. It is concluded that the time required for a high concentration solution to displace a low concentration solution is smaller than the time required for a low concentration solution to displace a high concentration solution. The percentage displacement time difference increases with increasing concentration difference and is independent of the length or width of the channel and the voltage applied. Hitherto, no theoretical analysis or numerical simulation has been conducted to explain this phenomenon. A numerical model based on the finite element method was developed to explain the experimental observations. Simulations showed that the velocity profile and ion distribution deviate significantly from those of a single-fluid electroosmotic flow. The distortion of the ion distribution near the electrical double layer is responsible for the displacement time difference between the two flow directions. The trends obtained from simulations agree with the experimental findings.

  10. Direction dependence of displacement time for two-fluid electroosmotic flow

    PubMed Central

    Lim, Chun Yee; Lam, Yee Cheong

    2012-01-01

    Electroosmotic flow in which one fluid displaces another is commonly encountered in various microfluidic applications and experiments, for example, in the current monitoring technique used to determine the zeta potential of a microchannel. There is an experimentally observed anomaly in such flow, namely that the displacement time is flow-direction dependent, i.e., it depends on whether a high concentration fluid displaces a low concentration fluid, or vice versa. Thus, this investigation focuses on the displacement flow of two fluids with various concentration differences. The displacement time was determined experimentally with the current monitoring method. It is concluded that the time required for a high concentration solution to displace a low concentration solution is smaller than the time required for a low concentration solution to displace a high concentration solution. The percentage displacement time difference increases with increasing concentration difference and is independent of the length or width of the channel and the voltage applied. Hitherto, no theoretical analysis or numerical simulation has been conducted to explain this phenomenon. A numerical model based on the finite element method was developed to explain the experimental observations. Simulations showed that the velocity profile and ion distribution deviate significantly from those of a single-fluid electroosmotic flow. The distortion of the ion distribution near the electrical double layer is responsible for the displacement time difference between the two flow directions. The trends obtained from simulations agree with the experimental findings. PMID:22662083

  11. An EOQ model for weibull distribution deterioration with time-dependent cubic demand and backlogging

    NASA Astrophysics Data System (ADS)

    Santhi, G.; Karthikeyan, K.

    2017-11-01

    In this article we introduce an economic order quantity model with Weibull deterioration and a time-dependent cubic demand rate, where the holding cost is a linear function of time. Shortages are allowed in the inventory system and are partially or fully backlogged. The objective of this model is to minimize the total inventory cost by choosing the optimal order quantity and cycle length. The proposed model is illustrated with numerical examples, and a sensitivity analysis is performed to study the effect of changes in the parameters on the optimum solutions.
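
    The optimization described here, minimizing total inventory cost over the order quantity and cycle length, can be sketched numerically. The Python code below is a simplified illustration and not the paper's exact formulation: it assumes a cubic demand rate, a Weibull deterioration rate, a linear holding cost, no shortages, and arbitrary parameter values, and it minimizes the cost per unit time over the cycle length, with the order quantity following from the inventory balance.

      import numpy as np
      from scipy.integrate import trapezoid
      from scipy.optimize import minimize_scalar

      # Illustrative parameters (assumed values, not taken from the paper)
      a, b, c, d = 100.0, 10.0, 2.0, 0.5      # cubic demand rate D(t) = a + b*t + c*t**2 + d*t**3
      alpha, beta = 0.05, 2.0                 # Weibull deterioration rate theta(t) = alpha*beta*t**(beta-1)
      h0, h1 = 0.5, 0.1                       # linear holding cost h(t) = h0 + h1*t
      A, c_d = 250.0, 4.0                     # ordering cost per cycle, unit deterioration cost

      def demand(t):
          return a + b * t + c * t**2 + d * t**3

      def g(t):
          return alpha * t**beta              # integral of the deterioration rate

      def cycle_quantities(T, n=2000):
          """Order quantity and average cost per unit time for a cycle of length T (no shortages)."""
          t = np.linspace(0.0, T, n)
          integrand = np.exp(g(t)) * demand(t)
          cum = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))))
          inventory = np.exp(-g(t)) * (cum[-1] - cum)   # I(t) = exp(-g(t)) * int_t^T exp(g(u)) D(u) du
          Q = inventory[0]                              # units ordered at the start of the cycle
          holding = trapezoid((h0 + h1 * t) * inventory, t)
          deteriorated = Q - trapezoid(demand(t), t)    # ordered units not consumed by demand
          return Q, (A + holding + c_d * deteriorated) / T

      res = minimize_scalar(lambda T: cycle_quantities(T)[1], bounds=(0.05, 5.0), method="bounded")
      Q_opt, cost_opt = cycle_quantities(res.x)
      print(f"optimal cycle length T* = {res.x:.3f}, order quantity Q* = {Q_opt:.1f}, cost rate = {cost_opt:.2f}")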

  12. WE-AB-204-07: Spatiotemporal Distribution of the FDG PET Tracer in Solid Tumors: Contributions of Diffusion and Convection Mechanisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soltani, M; Sefidgar, M; Bazmara, H

    2015-06-15

    Purpose: In this study, a mathematical model is utilized to simulate FDG distribution in tumor tissue. In contrast to conventional compartmental modeling, tracer distributions across space and time are directly linked together (i.e. moving beyond ordinary differential equations (ODEs) to utilizing partial differential equations (PDEs) coupling space and time). The diffusion and convection transport mechanisms are both incorporated to model tracer distribution. We aimed to investigate the contributions of these two mechanisms on FDG distribution for various tumor geometries obtained from PET/CT images. Methods: FDG transport was simulated via a spatiotemporal distribution model (SDM). The model is based on a 5K compartmental model. We model the fact that tracer concentration in the second compartment (extracellular space) is modulated via convection and diffusion. Data from n=45 patients with pancreatic tumors as imaged using clinical FDG PET/CT imaging were analyzed, and geometrical information from the tumors including size, shape, and aspect ratios were classified. Tumors with varying shapes and sizes were assessed in order to investigate the effects of convection and diffusion mechanisms on FDG transport. Numerical methods simulating interstitial flow and solute transport in tissue were utilized. Results: We have shown the convection mechanism to depend on the shape and size of tumors whereas the diffusion mechanism is seen to exhibit low dependency on shape and size. Results show that the concentration distribution of FDG is relatively similar for the considered tumors, and that the diffusion mechanism of FDG transport significantly dominates the convection mechanism. The Peclet number, which gives the ratio of convection to diffusion rates, was shown to be of the order of 10^-3 for all considered tumors. Conclusion: We have demonstrated that even though convection leads to varying tracer distribution profiles depending on tumor shape and size, the domination of the diffusion phenomenon prevents these factors from modulating FDG distribution.

  13. Studies of the Intrinsic Complexities of Magnetotail Ion Distributions: Theory and Observations

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, Maha

    1998-01-01

    This year we have studied the relationship between the structure seen in measured distribution functions and the detailed magnetospheric configuration. Results from our recent studies using time-dependent large-scale kinetic (LSK) calculations are used to infer the sources of the ions in the velocity distribution functions measured by a single spacecraft (Geotail). Our results strongly indicate that the different ion sources and acceleration mechanisms producing a measured distribution function can explain this structure. Moreover, individual structures within distribution functions were traced back to single sources. We also confirmed the fractal nature of ion distributions.

  14. Time-dependent diffusive acceleration of test particles at shocks

    NASA Astrophysics Data System (ADS)

    Drury, L. O'C.

    1991-07-01

    A theoretical description is developed for the acceleration of test particles at a steady plane nonrelativistic shock. The mean and the variance of the acceleration-time distribution are expressed analytically for the condition under which the diffusion coefficient is arbitrarily dependent on position and momentum. The formula for an acceleration rate with arbitrary spatial variation in the diffusion coefficient developed by Drury (1987) is supplemented by a general theory of time dependence. An approximation scheme is developed by means of the analysis which permits the description of the spectral cutoff resulting from the finite shock age. The formulas developed in the analysis are also of interest for analyzing the observations of heliospheric shocks made from spacecraft.

  15. Experimental and numerical investigations of heat transfer and thermal efficiency of an infrared gas stove

    NASA Astrophysics Data System (ADS)

    Charoenlerdchanya, A.; Rattanadecho, P.; Keangin, P.

    2018-01-01

    An infrared gas stove is a type of low-pressure gas stove, and it has higher thermal efficiency than other domestic cooking stoves. This study computationally determines the water and air temperature distributions, the water and air velocity distributions, and the thermal efficiency of the infrared gas stove. The goal of this work is to investigate the effect of various pot diameters, i.e. 220 mm, 240 mm and 260 mm, on these distributions and on the thermal efficiency. The time-dependent heat transfer equation involving diffusion and convection, coupled with the time-dependent fluid dynamics equation, is implemented and solved using the finite element method (FEM). The computer simulation is validated against an experimental study, which uses the standard LPG test for low-pressure gas stoves in households (TIS No. 2312-2549). The findings reveal that the water and air temperatures increase with heating time and vary with the three pot diameters (220 mm, 240 mm and 260 mm). Similarly, the water and air velocities increase with heating time and vary with pot diameter. The maximum water temperature is highest for the 220 mm pot diameter, followed by the 240 mm and 260 mm diameters, whereas the maximum air temperature is highest for the 260 mm pot diameter, followed by the 240 mm and 220 mm diameters. The obtained results may provide a basis for improving the energy efficiency of infrared gas stoves and other equipment, and may help to reduce energy consumption.

  16. Time Dependent Density Functional Theory Calculations of Large Compact PAH Cations: Implications for the Diffuse Interstellar Bands

    NASA Technical Reports Server (NTRS)

    Weisman, Jennifer L.; Lee, Timothy J.; Salama, Farid; Head-Gordon, Martin; Kwak, Dochan (Technical Monitor)

    2002-01-01

    We investigate the electronic absorption spectra of several maximally pericondensed polycyclic aromatic hydrocarbon radical cations with time dependent density functional theory calculations. We find interesting trends in the vertical excitation energies and oscillator strengths for this series containing pyrene through circumcoronene, the largest species containing more than 50 carbon atoms. We discuss the implications of these new results for the size and structure distribution of the diffuse interstellar band carriers.

  17. A composite likelihood approach for spatially correlated survival data

    PubMed Central

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory. PMID:24223450

  18. A composite likelihood approach for spatially correlated survival data.

    PubMed

    Paik, Jane; Ying, Zhiliang

    2013-01-01

    The aim of this paper is to provide a composite likelihood approach to handle spatially correlated survival data using pairwise joint distributions. With e-commerce data, a recent question of interest in marketing research has been to describe spatially clustered purchasing behavior and to assess whether geographic distance is the appropriate metric to describe purchasing dependence. We present a model for the dependence structure of time-to-event data subject to spatial dependence to characterize purchasing behavior from the motivating example from e-commerce data. We assume the Farlie-Gumbel-Morgenstern (FGM) distribution and then model the dependence parameter as a function of geographic and demographic pairwise distances. For estimation of the dependence parameters, we present pairwise composite likelihood equations. We prove that the resulting estimators exhibit key properties of consistency and asymptotic normality under certain regularity conditions in the increasing-domain framework of spatial asymptotic theory.
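
    The pairwise composite likelihood described in this record can be written down compactly for the FGM copula, whose pair density is f_i f_j [1 + θ_ij (1 − 2F_i)(1 − 2F_j)]. The Python sketch below is a simplified illustration, not the authors' estimator: it assumes uncensored event times with a common exponential margin, a hypothetical tanh link tying θ_ij to pairwise geographic distance, and data simulated independently purely to exercise the machinery.

      import numpy as np
      from itertools import combinations
      from scipy.optimize import minimize
      from scipy.spatial.distance import pdist, squareform

      rng = np.random.default_rng(2)

      # Hypothetical data: n subjects with spatial coordinates and (uncensored) event times.
      n = 150
      coords = rng.uniform(0, 10, size=(n, 2))
      times = rng.exponential(scale=2.0, size=n)
      dist = squareform(pdist(coords))
      ii, jj = np.array(list(combinations(range(n), 2))).T     # all pairs of subjects

      def neg_composite_loglik(params):
          log_rate, g0, g1 = params
          rate = np.exp(log_rate)
          F = 1.0 - np.exp(-rate * times)           # marginal CDF (exponential margin, assumed)
          f = rate * np.exp(-rate * times)          # marginal density
          theta = np.tanh(g0 + g1 * dist[ii, jj])   # keeps the FGM parameter in (-1, 1); assumed link
          # FGM pairwise density: f_i f_j [1 + theta (1 - 2F_i)(1 - 2F_j)]
          dens = f[ii] * f[jj] * (1.0 + theta * (1.0 - 2.0 * F[ii]) * (1.0 - 2.0 * F[jj]))
          return -np.sum(np.log(np.clip(dens, 1e-300, None)))

      fit = minimize(neg_composite_loglik, x0=np.zeros(3), method="Nelder-Mead")
      log_rate, g0, g1 = fit.x
      print("estimated marginal rate:", np.exp(log_rate))
      print("estimated dependence link coefficients:", g0, g1)

    In the paper's setting the margins additionally involve censoring, and the pairwise distances can be demographic as well as geographic; the sketch only shows the mechanics of evaluating and maximizing the pairwise objective.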

  19. THE SEARCH DYNAMICS OF RECRUITED HONEY BEES, APIS MELLIFERA LIGUSTICA SPINOLA.

    PubMed

    Friesen, Larry Jon

    1973-02-01

    Some variables in the recruitment process of honey bees were studied as they affected the distribution and success of the searching population in the field. The dance language and odor dependence hypotheses were contrasted and their predictions compared with the following observations. 1. Recruits were attracted to the odors from the food which were carried by foragers and were dependent on these odors for success. 2. A monitoring of recruit densities in the field demonstrated an association of searchers with the forager flight path. 3. The degree of correspondence between the distribution of recruits and the direction of the flight path to the feeding site was correlated with wind direction, not search efficiency. 4. Feeding stations upwind of the hive provided the highest recruit success rates, shortest search times, and the least dependence on wind speed. Downwind stations provided the lowest recruit success rates, the longest search times, and the greatest dependence on wind speed. 5. A disproportionate increase in recruit success with an increase in the number of foragers visiting a feeding site was correlated with the density of the foragers in the field. 6. Increased bee densities at the feeding site, even with bees from different hives, increased recruit success and shortened search times. 7. The progression of and the extremely long intervals between the onset of recruit arrivals at areas along the forager flight path suggested communication among bees in the field and a dependence of recruit success on the density and growth of the searching population. These observations are compatible with an odor dependent search behavior and together fail to support the predictions of the dance language hypothesis. Dance attendants appeared to have been conditioned to the odors associated with returning foragers and, after leaving the hive, entered a searching population dependent on these odors for success. The dependence of recruit success on food odor at the feeding station, the density of foragers between this station and the hive, and the direction of the wind indicates that the integrity of the forager flight path was extremely important to this success. The distributions and extended search times of recruits indicated a search behavior based on positive anemotaxis during the perception of the proper combination of odors and negative anemotaxis after the loss of this stimulation.

  20. Random walk to a nonergodic equilibrium concept

    NASA Astrophysics Data System (ADS)

    Bel, G.; Barkai, E.

    2006-01-01

    Random walk models, such as the trap model, continuous time random walks, and comb models, exhibit weak ergodicity breaking, when the average waiting time is infinite. The open question is, what statistical mechanical theory replaces the canonical Boltzmann-Gibbs theory for such systems? In this paper a nonergodic equilibrium concept is investigated, for a continuous time random walk model in a potential field. In particular we show that in the nonergodic phase the distribution of the occupation time of the particle in a finite region of space approaches U- or W-shaped distributions related to the arcsine law. We show that when conditions of detailed balance are applied, these distributions depend on the partition function of the problem, thus establishing a relation between the nonergodic dynamics and canonical statistical mechanics. In the ergodic phase the distribution function of the occupation times approaches a δ function centered on the value predicted based on standard Boltzmann-Gibbs statistics. The relation of our work to single-molecule experiments is briefly discussed.
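
    The U-shaped occupation-time statistics mentioned above can be seen directly in simulation. The Python sketch below treats only the simplest flat-potential case, not the potential-field model with detailed balance analyzed in the paper: a lattice continuous time random walk with power-law waiting times (infinite mean) whose fraction of time spent on the non-negative half-line is recorded over many realizations; all parameter values are arbitrary.

      import numpy as np

      rng = np.random.default_rng(3)

      alpha = 0.5        # waiting-time exponent: psi(t) ~ t**(-1-alpha), infinite mean for alpha < 1
      T_total = 1e5      # observation time per realization
      n_real = 1000      # number of realizations

      def occupation_fraction():
          """Fraction of time a lattice CTRW spends at x >= 0 (two-region partition of space)."""
          t, x, time_plus = 0.0, 0, 0.0
          while t < T_total:
              wait = (1.0 - rng.random()) ** (-1.0 / alpha)   # Pareto waiting time, wait >= 1
              wait = min(wait, T_total - t)                   # censor the last waiting period
              if x >= 0:
                  time_plus += wait
              t += wait
              x += rng.choice((-1, 1))                        # unbiased jump after each wait
          return time_plus / T_total

      fractions = np.array([occupation_fraction() for _ in range(n_real)])
      hist, edges = np.histogram(fractions, bins=20, range=(0, 1), density=True)
      print("density near the edges vs. the middle of [0, 1]:", hist[0], hist[-1], hist[len(hist) // 2])

    With these settings the histogram piles up near 0 and 1, the U shape characteristic of arcsine-like occupation-time statistics.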

  1. Robust inference in discrete hazard models for randomized clinical trials.

    PubMed

    Nguyen, Vinh Q; Gillen, Daniel L

    2012-10-01

    Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
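
    For context, discrete hazard models of this kind are typically fit by expanding the data to person-period form and running a binary regression, with one record per subject per assessment interval at risk. The Python sketch below shows that standard construction with a logistic link on simulated data; it is not the authors' censoring-robust marginal estimator, and the trial parameters (eight assessment times, a constant baseline hazard, an assumed treatment odds ratio) are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)

      # Hypothetical trial: treatment arm z, discrete failure time (assessment 0..K-1) or censoring at K-1.
      n, K = 400, 8
      z = rng.integers(0, 2, n)
      base_haz = 0.15                                 # assumed baseline discrete hazard per period
      true_or = 0.6                                   # assumed treatment odds ratio
      event_time = np.full(n, K)
      observed = np.zeros(n, dtype=bool)
      for i in range(n):
          for k in range(K):
              p = base_haz
              if z[i] == 1:                           # odds-ratio shift of the per-period hazard
                  p = p * true_or / (1 - p + p * true_or)
              if rng.random() < p:
                  event_time[i], observed[i] = k, True
                  break

      # Person-period expansion: one row per subject per period at risk.
      rows_k, rows_z, rows_y = [], [], []
      for i in range(n):
          last = event_time[i] if observed[i] else K - 1
          for k in range(last + 1):
              rows_k.append(k)
              rows_z.append(z[i])
              rows_y.append(1 if (observed[i] and k == event_time[i]) else 0)
      k_arr, z_arr, y_arr = map(np.asarray, (rows_k, rows_z, rows_y))
      X = np.column_stack([np.eye(K)[k_arr], z_arr])  # period dummies + treatment indicator

      def neg_loglik(beta):
          eta = X @ beta
          return np.sum(np.logaddexp(0.0, eta) - y_arr * eta)   # Bernoulli-logit negative log-likelihood

      fit = minimize(neg_loglik, np.zeros(K + 1), method="BFGS")
      print("estimated treatment log-odds ratio:", fit.x[-1])

    Replacing the logistic link with a complementary log-log link in the same expanded-data construction would give a grouped-time proportional hazards model instead.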

  2. A Nonparametric Approach For Representing Interannual Dependence In Monthly Streamflow Sequences

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Oneill, R.

    The estimation of risks associated with water management plans requires generation of synthetic streamflow sequences. The mathematical algorithms used to generate these sequences at monthly time scales are found lacking in two main respects: an inability to preserve dependence attributes, particularly at large (seasonal to interannual) time lags; and a poor representation of observed distributional characteristics, in particular of strong asymmetry or multimodality in the probability density function. Proposed here is an alternative that naturally incorporates both observed dependence and distributional attributes in the generated sequences. Use of a nonparametric framework provides an effective means for representing the observed probability distribution, while the use of a "variable kernel" ensures accurate modeling of streamflow data sets that contain a substantial number of zero flow values. A careful selection of prior flows imparts the appropriate short-term memory, while use of an "aggregate" flow variable allows representation of interannual dependence. The nonparametric simulation model is applied to monthly flows from the Beaver River near Beaver, Utah, USA, and the Burrendong dam inflows, New South Wales, Australia. Results indicate that while the use of traditional simulation approaches leads to an inaccurate representation of dependence at long (annual and interannual) time scales, the proposed model can simulate both short- and long-term dependence. As a result, the proposed model ensures a significantly improved representation of reservoir storage statistics, particularly for systems influenced by long droughts. It is important to note that the proposed method offers a simpler and better alternative to conventional disaggregation models as: (a) a separate annual flow series is not required, (b) stringent assumptions relating annual and monthly flows are not needed, and (c) the method does not require the specification of a "water year", instead ensuring that the sum of any sequence of flows lasting twelve months will result in the type of dependence that is observed in the historical annual flow series.

  3. Geometrical effects on the electron residence time in semiconductor nano-particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koochi, Hakimeh; Ebrahimi, Fatemeh, E-mail: f-ebrahimi@birjand.ac.ir; Solar Energy Research Group, University of Birjand, Birjand

    2014-09-07

    We have used random walk (RW) numerical simulations to investigate the influence of geometry on the statistics of the electron residence time τ_r in a trap-limited diffusion process through semiconductor nano-particles. This is an important parameter in coarse-grained modeling of charge carrier transport in nano-structured semiconductor films. The traps have been distributed randomly on the surface (r^2 model) or throughout the whole particle (r^3 model) with a specified density. The trap energies have been taken from an exponential distribution, and the trap release time is assumed to be a stochastic variable. We have carried out RW simulations to study the effect of the coordination number, the spatial arrangement of the neighbors, and the size of the nano-particles on the statistics of τ_r. It has been observed that by increasing the coordination number n, the average value of the electron residence time, τ̄_r, rapidly decreases to an asymptotic value. For a fixed coordination number n, the electron's mean residence time does not depend on the neighbors' spatial arrangement. In other words, τ̄_r is a porosity-dependent, local parameter which generally varies remarkably from site to site, unless we are dealing with highly ordered structures. We have also examined the effect of the nano-particle size d on the statistical behavior of τ̄_r. Our simulations indicate that for a volume distribution of traps, τ̄_r scales as d^2. For a surface distribution of traps, τ̄_r increases almost linearly with d. This leads to the prediction of a linear dependence of the diffusion coefficient D on the particle size d in ordered structures, or in random structures above the critical concentration, which is in accordance with experimental observations.

  4. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  5. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  6. Ionospheric hot spot at high latitudes

    NASA Technical Reports Server (NTRS)

    Schunk, R. W.; Sojka, J. J.

    1982-01-01

    Schunk and Raitt (1980) and Sojka et al. (1981) have developed a model of the convecting high-latitude ionosphere in order to determine the extent to which various chemical and transport processes affect the ion composition and electron density at F-region altitudes. The numerical model produces time-dependent, three-dimensional ion density distributions for the ions NO(+), O2(+), N2(+), O(+), N(+), and He(+). Recently, the high-latitude ionospheric model has been improved by including thermal conduction and diffusion-thermal heat flow terms. Schunk and Sojka (1982) have studied the ion temperature variations in the daytime high-latitude F-region. In the present study, a time-dependent three-dimensional ion temperature distribution is obtained for the high-latitude ionosphere for an asymmetric convection electric field pattern with enhanced flow in the dusk sector of the polar region. It is shown that such a convection pattern produces a hot spot in the ion temperature distribution which coincides with the location of the strong convection cell.

  7. Stochastic nature of series of waiting times.

    PubMed

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H; Salehi, E; Behjat, E; Qorbani, M; Nezhad, M Khazaei; Zirak, M; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M Reza Rahimi

    2013-06-01

    Although fluctuations in the waiting time series have been studied for a long time, some important issues such as its long-range memory and its stochastic features in the presence of nonstationarity have so far remained unstudied. Here we find that the "waiting times" series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2

  8. The coalescent of a sample from a binary branching process.

    PubMed

    Lambert, Amaury

    2018-04-25

    At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yiran; Liu, Siming; Yuan, Qiang, E-mail: liusm@pmo.ac.cn

    Recent precise measurements of cosmic-ray (CR) spectra show that the energy distribution of protons is softer than those of heavier nuclei, and there are spectral hardenings for all nuclear compositions above ∼200 GV. Models proposed for these anomalies generally assume steady-state solutions of the particle acceleration process. We show that if the diffusion coefficient has a weak dependence on the particle rigidity near shock fronts of supernova remnants (SNRs), time-dependent solutions of the linear diffusive shock acceleration at two stages of SNR evolution can naturally account for these anomalies. The high-energy component of CRs is dominated by acceleration in the free expansion and adiabatic phases with enriched heavy elements and a high shock speed. The low-energy component may be attributed to acceleration by slow shocks propagating in dense molecular clouds with low metallicity in the radiative phase. Instead of a single power-law distribution, the spectra of time-dependent solutions soften gradually with the increase of energy, which may be responsible for the “knee” of CRs.

  10. A mathematical model for the occurrence of historical events

    NASA Astrophysics Data System (ADS)

    Ohnishi, Teruaki

    2017-12-01

    A mathematical model was proposed for the frequency distribution of the historical inter-event time τ. The basic ingredient was constructed by assuming that the significance of a newly occurring historical event depends on the magnitude of the preceding event, that this significance decreases by oblivion during the successive events, and that the occurrence of events follows an independent Poisson process. The frequency distribution of τ was derived by integrating the basic ingredient over all social fields and all stakeholders. The resulting distribution takes the form of an exponential, a power law, or an exponential with a tail, depending on the values of the constants appearing in the ingredient. The validity of this model was studied by applying it to the two cases of Modern China and the Northern Ireland Troubles, where the τ-distribution varies depending on the different countries interacting with China and on the different stages of the history of the Troubles, respectively. This indicates that history consists of many components with such different types of τ-distribution, a situation similar to that of other general human activities.

  11. Scaling behavior of sleep-wake transitions across species

    NASA Astrophysics Data System (ADS)

    Lo, Chung-Chuan; Chou, Thomas; Ivanov, Plamen Ch.; Penzel, Thomas; Mochizuki, Takatoshi; Scammell, Thomas; Saper, Clifford B.; Stanley, H. Eugene

    2003-03-01

    Uncovering the mechanisms controlling sleep is a fascinating scientific challenge. It can be viewed as transitions of states of a very complex system, the brain. We study the time dynamics of short awakenings during sleep for three species: humans, rats and mice. We find, for all three species, that wake durations follow a power-law distribution, and sleep durations follow exponential distributions. Surprisingly, all three species have the same power-law exponent for the distribution of wake durations, but the exponential time scale of the distributions of sleep durations varies across species. We suggest that the dynamics of short awakenings are related to species-independent fluctuations of the system, while the dynamics of sleep is related to system-dependent mechanisms which change with species.

  12. Survival distributions impact the power of randomized placebo-phase design and parallel groups randomized clinical trials.

    PubMed

    Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M

    2011-03-01

    The study evaluated the power of the randomized placebo-phase design (RPPD), a new design for randomized clinical trials (RCTs), compared with the traditional parallel-groups design, assuming various response-time distributions. In the RPPD, at some point all subjects receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed under the different response-time distributions. The scenario in which the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel-groups RCT had higher power than the RPPD. The sample size requirement varies depending on the underlying hazard distribution, and the RPPD requires more subjects than the parallel-groups design to achieve similar power. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. 47 CFR 74.1263 - Time of operation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES FM Broadcast Translator Stations and FM Broadcast Booster Stations § 74.1263 Time of operation. (a) The licensee of an FM translator or... an FM translator or booster station is expected to provide a dependable service to the extent that...

  14. Time behavior of solar flare particles to 5 AU

    NASA Technical Reports Server (NTRS)

    Haffner, J. W.

    1972-01-01

    A simple model of solar flare radiation event particle transport is developed to permit the calculation of fluxes and related quantities as a function of distance from the sun (R). This model assumes the particles spiral around the solar magnetic field lines with a constant pitch angle. The particle angular distributions and onset plus arrival times as functions of energy at 1 AU agree with observations if the pitch angle distribution peaks near 90 deg. As a consequence the time dependence factor is essentially proportional to R/1.7, (R in AU), and the event flux is proportional to R/2.

  15. Solving the transient water age distribution problem in environmental flow systems

    NASA Astrophysics Data System (ADS)

    Cornaton, F. J.

    2011-12-01

    The temporal evolution of groundwater age and its frequency distributions can display important changes as flow regimes vary due to the natural change in climate and hydrologic conditions and/or to human induced pressures on the resource to satisfy the water demand. Groundwater age being nowadays frequently used to investigate reservoir properties and recharge conditions, special attention needs to be put on the way this property is characterized, would it be using isotopic methods, multiple tracer techniques, or mathematical modelling. Steady-state age frequency distributions can be modelled using standard numerical techniques, since the general balance equation describing age transport under steady-state flow conditions is exactly equivalent to a standard advection-dispersion equation. The time-dependent problem is however described by an extended transport operator that incorporates an additional coordinate for water age. The consequence is that numerical solutions can hardly be achieved, especially for real 3-D applications over large time periods of interest. The absence of any robust method has thus left us in the quantitative hydrogeology community dodging the issue of transience. Novel algorithms for solving the age distribution problem under time-varying flow regimes are presented and, for some specific configurations, extended to the problem of generalized component exposure time. The solution strategy is based on the combination of the Laplace Transform technique applied to the age (or exposure time) coordinate with standard time-marching schemes. The method is well-suited for groundwater problems with possible density-dependency of fluid flow (e.g. coupled flow and heat/salt concentration problems), but also presents significance to the homogeneous flow (compressible case) problem. The approach is validated using 1-D analytical solutions and exercised on some demonstration problems that are relevant to topical issues in groundwater age, including analysis of transfer times in the vadose zone, aquifer-aquitard interactions and the induction of transient age distributions when a well pump is started.

  16. Mass Dependence of the HBT Radii Observed in e+e- Annihilation

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Zalewski, K.

    1999-02-01

    It is shown that the recently established strong mass-dependence of the radii of the hadron sources, as observed in HBT analyses of e+e- annihilation, can be explained by assuming a generalized inside-outside cascade, i.e. that (i) the four-momenta and the space-time position four-vectors of the produced particles are approximately proportional to each other and (ii) the "freeze-out" times are distributed along the hyperbola t^2 - z^2 = τ_0^2.

  17. Rank-dependent deactivation in network evolution.

    PubMed

    Xu, Xin-Jian; Zhou, Ming-Chen

    2009-12-01

    A rank-dependent deactivation mechanism is introduced to network evolution. The growth dynamics of the network is based on a finite memory of individuals, which is implemented by deactivating one site at each time step. The model shows striking features of a wide range of real-world networks: power-law degree distribution, high clustering coefficient, and disassortative degree correlation.
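
    A growth-with-deactivation process of this kind is easy to prototype. In the Python sketch below each new node links to all currently active nodes and becomes active, after which one active node is deactivated according to a rank kernel over the degree ordering; the specific kernel p(r) proportional to r^(-γ) acting on the ascending-degree rank, and all parameter values, are illustrative assumptions rather than the paper's exact rule.

      import numpy as np

      rng = np.random.default_rng(7)

      m = 5          # number of simultaneously active nodes (the finite memory)
      gamma = 2.0    # rank-kernel exponent (assumed)
      N = 20000      # final network size

      degree = np.zeros(N, dtype=int)
      active = list(range(m))
      degree[:m] = m - 1                     # start from a fully connected active core

      for new in range(m, N):
          for node in active:                # the new node links to every active node
              degree[node] += 1
          degree[new] = m
          active.append(new)
          # rank the active nodes by ascending degree and deactivate one with p(r) ~ r^(-gamma),
          # so low-degree active nodes are the most likely to be retired
          order = sorted(active, key=lambda v: degree[v])
          p = np.arange(1, len(order) + 1, dtype=float) ** -gamma
          p /= p.sum()
          active.remove(order[rng.choice(len(order), p=p)])

      ks = degree[degree > 0]
      print("nodes:", N, " mean degree:", ks.mean(), " max degree:", ks.max())

    Because high-degree nodes tend to stay active longer under this kernel, they keep receiving links, which is the rich-get-richer effect behind the heavy-tailed degree distribution described above.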

  18. Stochastic nature of series of waiting times

    NASA Astrophysics Data System (ADS)

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H.; Salehi, E.; Behjat, E.; Qorbani, M.; Khazaei Nezhad, M.; Zirak, M.; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M. Reza Rahimi

    2013-06-01

    Although fluctuations in the waiting time series have been studied for a long time, some important issues such as its long-range memory and its stochastic features in the presence of nonstationarity have so far remained unstudied. Here we find that the “waiting times” series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2

  19. Study of In-Trap Ion Clouds by Ion Trajectory Simulations.

    PubMed

    Zhou, Xiaoyu; Liu, Xinwei; Cao, Wenbo; Wang, Xiao; Li, Ming; Qiao, Haoxue; Ouyang, Zheng

    2018-02-01

    The Gaussian distribution has been utilized to describe the global number density distribution of an ion cloud in the Paul trap, which is known as the thermal equilibrium theory and is widely used in theoretical modeling of ion clouds in ion traps. Using ion trajectory simulations, however, the ion cloud can now also be treated as a dynamic ion flow field and its location-dependent features can be characterized. This study was carried out to better understand the in-trap ion cloud properties, such as the local particle velocity and temperature. The local ion number densities were found to be heterogeneously distributed in terms of mean and distribution width; the velocity and temperature of the ion flow varied with pressure depending on the flow type of the neutral molecules; and the "quasi-static" equilibrium status can only be achieved after a certain number of collisions, for which the time period is pressure-dependent. This work provides new insights into ion clouds that are globally stable but subjected to local rf heating and collisional cooling.

  20. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    NASA Astrophysics Data System (ADS)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

    This article considers that the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. In our studies, we also assume that the backorder rate is dependent on the length of lead time through the amount of shortages, and we let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions; we then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution-free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are given to illustrate the results.

  1. A dynamic re-partitioning strategy based on the distribution of key in Spark

    NASA Astrophysics Data System (ADS)

    Zhang, Tianyu; Lian, Xin

    2018-05-01

    Spark is a memory-based distributed data processing framework with the ability to process massive data sets, and it has become a focus in Big Data. However, the performance of the Spark shuffle depends on the distribution of the data. The naive hash partition function of Spark cannot guarantee load balancing when the data are skewed, and the job completion time is dominated by the node that has the most data to process. In order to handle this problem, dynamic sampling is used. During task execution, a histogram is used to count the key frequency distribution on each node, and the global key frequency distribution is then generated. After analyzing the key distribution, load balance across the data partitions is achieved. Results show that the Dynamic Re-Partitioning function is better than the default hash partition, Fine Partition and the Balanced-Schedule strategy: it can reduce the execution time of the task and improve the efficiency of the whole cluster.
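
    A minimal PySpark sketch of this histogram-guided idea is shown below, run in local mode: the pair RDD is sampled to estimate the key-frequency distribution, heavy keys are packed greedily onto the currently lightest partition, and the resulting map is used as a custom partition function. The toy data, the sampling fraction, and the greedy packing rule are illustrative assumptions, not the paper's Dynamic Re-Partitioning implementation.

      from pyspark import SparkContext

      sc = SparkContext(appName="key-aware-repartition")
      pairs = sc.parallelize([(k, 1) for k in "aaaaaaaabbbbccde" * 10000])   # skewed toy pair data

      num_partitions = 4
      freq = pairs.sample(False, 0.1, 42).countByKey()     # sampled key-frequency histogram

      # Greedy assignment: place the next-heaviest key on the currently lightest partition.
      loads = [0] * num_partitions
      assignment = {}
      for key, count in sorted(freq.items(), key=lambda kv: -kv[1]):
          target = loads.index(min(loads))
          assignment[key] = target
          loads[target] += count

      def key_partitioner(key):
          # keys not seen in the sample fall back to plain hashing
          return assignment.get(key, hash(key) % num_partitions)

      balanced = pairs.partitionBy(num_partitions, key_partitioner)
      print(balanced.glom().map(len).collect())            # records per partition after re-partitioning
      sc.stop()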

  2. Dependence of two-proton radioactivity on nuclear pairing models

    NASA Astrophysics Data System (ADS)

    Oishi, Tomohiro; Kortelainen, Markus; Pastore, Alessandro

    2017-10-01

    Sensitivity of two-proton emitting decay to nuclear pairing correlation is discussed within a time-dependent three-body model. We focus on the 6Be nucleus assuming α +p +p configuration, and its decay process is described as a time evolution of the three-body resonance state. For a proton-proton subsystem, a schematic density-dependent contact (SDDC) pairing model is employed. From the time-dependent calculation, we observed the exponential decay rule of a two-proton emission. It is shown that the density dependence does not play a major role in determining the decay width, which can be controlled only by the asymptotic strength of the pairing interaction. This asymptotic pairing sensitivity can be understood in terms of the dynamics of the wave function driven by the three-body Hamiltonian, by monitoring the time-dependent density distribution. With this simple SDDC pairing model, there remains an impossible trinity problem: it cannot simultaneously reproduce the empirical Q value, decay width, and the nucleon-nucleon scattering length. This problem suggests that a further sophistication of the theoretical pairing model is necessary, utilizing the two-proton radioactivity data as the reference quantities.

  3. Design of high temperature ceramic components against fast fracture and time-dependent failure using cares/life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.

    1995-08-01

    A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.

  4. An energy dependent earthquake frequency-magnitude distribution

    NASA Astrophysics Data System (ADS)

    Spassiani, I.; Marzocchi, W.

    2017-12-01

    The most popular description of the frequency-magnitude distribution of seismic events is the exponential Gutenberg-Richter (G-R) law, which is widely used in earthquake forecasting and seismic hazard models. Although it has been experimentally well validated in many catalogs worldwide, it is not yet clear at which space-time scales the G-R law still holds. For instance, in a small area where a large earthquake has just happened, the probability that another very large earthquake nucleates in a short time window should diminish, because it takes time to recover the same level of elastic energy just released. In short, the frequency-magnitude distribution before and after a large earthquake in a small area should be different because of the different amount of available energy. Our study therefore aims to explore a possible modification of the classical G-R distribution by including a dependence on an energy parameter. In a nutshell, this more general version of the G-R law should be such that a higher release of energy corresponds to a lower probability of strong aftershocks. In addition, this new frequency-magnitude distribution has to satisfy an invariance condition: when integrating over large areas, that is, when integrating over infinite available energy, the G-R law must be recovered. Finally, we apply the proposed generalization of the G-R law to different seismic catalogs to show how it works and how it differs from the classical G-R law.

  5. Theoretical analysis and simulations of the generalized Lotka-Volterra model.

    PubMed

    Malcai, Ofer; Biham, Ofer; Richmond, Peter; Solomon, Sorin

    2002-09-01

    The dynamics of generalized Lotka-Volterra systems is studied by theoretical techniques and computer simulations. These systems describe the time evolution of the wealth distribution of individuals in a society, as well as of the market values of firms in the stock market. The individual wealths or market values are given by a set of time dependent variables w_i, i=1,...,N. The equations include a stochastic autocatalytic term (representing investments), a drift term (representing social security payments), and a time dependent saturation term (due to the finite size of the economy). The w_i's turn out to exhibit a power-law distribution of the form P(w) ~ w^(-1-α). It is shown analytically that the exponent α can be expressed as a function of one parameter, which is the ratio between the constant drift component (social security) and the fluctuating component (investments). This result provides a link between the lower and upper cutoffs of this distribution, namely, between the resources available to the poorest and those available to the richest in a given society. The value of α is found to be insensitive to variations in the saturation term, which represent the expansion or contraction of the economy. The results are of much relevance to empirical studies that show that the distribution of the individual wealth in different countries during different periods in the 20th century has followed a power-law distribution with 1 < α < 2.

  6. Theoretical analysis and simulations of the generalized Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Malcai, Ofer; Biham, Ofer; Richmond, Peter; Solomon, Sorin

    2002-09-01

    The dynamics of generalized Lotka-Volterra systems is studied by theoretical techniques and computer simulations. These systems describe the time evolution of the wealth distribution of individuals in a society, as well as of the market values of firms in the stock market. The individual wealths or market values are given by a set of time dependent variables w_i, i=1,...,N. The equations include a stochastic autocatalytic term (representing investments), a drift term (representing social security payments), and a time dependent saturation term (due to the finite size of the economy). The w_i's turn out to exhibit a power-law distribution of the form P(w) ~ w^(-1-α). It is shown analytically that the exponent α can be expressed as a function of one parameter, which is the ratio between the constant drift component (social security) and the fluctuating component (investments). This result provides a link between the lower and upper cutoffs of this distribution, namely, between the resources available to the poorest and those available to the richest in a given society. The value of α is found to be insensitive to variations in the saturation term, which represent the expansion or contraction of the economy. The results are of much relevance to empirical studies that show that the distribution of the individual wealth in different countries during different periods in the 20th century has followed a power-law distribution with 1 < α < 2.
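
    In the spirit of the model above, a discrete-time toy version with a random multiplicative term and a drift coupling each agent to the mean wealth already develops a power-law tail. The Python sketch below uses a simplified update rule and arbitrary parameters (in particular the saturation term is omitted); it is an illustration, not the paper's exact equations.

      import numpy as np

      rng = np.random.default_rng(5)

      N, steps = 1000, 20000
      a = 0.001        # drift coupling to the mean wealth ("social security" term, assumed value)
      sigma = 0.05     # spread of the random multiplicative factor ("investments", assumed value)

      w = np.ones(N)
      for _ in range(steps):
          lam = rng.normal(1.0, sigma, N)            # stochastic autocatalytic factor
          w = lam * w + a * (w.mean() - w)           # multiplicative growth plus drift toward the mean
          w = np.maximum(w, 1e-12)                   # keep wealths positive

      # Hill-type estimate of alpha for a density P(w) ~ w^(-1-alpha), i.e. a tail P(W > w) ~ w^(-alpha)
      tail = np.sort(w)[-200:]
      alpha_hat = 1.0 / np.mean(np.log(tail[1:] / tail[0]))
      print("estimated tail exponent alpha:", round(float(alpha_hat), 2))

    For the drift and noise values chosen here the estimated exponent should come out roughly between 1 and 2, in line with the empirical range quoted in the abstract.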

  7. Threshold groundwater ages and young water fractions estimated from 3H, 3He, and 14C

    NASA Astrophysics Data System (ADS)

    Kirchner, James; Jasechko, Scott

    2016-04-01

    It is widely recognized that a water sample taken from a running stream is not described by a single age, but rather by a distribution of ages. It is less widely recognized that the same principle holds true for groundwaters, as indicated by the commonly observed discordances between model ages obtained from different tracers (e.g., 3H vs 14C) in the same sample. Water age distributions are often characterized by their mean residence times (MRT's). However, MRT estimates are highly uncertain because they depend on the shape of the assumed residence time distribution (in particular on the thickness of the long-time tail), which is difficult or impossible to constrain with data. Furthermore, because MRT's are typically nonlinear functions of age tracer concentrations, they are subject to aggregation bias. That is, MRT estimates derived from a mixture of waters with different ages (and thus different tracer concentrations) will systematically underestimate the mixture's true mean age. Here, building on recent work with stable isotope tracers in surface waters [1-3], we present a new framework for using 3H, 3He and 14C to characterize groundwater age distributions. Rather than describing groundwater age distributions by their MRT, we characterize them by the fraction of the distribution that is younger or older than a threshold age. The threshold age that separates "young" from "old" water depends on the characteristics of the specific tracer, including its history of atmospheric inputs. Our approach depends only on whether a given slice of the age distribution is younger or older than the threshold age, but not on how much younger or older it is. Thus our approach is insensitive to the tails of the age distribution, and is therefore relatively unaffected by uncertainty in the distribution's shape. Here we show that concentrations of 3H, 3He, and 14C are almost linearly related to the fractions of water that are younger or older than specified threshold ages. These "young" and "old" water fractions are therefore immune to the aggregation bias that afflicts MRT estimates. They are also relatively insensitive to the shape of the assumed residence time distribution. We apply this approach to 3H and 14C measurements from ˜5000 wells in ˜200 aquifers around the world. Our results show that even very old groundwaters, with 14C ages of thousands of years, often contain significant amounts of much younger water, with a substantial fraction of their age distributions younger than ˜100 years old. Thus despite being very old on average, these groundwaters may also be vulnerable to relatively recent contamination. [1] Kirchner J.W., Aggregation in environmental systems: Catchment mean transit times and young water fractions under hydrologic nonstationarity, Hydrology and Earth System Sciences, in press. [2] Kirchner J.W., Aggregation in environmental systems: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments, Hydrology and Earth System Sciences, in press. [3] Jasechko S., Kirchner J.W., Welker J.M., and McDonnell J.J., Substantial young streamflow in global rivers, Nature Geoscience, in press.

  8. Physics in space-time with scale-dependent metrics

    NASA Astrophysics Data System (ADS)

    Balankin, Alexander S.

    2013-10-01

    We construct the three-dimensional space R^3_γ with a scale-dependent metric and the corresponding Minkowski space-time M^4_{γ,β} with scale-dependent fractal (D_H) and spectral (D_S) dimensions. Local derivatives based on scale-dependent metrics are defined and a differential vector calculus in R^3_γ is developed. We state that M^4_{γ,β} provides a unified phenomenological framework for the dimensional flow observed in quite different models of quantum gravity. Nevertheless, the main attention is focused on the special case of flat space-time M^4_{1/3,1} with a scale-dependent Cantor-dust-like distribution of admissible states, such that D_H increases from D_H = 2 on scales ≪ ℓ_0 to D_H = 4 in the infrared limit ≫ ℓ_0, where ℓ_0 is the characteristic length (e.g. the Planck length, or the characteristic size of multi-fractal features in a heterogeneous medium), whereas D_S ≡ 4 on all scales. Possible applications of the approach based on the scale-dependent metric to systems of different nature are briefly discussed.

  9. Modelling conflicts with cluster dynamics in networks

    NASA Astrophysics Data System (ADS)

    Tadić, Bosiljka; Rodgers, G. J.

    2010-12-01

    We introduce cluster dynamical models of conflicts in which only the largest cluster can be involved in an action. This mimics the situations in which an attack is planned by a central body, and the largest attack force is used. We study the model in its annealed random graph version, on a fixed network, and on a network evolving through the actions. The sizes of actions are distributed with a power-law tail; however, the exponent is non-universal and depends on the frequency of actions and the sparseness of the available connections between units. Allowing the network to be reconstructed over time in a self-organized manner, e.g., by adding links based on previous liaisons between units, we find that the power-law exponent depends on the evolution time of the network. Its lower limit is given by the universal value 5/2, derived analytically for the case of random fragmentation processes. In the temporal patterns behind the size of actions we find long-range correlations in the time series of the number of clusters and a non-trivial distribution of the time that a unit waits between two actions. In the case of an evolving network the distribution develops a power-law tail, indicating that through repeated actions, the system develops an internal structure with a hierarchy of units.

  10. Empirical analysis and modeling of manual turnpike tollbooths in China

    NASA Astrophysics Data System (ADS)

    Zhang, Hao

    2017-03-01

    To deal with the low level of service satisfaction at tollbooths of many turnpikes in China, we conduct an empirical study and use a queueing model to investigate performance measures. In this paper, we collect archived data from six tollbooths of a turnpike in China. An empirical analysis of the vehicles' time-dependent arrival process and the collectors' time-dependent service times is conducted. It shows that the vehicle arrival process follows a non-homogeneous Poisson process while the collector service time follows a log-normal distribution. Further, we model the process of collecting tolls at tollbooths with a MAP/PH/1/FCFS queue for mathematical tractability and present some numerical examples.
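
    As a rough illustration of the modelled setting (not the paper's MAP/PH/1/FCFS analysis), the sketch below simulates a single tollbooth with a time-varying Poisson arrival stream generated by thinning and log-normally distributed service times, and computes FCFS waiting times with the Lindley recursion. The rate function and the log-normal parameters are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def arrival_times(rate_fn, rate_max, horizon):
    """Non-homogeneous Poisson arrival times on [0, horizon] by thinning."""
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > horizon:
            return np.array(times)
        if rng.random() < rate_fn(t) / rate_max:
            times.append(t)

# Assumed morning-peak arrival rate (vehicles/min) and log-normal service times
# (minutes); these numbers are illustrative, not the paper's fitted values.
rate_fn = lambda t: 1.5 + 1.5 * np.sin(np.pi * t / 240.0)   # over a 240-min window
arrivals = arrival_times(rate_fn, rate_max=3.0, horizon=240.0)
service = rng.lognormal(mean=np.log(0.25), sigma=0.4, size=len(arrivals))

# FCFS single-server waiting times via the Lindley recursion.
wait = np.zeros(len(arrivals))
for i in range(1, len(arrivals)):
    inter = arrivals[i] - arrivals[i - 1]
    wait[i] = max(0.0, wait[i - 1] + service[i - 1] - inter)

print(f"{len(arrivals)} vehicles, mean wait {wait.mean():.2f} min, "
      f"95th percentile {np.percentile(wait, 95):.2f} min")
```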

  11. Time-dependent seismic hazard analysis for the Greater Tehran and surrounding areas

    NASA Astrophysics Data System (ADS)

    Jalalalhosseini, Seyed Mostafa; Zafarani, Hamid; Zare, Mehdi

    2018-01-01

    This study presents a time-dependent approach for seismic hazard in Tehran and surrounding areas. Hazard is evaluated by combining background seismic activity with larger earthquakes that may emanate from fault segments. Using available historical and paleoseismological data or empirical relations, the recurrence times and maximum magnitudes of characteristic earthquakes for the major faults have been explored. The Brownian passage time (BPT) distribution has been used to calculate equivalent fictitious seismicity rates for the major faults in the region. To include ground motion uncertainty, a logic tree and five ground motion prediction equations have been selected based on their applicability in the region. Finally, hazard maps have been presented.

  12. High-Order Residual-Distribution Hyperbolic Advection-Diffusion Schemes: 3rd-, 4th-, and 6th-Order

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza R.; Nishikawa, Hiroaki

    2014-01-01

    In this paper, spatially high-order Residual-Distribution (RD) schemes using the first-order hyperbolic system method are proposed for general time-dependent advection-diffusion problems. The corresponding second-order time-dependent hyperbolic advection-diffusion scheme was first introduced in [NASA/TM-2014-218175, 2014], where rapid convergence over each physical time step, typically with fewer than five Newton iterations, was shown. In that method, the time-dependent hyperbolic advection-diffusion system (linear and nonlinear) was discretized by the second-order upwind RD scheme in a unified manner, and the system of implicit-residual-equations was solved efficiently by Newton's method over every physical time step. In this paper, two techniques for the source term discretization are proposed: (1) reformulation of the source terms with their divergence forms, and (2) correction to the trapezoidal rule for the source term discretization. Third-, fourth-, and sixth-order RD schemes are then proposed with the above techniques that, relative to the second-order RD scheme, only cost the evaluation of either the first derivative or both the first and the second derivatives of the source terms. A special fourth-order RD scheme is also proposed that is even less computationally expensive than the third-order RD schemes. The second-order Jacobian formulation was used for all the proposed high-order schemes. The numerical results are then presented for both steady and time-dependent linear and nonlinear advection-diffusion problems. It is shown that these newly developed high-order RD schemes are remarkably efficient and capable of producing the solutions and the gradients to the same order of accuracy as the proposed RD schemes, with rapid convergence over each physical time step, typically in fewer than ten Newton iterations.
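
    For orientation, the one-dimensional scalar prototype of such a first-order hyperbolic reformulation of advection-diffusion can be written schematically as follows; the notation (auxiliary gradient variable p, relaxation time T_r, length scale L_r) follows the general hyperbolic-method literature and is not copied from this paper:

\[
\frac{\partial u}{\partial t} + a\,\frac{\partial u}{\partial x} \;=\; \nu\,\frac{\partial p}{\partial x} + s(x,t),
\qquad
\frac{\partial p}{\partial t} \;=\; \frac{1}{T_r}\!\left(\frac{\partial u}{\partial x} - p\right),
\]

    so that p relaxes toward the solution gradient over the relaxation time T_r (commonly written T_r = L_r^2/ν) and the original advection-diffusion equation is recovered in the relaxed limit; discretizing this first-order system with an upwind RD scheme is what yields equal-order accuracy for the solution and its gradient.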

  13. Dependence of Snowmelt Simulations on Scaling of the Forcing Processes (Invited)

    NASA Astrophysics Data System (ADS)

    Winstral, A. H.; Marks, D. G.; Gurney, R. J.

    2009-12-01

    The spatial organization and scaling relationships of snow distribution in mountain environs are ultimately dependent on the controlling processes. These processes include interactions between weather, topography, vegetation, snow state, and seasonally-dependent radiation inputs. In large-scale snow modeling it is vital to know these dependencies to obtain accurate predictions while reducing computational costs. This study examined the scaling characteristics of the forcing processes and the dependency of distributed snowmelt simulations on their scaling. A base model simulation characterized these processes at 10 m resolution over a 14.0 km2 basin with an elevation range of 1474 - 2244 masl. Each of the major processes affecting snow accumulation and melt - precipitation, wind speed, solar radiation, thermal radiation, temperature, and vapor pressure - was independently degraded to 1 km resolution. Seasonal and event-specific results were analyzed. Results indicated that scale effects on melt vary by process and weather conditions. The dependence of melt simulations on the scaling of solar radiation fluxes also had a seasonal component. These process-based scaling characteristics should remain static through time as they are based on physical considerations. As such, these results not only provide guidance for current modeling efforts, but are also well suited to predicting how potential climate changes will affect the heterogeneity of mountain snow distributions.

  14. Recovery time in quantum dynamics of wave packets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strekalov, M. L., E-mail: strekalov@kinetics.nsc.ru

    2017-01-15

    A wave packet formed by a linear superposition of bound states with an arbitrary energy spectrum returns arbitrarily close to the initial state after quite a long time. A method in which quantum recovery times are calculated exactly is developed. In particular, an exact analytic expression is derived for the recovery time in the limiting case of a two-level system. In the general case, the reciprocal recovery time is proportional to a Gaussian distribution that depends on two parameters (mean value and variance of the return probability). The dependence of the recovery time on the mean excitation level of the system is established. The recovery time is the longest for the maximal excitation level.

  15. Freight distribution problems in congested urban areas : fast and effective solution procedures to time-dependent vehicle routing problems

    DOT National Transportation Integrated Search

    2011-01-01

    Congestion is a common phenomenon in all medium to large cities of the world. Reliability of freight movement in urban areas is an important issue for manufacturing or service companies whose operations are based on just-in-time approaches. These comp...

  16. Use of the Wigner-Ville distribution in interpreting and identifying ULF waves in triaxial magnetic records

    NASA Astrophysics Data System (ADS)

    Chi, P. J.; Russell, C. T.

    2008-01-01

    Magnetospheric ultra-low-frequency (ULF) waves (f = 1 mHz to 1 Hz) exhibit highly time-dependent characteristics due to the dynamic properties of these waves and, for observations in space, the spacecraft motion. These time-dependent features may not be properly resolved by conventional Fourier techniques. In this study we examine how the Wigner-Ville distribution (WVD) can be used to analyze ULF waves. We find that this approach has unique advantages over the conventional Fourier spectrograms and wavelet scalograms. In particular, for Pc1 wave packets, field line/cavity mode resonances in the Pc 3-4 band, and Pi2 pulsations, the start and end times of each wave packet can be well identified and the frequency better defined. In addition, we demonstrate that the Wigner-Ville distribution can be used to calculate the polarization of wave signals in triaxial magnetic field data in a way analogous to Fourier analysis. Motivated by the large amount of ULF wave observations, we have also developed a WVD-based algorithm to identify ULF waves as a way to facilitate the rapid processing of the data collected by satellite missions and the vast network of ground magnetometers.
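
    The sketch below computes a basic, unsmoothed discrete Wigner-Ville distribution for a toy wave packet; practical ULF-wave analyses would typically use a smoothed or pseudo-WVD to suppress cross terms, and the sampling rate and test signal here are purely illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x, fs=1.0):
    """Discrete Wigner-Ville distribution of a real 1-D signal (illustrative sketch)."""
    z = hilbert(np.asarray(x, dtype=float))   # analytic signal suppresses aliasing terms
    N = len(z)
    W = np.empty((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)             # largest symmetric lag usable at sample n
        m = np.arange(-m_max, m_max + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[m % N] = z[n + m] * np.conj(z[n - m])
        W[:, n] = np.fft.fft(kernel).real     # kernel is Hermitian in the lag, so the DFT is real
    freqs = np.arange(N) * fs / (2.0 * N)     # lag steps by 2 samples -> bin spacing fs/(2N)
    times = np.arange(N) / fs
    return W, times, freqs

# Example: a toy wave packet whose frequency ramps up through the record.
fs = 10.0                                     # Hz (illustrative)
t = np.arange(0, 60.0, 1.0 / fs)
signal = np.sin(2 * np.pi * (0.5 + 0.01 * t) * t) * np.exp(-((t - 30) / 15) ** 2)
W, times, freqs = wigner_ville(signal, fs)
peak_f = freqs[W[:, len(t) // 2].argmax()]
print(f"instantaneous frequency near t = 30 s: ~{peak_f:.2f} Hz")
```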

  17. A real-time diagnostic and performance monitor for UNIX. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dong, Hongchao

    1992-01-01

    There are now over one million UNIX sites and the pace at which new installations are added is steadily increasing. Along with this increase comes a need to develop simple, efficient, effective, and adaptable ways of simultaneously collecting real-time diagnostic and performance data. This need exists because distributed systems can give rise to complex failure situations that are often unidentifiable with single-machine diagnostic software. The simultaneous collection of error and performance data is also important for research in failure prediction and error/performance studies. This paper introduces a portable method to concurrently collect real-time diagnostic and performance data on a distributed UNIX system. The combined diagnostic/performance data collection is implemented on a distributed multi-computer system using SUN4s as servers. The approach uses existing UNIX system facilities to gather system dependability information such as error and crash reports. In addition, performance data such as CPU utilization, disk usage, I/O transfer rate and network contention is also collected. In the future, the collected data will be used to identify dependability bottlenecks and to analyze the impact of failures on system performance.

  18. Solidification of a binary mixture

    NASA Technical Reports Server (NTRS)

    Antar, B. N.

    1982-01-01

    The time-dependent concentration and temperature profiles of a finite layer of a binary mixture are investigated during solidification. The coupled time-dependent Stefan problem is solved numerically using an implicit finite differencing algorithm with the method of lines. Specifically, the temporal operator is approximated via an implicit finite difference operator, resulting in a coupled set of ordinary differential equations for the spatial distribution of the temperature and concentration at each time. Since the resulting set of ordinary differential equations forms a boundary value problem with matching conditions at an unknown spatial point, the method of invariant imbedding is used for its solution.

  19. 3-D time-domain induced polarization tomography: a new approach based on a source current density formulation

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Revil, A.

    2018-04-01

    Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the observed secondary voltage data to the source current density model is computed once (using the electrical conductivity distribution), and then used throughout the inversion process. This recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in detail to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set. Our approach successfully reveals the presence of the anomaly and inverts its Cole-Cole parameters. In the second case, we perform a laboratory sandbox experiment in which we mix a volume of burning coal and sand. The algorithm is able to localize the burning coal both in terms of electrical conductivity and chargeability.

  20. Studies on energy transfer in dendrimer supermolecule using classical random walk model and Eyring model

    NASA Astrophysics Data System (ADS)

    Rana, Dipankar; Gangopadhyay, Gautam

    2003-01-01

    We have analyzed the energy transfer process in a dendrimer supermolecule using a classical random walk model and an Eyring model of membrane permeation. Here the energy transfer is considered as a multiple barrier crossing process by thermal hopping on the backbone of a Cayley tree. It is shown that the mean residence time and mean first passage time, which involve explicit local escape rates, depend upon the temperature, size of the molecule, core branching, and the nature of the potential energy landscape along the Cayley tree architecture. The effect of branching is to create a uniform distribution of mean residence time over the generations, and the distribution depends upon the interplay of funneling and local rates of transitions. The calculation of flux at the steady state from the Eyring model also gives a useful idea about the rate when the dendrimeric system is considered as an open system where the core is absorbing the transported energy like a photosynthetic reaction center and a continuous supply of external energy is maintained at the peripheral nodes. The steady-state flux is shown to depend on the above parameters of the system and bears a qualitative resemblance to the result of the mean first passage time approach.

  1. Time-dependent sorption of two novel fungicides in soils within a regulatory framework.

    PubMed

    Gulkowska, Anna; Buerge, Ignaz J; Poiger, Thomas; Kasteel, Roy

    2016-12-01

    Convincing experimental evidence suggests increased sorption of pesticides on soil over time, which, so far, has not been considered in the regulatory assessment of leaching to groundwater. Recently, Beulke and van Beinum (2012) proposed guidance on how to conduct, analyse and use time-dependent sorption studies in pesticide registration. The applicability of the recommended experimental set-up and fitting procedure was examined for two fungicides, penflufen and fluxapyroxad, in four soils during a 170-day incubation experiment. The apparent distribution coefficient increased by a factor of 2.5-4.5 for penflufen and by a factor of 2.5-2.8 for fluxapyroxad. The recommended two-site, one-rate sorption model adequately described measurements of total mass and liquid phase concentration in the calcium chloride suspension and the calculated apparent distribution coefficient, passing all prescribed quality criteria for model fit and parameter reliability. The guidance is technically mature regarding the experimental set-up and parameterisation of the sorption model for the two moderately mobile and relatively persistent fungicides under investigation. These parameters can be used for transport modelling in soil, thereby recognising the experimentally observed phenomenon of time-dependent sorption, which is not yet routinely considered in the regulatory leaching assessment of pesticides. © 2016 Society of Chemical Industry.

  2. Smooth DNA transport through a narrowed pore geometry.

    PubMed

    Carson, Spencer; Wilson, James; Aksimentiev, Aleksei; Wanunu, Meni

    2014-11-18

    Voltage-driven transport of double-stranded DNA through nanoscale pores holds much potential for applications in quantitative molecular biology and biotechnology, yet the microscopic details of translocation have proven to be challenging to decipher. Earlier experiments showed strong dependence of transport kinetics on pore size: fast regular transport in large pores (> 5 nm diameter), and slower yet heterogeneous transport time distributions in sub-5 nm pores, which imply a large positional uncertainty of the DNA in the pore as a function of the translocation time. In this work, we show that this anomalous transport is a result of DNA self-interaction, a phenomenon that is strictly pore-diameter dependent. We identify a regime in which DNA transport is regular, producing narrow and well-behaved dwell-time distributions that fit a simple drift-diffusion theory. Furthermore, a systematic study of the dependence of dwell time on DNA length reveals a single power-law scaling with exponent 1.37 in the range of 35-20,000 bp. We highlight the resolution of our nanopore device by discriminating 100 and 500 bp fragments in a mixture via single pulses with >98% accuracy. When coupled to an appropriate sequence labeling method, our observation of smooth DNA translocation can pave the way for high-resolution DNA mapping and sizing applications in genomics.

  3. Reclamation of Synthetic Turbine Engine Lubricants.

    DTIC Science & Technology

    1981-08-01

    ...test in which the comparison of lubricants is based upon differences in degradation levels produced under a fixed time/temperature condition. Referring... ...plant. Division in this respect was entirely fortuitous, depending only upon convenience in handling and inspecting the barrels at different delivery...

  4. Geometric evolution of complex networks with degree correlations

    NASA Astrophysics Data System (ADS)

    Murphy, Charles; Allard, Antoine; Laurence, Edward; St-Onge, Guillaume; Dubé, Louis J.

    2018-03-01

    We present a general class of geometric network growth mechanisms by homogeneous attachment in which the links created at a given time t are distributed homogeneously between a new node and the existing nodes selected uniformly. This is achieved by creating links between nodes uniformly distributed in a homogeneous metric space according to a Fermi-Dirac connection probability with inverse temperature β and general time-dependent chemical potential μ(t). The chemical potential limits the spatial extent of newly created links. Using a hidden variable framework, we obtain an analytical expression for the degree sequence and show that μ(t) can be fixed to yield any given degree distribution, including a scale-free degree distribution. Additionally, we find that depending on the order in which nodes appear in the network—its history—the degree-degree correlations can be tuned to be assortative or disassortative. The effect of the geometry on the structure is investigated through the average clustering coefficient ⟨c⟩. In the thermodynamic limit, we identify a phase transition between a random regime where ⟨c⟩ → 0 when β < β_c and a geometric regime where ⟨c⟩ > 0 when β > β_c.
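
    A toy growth sketch in the spirit of the abstract (not the authors' hidden-variable construction): nodes are placed uniformly on a ring, and each new node connects to existing nodes with the Fermi-Dirac probability 1/(1 + exp(β(d − μ(t)))). The ring geometry and the decaying μ(t) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def grow_network(n_nodes, beta, mu_of_t):
    """Toy geometric growth with a Fermi-Dirac connection probability.

    Nodes are placed uniformly on a ring (a simple homogeneous metric space);
    new node t links to each existing node at arc distance d with probability
    1 / (1 + exp(beta * (d - mu(t)))).  Geometry and mu(t) are illustrative.
    """
    pos = rng.uniform(0.0, 2 * np.pi, size=n_nodes)
    edges = []
    for t in range(1, n_nodes):
        delta = np.abs(pos[:t] - pos[t])
        d = np.minimum(delta, 2 * np.pi - delta)          # arc distance on the ring
        p = 1.0 / (1.0 + np.exp(beta * (d - mu_of_t(t))))
        targets = np.nonzero(rng.random(t) < p)[0]
        edges.extend((t, int(j)) for j in targets)
    return edges

# A chemical potential decaying as 1/t keeps the expected number of new links
# per node roughly constant (about 2 here), while older nodes accumulate degree.
edges = grow_network(2000, beta=10.0, mu_of_t=lambda t: 2 * np.pi / t)
deg = np.bincount(np.array(edges).ravel(), minlength=2000)
print(f"{len(edges)} links, mean degree {deg.mean():.2f}, max degree {deg.max()}")
```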

  5. Nuclear Pasta at Finite Temperature with the Time-Dependent Hartree-Fock Approach

    NASA Astrophysics Data System (ADS)

    Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.

    2016-01-01

    We present simulations of neutron-rich matter at sub-nuclear densities, like supernova matter. With the time-dependent Hartree-Fock approximation we can study the evolution of the system at temperatures of several MeV employing a full Skyrme interaction on a periodic three-dimensional grid [1]. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi-distributed plane waves, the calculations provide a reasonable approximation of astrophysical matter. The matter evolves into spherical, rod-like, connected rod-like and slab-like shapes. Further, we observe gyroid-like structures, discussed e.g. in [2], which form spontaneously when a certain value of the simulation box length is chosen. The ρ-T-map of pasta shapes is basically consistent with the phase diagrams obtained from QMD calculations [3]. By an improved topological analysis based on Minkowski functionals [4], all observed pasta shapes can be uniquely identified by only two valuations, namely the Euler characteristic and the integral mean curvature. In addition, we propose the variance in the cell-density distribution as a measure to distinguish pasta matter from uniform matter.

  6. A size-structured model of bacterial growth and reproduction.

    PubMed

    Ellermeyer, S F; Pilyugin, S S

    2012-01-01

    We consider a size-structured bacterial population model in which the rate of cell growth is both size- and time-dependent and the average per capita reproduction rate is specified as a model parameter. It is shown that the model admits classical solutions. The population-level and distribution-level behaviours of these solutions are then determined in terms of the model parameters. The distribution-level behaviour is found to be different from that found in similar models of bacterial population dynamics. Rather than convergence to a stable size distribution, we find that size distributions repeat in cycles. This phenomenon is observed in similar models only under special assumptions on the functional form of the size-dependent growth rate factor. Our main results are illustrated with examples, and we also provide an introductory study of the bacterial growth in a chemostat within the framework of our model.

  7. The influence of non-Gaussian distribution functions on the time-dependent perpendicular transport of energetic particles

    NASA Astrophysics Data System (ADS)

    Lasuik, J.; Shalchi, A.

    2018-06-01

    In the current paper we explore the influence of the assumed particle statistics on the transport of energetic particles across a mean magnetic field. In previous work the assumption of a Gaussian distribution function was standard, although there have been known cases for which the transport is non-Gaussian. In the present work we combine a kappa distribution with the ordinary differential equation provided by the so-called unified non-linear transport theory. We then compute running perpendicular diffusion coefficients for different values of κ and turbulence configurations. We show that changing the parameter κ slightly increases or decreases the perpendicular diffusion coefficient depending on the considered turbulence configuration. Since these changes are small, we conclude that the assumed statistics are of minor significance in particle transport theory. The results obtained in the current paper support the use of a Gaussian distribution function, as is usually done in particle transport theory.

  8. Guest Editor's Introduction: Special section on dependable distributed systems

    NASA Astrophysics Data System (ADS)

    Fetzer, Christof

    1999-09-01

    We rely more and more on computers. For example, the Internet reshapes the way we do business. A `computer outage' can cost a company a substantial amount of money. Not only with respect to the business lost during an outage, but also with respect to the negative publicity the company receives. This is especially true for Internet companies. After recent computer outages of Internet companies, we have seen a drastic fall of the shares of the affected companies. There are multiple causes for computer outages. Although computer hardware becomes more reliable, hardware related outages remain an important issue. For example, some of the recent computer outages of companies were caused by failed memory and system boards, and even by crashed disks - a failure type which can easily be masked using disk mirroring. Transient hardware failures might also look like software failures and, hence, might be incorrectly classified as such. However, many outages are software related. Faulty system software, middleware, and application software can crash a system. Dependable computing systems are systems we can rely on. Dependable systems are, by definition, reliable, available, safe and secure [3]. This special section focuses on issues related to dependable distributed systems. Distributed systems have the potential to be more dependable than a single computer because the probability that all computers in a distributed system fail is smaller than the probability that a single computer fails. However, if a distributed system is not built well, it is potentially less dependable than a single computer since the probability that at least one computer in a distributed system fails is higher than the probability that one computer fails. For example, if the crash of any computer in a distributed system can bring the complete system to a halt, the system is less dependable than a single-computer system. Building dependable distributed systems is an extremely difficult task. There is no silver bullet solution. Instead one has to apply a variety of engineering techniques [2]: fault-avoidance (minimize the occurrence of faults, e.g. by using a proper design process), fault-removal (remove faults before they occur, e.g. by testing), fault-evasion (predict faults by monitoring and reconfigure the system before failures occur), and fault-tolerance (mask and/or contain failures). Building a system from scratch is an expensive and time consuming effort. To reduce the cost of building dependable distributed systems, one would choose to use commercial off-the-shelf (COTS) components whenever possible. The usage of COTS components has several potential advantages beyond minimizing costs. For example, through the widespread usage of a COTS component, design failures might be detected and fixed before the component is used in a dependable system. Custom-designed components have to mature without the widespread in-field testing of COTS components. COTS components have various potential disadvantages when used in dependable systems. For example, minimizing the time to market might lead to the release of components with inherent design faults (e.g. use of `shortcuts' that only work most of the time). In addition, the components might be more complex than needed and, hence, potentially have more design faults than simpler components. 
However, given economic constraints and the ability to cope with some of the problems using fault-evasion and fault-tolerance, only for a small percentage of systems can one justify not using COTS components. Distributed systems built from current COTS components are asynchronous systems in the sense that there exists no a priori known bound on the transmission delay of messages or the execution time of processes. When designing a distributed algorithm, one would like to make sure (e.g. by testing or verification) that it is correct, i.e. satisfies its specification. Many distributed algorithms make use of consensus (eventually all non-crashed processes have to agree on a value), leader election (a crashed leader is eventually replaced by a new leader, but at any time there is at most one leader) or a group membership detection service (a crashed process is eventually suspected to have crashed but only crashed processes are suspected). From a theoretical point of view, the service specifications given for such services are not implementable in asynchronous systems. In particular, for each implementation one can derive a counter example in which the service violates its specification. From a practical point of view, the consensus, the leader election, and the membership detection problem are solvable in asynchronous distributed systems. In this special section, Raynal and Tronel show how to bridge this difference by showing how to implement the group membership detection problem with a negligible probability [1] to fail in an asynchronous system. The group membership detection problem is specified by a liveness condition (L) and a safety property (S): (L) if a process p crashes, then eventually every non-crashed process q has to suspect that p has crashed; and (S) if a process q suspects p, then p has indeed crashed. One can show that either (L) or (S) is implementable, but one cannot implement both (L) and (S) at the same time in an asynchronous system. In practice, one only needs to implement (L) and (S) such that the probability that (L) or (S) is violated becomes negligible. Raynal and Tronel propose and analyse a protocol that implements (L) with certainty and that can be tuned such that the probability that (S) is violated becomes negligible. Designing and implementing distributed fault-tolerant protocols for asynchronous systems is a difficult but not an impossible task. A fault-tolerant protocol has to detect and mask certain failure classes, e.g. crash failures and message omission failures. There is a trade-off between the performance of a fault-tolerant protocol and the failure classes the protocol can tolerate. One wants to tolerate as many failure classes as needed to satisfy the stochastic requirements of the protocol [1] while still maintaining a sufficient performance. Since clients of a protocol have different requirements with respect to the performance/fault-tolerance trade-off, one would like to be able to customize protocols such that one can select an appropriate performance/fault-tolerance trade-off. In this special section Hiltunen et al describe how one can compose protocols from micro-protocols in their Cactus system. They show how a group RPC system can be tailored to the needs of a client. In particular, they show how considering additional failure classes affects the performance of a group RPC system. 
References [1] Cristian F 1991 Understanding fault-tolerant distributed systems Communications of ACM 34 (2) 56-78 [2] Heimerdinger W L and Weinstock C B 1992 A conceptual framework for system fault tolerance Technical Report 92-TR-33, CMU/SEI [3] Laprie J C (ed) 1992 Dependability: Basic Concepts and Terminology (Vienna: Springer)

  9. Modelling the effects of cerebral microvasculature morphology on oxygen transport.

    PubMed

    Park, Chang Sub; Payne, Stephen J

    2016-01-01

    The cerebral microvasculature plays a vital role in adequately supplying blood to the brain. Determining the health of the cerebral microvasculature is important during pathological conditions, such as stroke and dementia. Recent studies have shown the complex relationship between the cerebral metabolic rate and the transit time distribution, i.e. the distribution of transit times over all the possible pathways, which depends on network topology. In this paper, we extend a recently developed technique to solve for the residue function (the amount of tracer left in the vasculature at any time) and the transit time distribution in an existing model of the cerebral microvasculature in order to calculate cerebral metabolism. We present the mathematical theory needed to solve for oxygen concentration, followed by results of the simulations. It is found that the oxygen extraction fraction (the fraction of oxygen removed from the blood in the capillary network by the tissue) and the cerebral metabolic rate depend on both the mean and the heterogeneity of the transit time distribution. For changes in cerebral blood flow, a positive correlation can be observed between mean transit time and oxygen extraction fraction, and a negative correlation between mean transit time and metabolic rate of oxygen. A negative correlation can also be observed between transit time heterogeneity and the metabolic rate of oxygen for a constant cerebral blood flow. A sensitivity analysis on the mean and heterogeneity of the transit time distribution was able to quantify their respective contributions to oxygen extraction fraction and metabolic rate of oxygen. Mean transit time has a greater contribution than the heterogeneity to the oxygen extraction fraction; the opposite is found for the metabolic rate of oxygen. These results provide information on the role of the cerebral microvasculature and its effects on flow and metabolism. They thus open up the possibility of obtaining additional valuable clinical information for diagnosing and treating cerebrovascular diseases. Copyright © 2015. Published by Elsevier Ltd.
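
    A toy calculation of how both the mean and the heterogeneity of the transit-time distribution shape the oxygen extraction fraction, using a gamma-distributed transit time and a simple single-pathway extraction law E(τ) = 1 − exp(−kτ). Both the extraction law and the rate constant k are illustrative assumptions rather than the model used in the paper.

```python
import numpy as np
from scipy.stats import gamma

def oxygen_extraction_fraction(mtt, cov, k=0.8):
    """Toy OEF for gamma-distributed capillary transit times.

    mtt : mean transit time (s); cov : coefficient of variation of the
    transit-time distribution; k : assumed extraction rate constant (1/s).
    """
    shape = 1.0 / cov**2
    tau = np.linspace(1e-3, 10.0 * mtt, 5000)
    h = gamma.pdf(tau, a=shape, scale=mtt / shape)   # transit-time distribution
    extraction = 1.0 - np.exp(-k * tau)              # extraction along one pathway
    return float(np.sum(h * extraction) * (tau[1] - tau[0]))

for mtt in (1.0, 2.0):
    for cov in (0.5, 1.0):
        oef = oxygen_extraction_fraction(mtt, cov)
        print(f"MTT = {mtt:.1f} s, CoV = {cov:.1f}: OEF = {oef:.2f}")
```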

  10. Analysis of domestic refrigerator temperatures and home storage time distributions for shelf-life studies and food safety risk assessment.

    PubMed

    Roccato, Anna; Uyttendaele, Mieke; Membré, Jeanne-Marie

    2017-06-01

    In the framework of food safety, when mimicking the consumer phase, the storage time and temperature used are mainly considered as single-point estimates instead of probability distributions. This single-point approach does not take into account the variability within a population and could lead to an overestimation of the parameters. Therefore, the aim of this study was to analyse data on domestic refrigerator temperatures and storage times of chilled food in European countries in order to draw general rules which could be used in either shelf-life testing or risk assessment. In relation to domestic refrigerator temperatures, 15 studies provided pertinent data. Twelve studies presented normal distributions, according to the authors or after fitting the data to distributions. Analysis of temperature distributions revealed that the countries were separated into two groups: northern European countries and southern European countries. The overall variability of European domestic refrigerators is described by a normal distribution: N(7.0, 2.7) °C for the southern countries and N(6.1, 2.8) °C for the northern countries. Concerning storage times, seven papers were pertinent. Analysis indicated that the storage time was likely to end in the first days or weeks (depending on the product use-by-date) after purchase. Data fitting showed the exponential distribution was the most appropriate distribution to describe the time that food spent at the consumer's place. The storage time was described by an exponential distribution with mean corresponding to the use-by period divided by 4. In conclusion, knowing that collecting data is time- and money-consuming, in the absence of data, and at least for the European market and for refrigerated products, building a domestic refrigerator temperature distribution using a Normal law and a time-to-consumption distribution using an Exponential law would be appropriate. Copyright © 2017 Elsevier Ltd. All rights reserved.
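
    In the spirit of the closing recommendation, a minimal Monte Carlo sketch of the consumer phase that samples storage temperature from the reported Normal law and storage time from an Exponential law with mean equal to a quarter of the use-by period; the 10-day use-by period and the 8 °C threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Consumer-phase inputs built from the distributions reported above: southern-
# European refrigerator temperatures ~ N(7.0, 2.7) degC and a storage time that
# is Exponential with mean = use-by period / 4.  The 10-day use-by period and
# the 8 degC threshold are assumed purely for illustration.
use_by_days = 10.0
temperature = rng.normal(loc=7.0, scale=2.7, size=n)               # degC
storage_time = rng.exponential(scale=use_by_days / 4.0, size=n)    # days

print(f"P(T > 8 degC)            = {np.mean(temperature > 8.0):.2f}")
print(f"P(storage beyond use-by) = {np.mean(storage_time > use_by_days):.2f}")
```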

  11. Role of Proteome Physical Chemistry in Cell Behavior.

    PubMed

    Ghosh, Kingshuk; de Graff, Adam M R; Sawle, Lucas; Dill, Ken A

    2016-09-15

    We review how major cell behaviors, such as bacterial growth laws, are derived from the physical chemistry of the cell's proteins. On one hand, cell actions depend on the individual biological functionalities of their many genes and proteins. On the other hand, the common physics among proteins can be as important as the unique biology that distinguishes them. For example, bacterial growth rates depend strongly on temperature. This dependence can be explained by the folding stabilities across a cell's proteome. Such modeling explains how thermophilic and mesophilic organisms differ, and how oxidative damage of highly charged proteins can lead to unfolding and aggregation in aging cells. Cells have characteristic time scales. For example, E. coli can duplicate as fast as 2-3 times per hour. These time scales can be explained by protein dynamics (the rates of synthesis and degradation, folding, and diffusional transport). It rationalizes how bacterial growth is slowed down by added salt. In the same way that the behaviors of inanimate materials can be expressed in terms of the statistical distributions of atoms and molecules, some cell behaviors can be expressed in terms of distributions of protein properties, giving insights into the microscopic basis of growth laws in simple cells.

  12. The Local Structure of Globalization. The Network Dynamics of Foreign Direct Investments in the International Electricity Industry

    NASA Astrophysics Data System (ADS)

    Koskinen, Johan; Lomi, Alessandro

    2013-05-01

    We study the evolution of the network of foreign direct investment (FDI) in the international electricity industry during the period 1994-2003. We assume that the ties in the network of investment relations between countries are created and deleted in continuous time, according to a conditional Gibbs distribution. This assumption allows us to take simultaneously into account the aggregate predictions of the well-established gravity model of international trade as well as local dependencies between network ties connecting the countries in our sample. According to the modified version of the gravity model that we specify, the probability of observing an investment tie between two countries depends on the mass of the economies involved, their physical distance, and the tendency of the network to self-organize into local configurations of network ties. While the limiting distribution of the data generating process is an exponential random graph model, we do not assume the system to be in equilibrium. We find evidence of the effects of the standard gravity model of international trade on evolution of the global FDI network. However, we also provide evidence of significant dyadic and extra-dyadic dependencies between investment ties that are typically ignored in available research. We show that local dependencies between national electricity industries are sufficient for explaining global properties of the network of foreign direct investments. We also show, however, that network dependencies vary significantly over time giving rise to a time-heterogeneous localized process of network evolution.

  13. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: these distributions are the Poisson, geometric, logarithmic and the negative binomial (NBD). The theoretical model is the `birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (magnitude of an earthquake catalogue completeness). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law. We discuss advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of the application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict future earthquake number distribution in regions where very large earthquakes have not yet occurred.
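
    A short sketch of the Poisson-gamma mixture representation of the negative binomial mentioned above: window counts are Poisson with a gamma-distributed rate, and the mixture reproduces the NBD mean and the overdispersed variance. The parameter values are illustrative, not fitted to any catalogue.

```python
import numpy as np
from scipy.stats import gamma, nbinom, poisson

# The negative binomial (NBD) as a Poisson-gamma mixture: window counts are
# Poisson with a gamma-distributed rate Lambda, and marginally follow an NBD
# with n = tau and p = 1 / (1 + theta).  Parameter values are illustrative.
rng = np.random.default_rng(4)
tau, theta = 2.0, 3.0        # gamma shape (clustering parameter) and scale

rates = gamma(a=tau, scale=theta).rvs(size=200_000, random_state=rng)
counts = poisson(mu=rates).rvs(random_state=rng)

nb = nbinom(n=tau, p=1.0 / (1.0 + theta))
print(f"simulated mean = {counts.mean():.2f}, variance = {counts.var():.2f}")
print(f"NBD       mean = {nb.mean():.2f}, variance = {nb.var():.2f}")
```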

  14. Numerical calculations of velocity and pressure distribution around oscillating airfoils

    NASA Technical Reports Server (NTRS)

    Bratanow, T.; Ecer, A.; Kobiske, M.

    1974-01-01

    An analytical procedure based on the Navier-Stokes equations was developed for analyzing and representing properties of unsteady viscous flow around oscillating obstacles. A variational formulation of the vorticity transport equation was discretized in finite element form and integrated numerically. At each time step of the numerical integration, the velocity field around the obstacle was determined for the instantaneous vorticity distribution from the finite element solution of Poisson's equation. The time-dependent boundary conditions around the oscillating obstacle were introduced as external constraints, using the Lagrangian Multiplier Technique, at each time step of the numerical integration. The procedure was then applied for determining pressures around obstacles oscillating in unsteady flow. The obtained results for a cylinder and an airfoil were illustrated in the form of streamlines and vorticity and pressure distributions.

  15. LOCAL MAGNETIC BEHAVIOR OF 54Fe in EuFe2As2 AND Eu0.5K0.5Fe2As2: MICROSCOPIC STUDY USING TIME DIFFERENTIAL PERTURBED ANGULAR DISTRIBUTION (TDPAD) SPECTROSCOPY

    NASA Astrophysics Data System (ADS)

    Mohanta, S. K.; Mishra, S. N.; Davane, S. M.; Layek, S.; Hossain, Z.

    2013-12-01

    In this paper, we report time differential perturbed angular distribution measurements of 54Fe in polycrystalline EuFe2As2 and Eu0.5K0.5Fe2As2. The hyperfine field and nuclear spin-relaxation rate are strongly temperature dependent in the paramagnetic state, suggesting strong spin fluctuations in the parent compound. The local susceptibility shows Curie-Weiss-like temperature dependence and Korringa-like relaxation in the tetragonal phase, indicating the presence of a local moment. In the orthorhombic phase, the hyperfine field behavior suggests quasi-two-dimensional magnetic ordering. The experimental results are in good agreement with first-principles calculations based on density functional theory.

  16. Quantum coherent control of the photoelectron angular distribution in bichromatic-field ionization of atomic neon

    NASA Astrophysics Data System (ADS)

    Gryzlova, E. V.; Grum-Grzhimailo, A. N.; Staroselskaya, E. I.; Douguet, N.; Bartschat, K.

    2018-01-01

    We investigate the coherent control of the photoelectron angular distribution in bichromatic atomic ionization. Neon is selected as the target since it is one of the most popular systems in current gas-phase experiments with free-electron lasers (FELs). In particular, we tackle practical questions, such as the role of the fine-structure splitting, the pulse length, and the intensity. Time-dependent and stationary perturbation theory are employed, and we also solve the time-dependent Schrödinger equation in a single-active electron model. We consider neon ionized by a FEL pulse whose fundamental frequency is in resonance with either the 2p-3s or the 2p-4s excitation. The contribution of the nonresonant two-photon process and its potential constructive or destructive role for quantum coherent control is investigated.

  17. Study on ion energy distribution in low-frequency oscillation time scale of Hall thrusters

    NASA Astrophysics Data System (ADS)

    Wei, Liqiu; Li, Wenbo; Ding, Yongjie; Han, Liang; Yu, Daren; Cao, Yong

    2017-11-01

    This paper reports on the dynamic characteristics of the ion energy distribution during Hall thruster discharge on the low-frequency oscillation time scale, based on experimental studies and a statistical analysis of the time-varying peak and width of the ion energy and the ratio of high-energy ions during the low-frequency oscillation. The results show that the ion energy distribution exhibits a periodic change during the low-frequency oscillation. Moreover, the variation in the ion energy peak is opposite to that of the discharge current, and the variations in the width of the ion energy distribution and the ratio of high-energy ions are consistent with that of the discharge current. The variation characteristics of the ion density and discharge potential were simulated by one-dimensional hybrid-direct kinetic simulations; the simulation results and analysis indicate that the periodic change in the distribution of ion energy during the low-frequency oscillation depends on the relationship between the ionization source term and the discharge potential distribution during ionization in the discharge channel.

  18. Generalization of the lightning electromagnetic equations of Uman, McLain, and Krider based on Jefimenko equations

    DOE PAGES

    Shao, Xuan-Min

    2016-04-12

    The fundamental electromagnetic equations used by lightning researchers were introduced in a seminal paper by Uman, McLain, and Krider in 1975. However, these equations were derived for an infinitely thin, one-dimensional source current, and not for a general three-dimensional current distribution. In this paper, we introduce a corresponding pair of generalized equations that are determined from a three-dimensional, time-dependent current density distribution based on Jefimenko's original electric and magnetic equations. To do this, we recast the Jefimenko electric field equation into a new form that depends only on the time-dependent current density, similar to that of Uman, McLain, and Krider, rather than on both the charge and current densities as in its original form. The original Jefimenko magnetic field equation depends only on current, so no further derivation is needed. We show that the equations of Uman, McLain, and Krider can be readily obtained from the generalized equations if a one-dimensional source current is considered. For the purpose of practical applications, we discuss computational implementation of the new equations and present electric field calculations for a three-dimensional, conical-shape discharge.
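
    For reference, the standard textbook form of Jefimenko's equations that serves as the starting point of such derivations is shown below, with R = r − r', R = |R|, and retarded time t_r = t − R/c; the paper's generalized, current-only electric-field form is not reproduced here.

\[
\mathbf{E}(\mathbf{r},t)=\frac{1}{4\pi\varepsilon_0}\int\!\left[\frac{\rho(\mathbf{r}',t_r)}{R^{2}}\,\hat{\mathbf{R}}
+\frac{1}{cR}\frac{\partial\rho(\mathbf{r}',t_r)}{\partial t}\,\hat{\mathbf{R}}
-\frac{1}{c^{2}R}\frac{\partial\mathbf{J}(\mathbf{r}',t_r)}{\partial t}\right]\mathrm{d}^{3}r',
\]
\[
\mathbf{B}(\mathbf{r},t)=\frac{\mu_0}{4\pi}\int\!\left[\frac{\mathbf{J}(\mathbf{r}',t_r)}{R^{2}}
+\frac{1}{cR}\frac{\partial\mathbf{J}(\mathbf{r}',t_r)}{\partial t}\right]\times\hat{\mathbf{R}}\;\mathrm{d}^{3}r'.
\]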

  19. Double inverse-weighted estimation of cumulative treatment effects under nonproportional hazards and dependent censoring.

    PubMed

    Schaubel, Douglas E; Wei, Guanghui

    2011-03-01

    In medical studies of time-to-event data, nonproportional hazards and dependent censoring are very common issues when estimating the treatment effect. A traditional method for dealing with time-dependent treatment effects is to model the time-dependence parametrically. Limitations of this approach include the difficulty to verify the correctness of the specified functional form and the fact that, in the presence of a treatment effect that varies over time, investigators are usually interested in the cumulative as opposed to instantaneous treatment effect. In many applications, censoring time is not independent of event time. Therefore, we propose methods for estimating the cumulative treatment effect in the presence of nonproportional hazards and dependent censoring. Three measures are proposed, including the ratio of cumulative hazards, relative risk, and difference in restricted mean lifetime. For each measure, we propose a double inverse-weighted estimator, constructed by first using inverse probability of treatment weighting (IPTW) to balance the treatment-specific covariate distributions, then using inverse probability of censoring weighting (IPCW) to overcome the dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal. We study their finite-sample properties through simulation. The proposed methods are used to compare kidney wait-list mortality by race. © 2010, The International Biometric Society.
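
    A simplified sketch of the double inverse weighting idea (IPTW for treatment, IPCW for censoring); the logistic propensity model, the naive tie handling in the censoring Kaplan-Meier step, and the simulated data are assumptions for illustration and do not reproduce the paper's estimators of the three cumulative measures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def double_inverse_weights(X, treated, time, event):
    """Simplified IPTW x IPCW weights (illustration, not the paper's estimator).

    X       : (n, p) baseline covariates
    treated : 0/1 treatment indicator
    time    : observed follow-up time
    event   : 1 = event of interest observed, 0 = censored
    """
    # IPTW: inverse probability of the treatment actually received.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    w_treat = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))

    # IPCW: inverse Kaplan-Meier censoring survival S_C evaluated just before
    # each subject's observed time (censorings are the "events" here; ties are
    # handled naively in this sketch).
    order = np.argsort(time)
    surv = 1.0
    s_c = np.empty(len(time))
    for rank, i in enumerate(order):
        s_c[i] = surv                                # S_C(T_i-)
        if event[i] == 0:                            # a censoring occurs at T_i
            surv *= 1.0 - 1.0 / (len(time) - rank)   # product-limit step
    w_cens = 1.0 / s_c

    # In an analysis these combined weights would typically enter the
    # estimating equations only for subjects with an observed event.
    return w_treat * w_cens

# Toy usage with simulated data (purely illustrative).
rng = np.random.default_rng(5)
n = 500
X = rng.normal(size=(n, 2))
treated = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
time = rng.exponential(2.0, size=n)
event = rng.binomial(1, 0.7, size=n)
w = double_inverse_weights(X, treated, time, event)
print(f"weights: mean = {w.mean():.2f}, max = {w.max():.2f}")
```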

  20. Second-order Boltzmann equation: gauge dependence and gauge invariance

    NASA Astrophysics Data System (ADS)

    Naruko, Atsushi; Pitrou, Cyril; Koyama, Kazuya; Sasaki, Misao

    2013-08-01

    In the context of cosmological perturbation theory, we derive the second-order Boltzmann equation describing the evolution of the distribution function of radiation without a specific gauge choice. The essential steps in deriving the Boltzmann equation are revisited and extended given this more general framework: (i) the polarization of light is incorporated in this formalism by using a tensor-valued distribution function; (ii) the importance of a choice of the tetrad field to define the local inertial frame in the description of the distribution function is emphasized; (iii) we perform a separation between temperature and spectral distortion, both for the intensity and polarization for the first time; (iv) the gauge dependence of all perturbed quantities that enter the Boltzmann equation is derived, and this enables us to check the correctness of the perturbed Boltzmann equation by explicitly showing its gauge-invariance for both intensity and polarization. We finally discuss several implications of the gauge dependence for the observed temperature.

  1. An ensemble survival model for estimating relative residual longevity following stroke: Application to mortality data in the chronic dialysis population

    PubMed Central

    Phadnis, Milind A; Wetmore, James B; Shireman, Theresa I; Ellerbeck, Edward F; Mahnken, Jonathan D

    2016-01-01

    Time-dependent covariates can be modeled within the Cox regression framework and can allow both proportional and nonproportional hazards for the risk factor of research interest. However, in many areas of health services research, interest centers on being able to estimate residual longevity after the occurrence of a particular event such as stroke. The survival trajectory of patients experiencing a stroke can be potentially influenced by stroke type (hemorrhagic or ischemic), time of the stroke (relative to time zero), time since the stroke occurred, or a combination of these factors. In such situations, researchers are more interested in estimating lifetime lost due to stroke rather than merely estimating the relative hazard due to stroke. To achieve this, we propose an ensemble approach using the generalized gamma distribution by means of a semi-Markov type model with an additive hazards extension. Our modeling framework allows stroke as a time-dependent covariate to affect all three parameters (location, scale, and shape) of the generalized gamma distribution. Using the concept of relative times, we answer the research question by estimating residual life lost due to ischemic and hemorrhagic stroke in the chronic dialysis population. PMID:26403934

  2. An ensemble survival model for estimating relative residual longevity following stroke: Application to mortality data in the chronic dialysis population.

    PubMed

    Phadnis, Milind A; Wetmore, James B; Shireman, Theresa I; Ellerbeck, Edward F; Mahnken, Jonathan D

    2017-12-01

    Time-dependent covariates can be modeled within the Cox regression framework and can allow both proportional and nonproportional hazards for the risk factor of research interest. However, in many areas of health services research, interest centers on being able to estimate residual longevity after the occurrence of a particular event such as stroke. The survival trajectory of patients experiencing a stroke can be potentially influenced by stroke type (hemorrhagic or ischemic), time of the stroke (relative to time zero), time since the stroke occurred, or a combination of these factors. In such situations, researchers are more interested in estimating lifetime lost due to stroke rather than merely estimating the relative hazard due to stroke. To achieve this, we propose an ensemble approach using the generalized gamma distribution by means of a semi-Markov type model with an additive hazards extension. Our modeling framework allows stroke as a time-dependent covariate to affect all three parameters (location, scale, and shape) of the generalized gamma distribution. Using the concept of relative times, we answer the research question by estimating residual life lost due to ischemic and hemorrhagic stroke in the chronic dialysis population.

  3. A Brownian model for recurrent earthquakes

    USGS Publications Warehouse

    Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.

    2002-01-01

    We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate according as the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
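
    A small sketch of how such a Brownian passage-time (inverse Gaussian) recurrence model is typically used for time-dependent forecasting: the conditional probability of rupture in a forecast window given the open interval since the last event, plus the equivalent Poisson rate that reproduces that window probability. The numerical values of the mean recurrence, aperiodicity, elapsed time, and window length are illustrative assumptions.

```python
import numpy as np
from scipy.stats import invgauss

def bpt_conditional_probability(mu, alpha, t_elapsed, dt):
    """P(rupture in (t, t+dt] | no rupture by t) for a Brownian passage-time model.

    The BPT distribution with mean recurrence mu and aperiodicity (coefficient
    of variation) alpha is the inverse Gaussian; in scipy's parameterization it
    is invgauss(mu=alpha**2, scale=mu/alpha**2).
    """
    dist = invgauss(mu=alpha**2, scale=mu / alpha**2)
    surv_now = dist.sf(t_elapsed)
    surv_later = dist.sf(t_elapsed + dt)
    return (surv_now - surv_later) / surv_now

# Illustrative numbers (not from the paper): mean recurrence 250 yr,
# aperiodicity 0.5, 180 yr elapsed since the last event, 30-yr forecast window.
p30 = bpt_conditional_probability(mu=250.0, alpha=0.5, t_elapsed=180.0, dt=30.0)
print(f"conditional 30-yr rupture probability: {p30:.2f}")
# An "equivalent fictitious" Poisson rate reproducing this window probability:
print(f"equivalent Poisson rate: {-np.log(1.0 - p30) / 30.0:.4f} per yr")
```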

  4. Velocity distribution of electrons in time-varying low-temperature plasmas: progress in theoretical procedures over the past 70 years

    NASA Astrophysics Data System (ADS)

    Makabe, Toshiaki

    2018-03-01

    A time-varying low-temperature plasma sustained by electrical power at various frequencies has played a key role in the historical development of new technologies, such as gas lasers, ozonizers, micro display panels, dry processing of materials, medical care, and so on, since World War II. Electrons in a time-modulated low-temperature plasma have a characteristic velocity spectrum, i.e. a velocity distribution that depends on the microscopic quantum characteristics of the feed gas molecule and on the external field strength and frequency. To solve and evaluate the time-varying velocity distribution, there are mainly two types of theoretical methods based on the classical and linear Boltzmann equations, namely, the expansion method using orthogonal functions and the procedure of non-expansional temporal evolution. Both methods have been developed discontinuously and progressively in synchronization with those technological developments. In this review, we will explore the historical development of the theoretical procedure to evaluate the electron velocity distribution in a time-varying low-temperature plasma over the past 70 years.

  5. [Study of inversion and classification of particle size distribution under dependent model algorithm].

    PubMed

    Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin

    2008-05-01

    For the total light scattering particle sizing technique, an inversion and classification method was proposed with the dependent model algorithm. The measured particle system was inverted simultaneously with different particle distribution functions whose mathematical form was known in advance, and then classified according to the inversion errors. The simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible light range with the genetic algorithm, and the inversion results were steady and reliable, which reduces the required number of wavelengths to a minimum and widens the choice of light source. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was added to the transmission extinction measurements at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.

  6. The dependence of Islamic and conventional stocks: A copula approach

    NASA Astrophysics Data System (ADS)

    Razak, Ruzanna Ab; Ismail, Noriszura

    2015-09-01

    Recent studies have found that Islamic stocks are dependent on conventional stocks and appear to be more risky. In Asia, particularly in Islamic countries, research on dependence involving Islamic and non-Islamic stock markets is limited. The objective of this study is to investigate the dependence between the Financial Times Stock Exchange (FTSE) Hijrah Shariah index and conventional stocks (the EMAS and KLCI indices). Using the copula approach and a time series model for each marginal distribution function, the copula parameters were estimated. An elliptical copula was selected to represent the dependence structure of each pairing of the Islamic stock and a conventional stock. Specifically, the Islamic versus conventional pairs (Shariah-EMAS and Shariah-KLCI) showed lower dependence than the conventional versus conventional pair (EMAS-KLCI). These findings suggest that the occurrence of shocks in a conventional stock will not have a strong impact on the Islamic stock.
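
    For elliptical copulas the dependence parameter can be recovered from Kendall's tau via rho = sin(pi*tau/2), which is one common way the copula parameter in a study of this kind is estimated. The series below are synthetic stand-ins for filtered index returns (the actual study first fits a time series model to each margin), so the names and numbers are assumptions of this sketch.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # placeholder return residuals standing in for the Shariah and EMAS series
        shariah = rng.standard_normal(1000)
        emas = 0.6 * shariah + 0.8 * rng.standard_normal(1000)

        tau, _ = stats.kendalltau(shariah, emas)
        rho = np.sin(np.pi * tau / 2.0)              # implied elliptical-copula correlation
        print(round(tau, 3), round(rho, 3))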

  7. Investigation on the effects of temperature dependency of material parameters on a thermoelastic loading problem

    NASA Astrophysics Data System (ADS)

    Kumar, Anil; Mukhopadhyay, Santwana

    2017-08-01

    The present work is concerned with the investigation of thermoelastic interactions inside a spherical shell with temperature-dependent material parameters. We employ the heat conduction model with a single delay term. The problem is studied by considering three different kinds of time-dependent temperature and stress distributions applied at the inner and outer surfaces of the shell. The problem is formulated by assuming that the thermal properties vary as a linear function of temperature, which yields nonlinear governing equations. The problem is solved by applying the Kirchhoff transformation along with an integral transform technique. The numerical results for the field variables are shown in graphs to study the influence of temperature-dependent thermal parameters in the various cases. It is shown that the temperature-dependent effect is more prominent for the stress distribution than for the other fields, and that the effect is also significant for a thermal shock applied at the two boundary surfaces of the spherical shell.

  8. Dependence of atmospheric refractive index structure parameter (Cn2) on the residence time and vertical distribution of aerosols.

    PubMed

    Anand, N; Satheesh, S K; Krishna Moorthy, K

    2017-07-15

    Effects of absorbing atmospheric aerosols in modulating the tropospheric refractive index structure parameter (Cn2) are estimated using high resolution radiosonde and multi-satellite data along with a radiative transfer model. We report the influence of variations in the residence time and vertical distribution of aerosols on Cn2 and show why aerosol-induced atmospheric heating needs to be considered when estimating a free-space optical communication link budget. The results show that the performance of the link is seriously affected if large concentrations of absorbing aerosols reside for a long time in the atmospheric path.

  9. Time-dependent resilience assessment and improvement of urban infrastructure systems

    NASA Astrophysics Data System (ADS)

    Ouyang, Min; Dueñas-Osorio, Leonardo

    2012-09-01

    This paper introduces an approach to assess and improve the time-dependent resilience of urban infrastructure systems, where resilience is defined as the systems' ability to resist various possible hazards, absorb the initial damage from hazards, and recover to normal operation one or multiple times during a time period T. For different values of T and its position relative to current time, there are three forms of resilience: previous resilience, current potential resilience, and future potential resilience. This paper mainly discusses the third form that takes into account the systems' future evolving processes. Taking the power transmission grid in Harris County, Texas, USA as an example, the time-dependent features of resilience and the effectiveness of some resilience-inspired strategies, including enhancement of situational awareness, management of consumer demand, and integration of distributed generators, are all simulated and discussed. Results show a nonlinear nature of resilience as a function of T, which may exhibit a transition from an increasing function to a decreasing function at either a threshold of post-blackout improvement rate, a threshold of load profile with consumer demand management, or a threshold number of integrated distributed generators. These results are further confirmed by studying a typical benchmark system such as the IEEE RTS-96. Such common trends indicate that some resilience strategies may enhance infrastructure system resilience in the short term, but if not managed well, they may compromise practical utility system resilience in the long run.

  10. Time-dependent resilience assessment and improvement of urban infrastructure systems.

    PubMed

    Ouyang, Min; Dueñas-Osorio, Leonardo

    2012-09-01

    This paper introduces an approach to assess and improve the time-dependent resilience of urban infrastructure systems, where resilience is defined as the systems' ability to resist various possible hazards, absorb the initial damage from hazards, and recover to normal operation one or multiple times during a time period T. For different values of T and its position relative to current time, there are three forms of resilience: previous resilience, current potential resilience, and future potential resilience. This paper mainly discusses the third form that takes into account the systems' future evolving processes. Taking the power transmission grid in Harris County, Texas, USA as an example, the time-dependent features of resilience and the effectiveness of some resilience-inspired strategies, including enhancement of situational awareness, management of consumer demand, and integration of distributed generators, are all simulated and discussed. Results show a nonlinear nature of resilience as a function of T, which may exhibit a transition from an increasing function to a decreasing function at either a threshold of post-blackout improvement rate, a threshold of load profile with consumer demand management, or a threshold number of integrated distributed generators. These results are further confirmed by studying a typical benchmark system such as the IEEE RTS-96. Such common trends indicate that some resilience strategies may enhance infrastructure system resilience in the short term, but if not managed well, they may compromise practical utility system resilience in the long run.

  11. Time-dependent strains and stresses in a pumpkin balloon

    NASA Technical Reports Server (NTRS)

    Gerngross, T.; Xu, Y.; Pellegrino, S.

    2006-01-01

    This paper presents a study of pumpkin-shaped superpressure balloons, consisting of gores made from a thin polymeric film attached to high stiffness, meridional tendons. This type of design is being used for the NASA ULDB balloons. The gore film shows considerable time-dependent stress relaxation, whereas the behaviour of the tendons is essentially time-independent. Upon inflation and pressurization, the "instantaneous", i.e. linear-elastic, strain and stress distributions in the film show significantly higher values in the meridional direction. However, over time, and due to the biaxial visco-elastic stress relaxation of the material, the hoop strains increase and the meridional stresses decrease, whereas the remaining strain and stress components remain substantially unchanged. These results are important for a correct assessment of the structural integrity of a pumpkin balloon in a long-duration mission, both in terms of the material performance and the overall stability of the shape of the balloon. An experimental investigation of the time dependence of the biaxial strain distribution in the film of a 4 m diameter, 48 gore pumpkin balloon is presented. The inflated shape of selected gores has been measured using photogrammetry and the time variation in strain components at some particular points of these gores has been measured under constant pressure and temperature. The results show good correlation with a numerical study, using the ABAQUS finite-element package, that includes a widely used model of the visco-elastic response of the gore material.

  12. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    NASA Astrophysics Data System (ADS)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve the full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to real-world rainfall time series is shown as a proof of concept.
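
    The consistency ("adjusting") step mentioned above can be illustrated with a toy proportional correction: generate an intermittent fine-scale series and rescale each block so that it reproduces the coarse-scale total exactly. The occurrence probability and gamma depths below are invented, and the sketch does not reproduce the dependence structure of the actual model; it only shows the additivity constraint.

        import numpy as np

        rng = np.random.default_rng(1)

        def disaggregate(coarse_totals, k=4):
            """Toy disaggregation: Bernoulli wet/dry occurrences, gamma depths,
            then a proportional adjustment so each block of k fine-scale values
            sums exactly to its coarse-scale total."""
            fine = []
            for total in coarse_totals:
                wet = rng.random(k) < 0.4
                depths = np.where(wet, rng.gamma(2.0, 1.0, k), 0.0)
                if total > 0 and depths.sum() == 0:
                    depths[rng.integers(k)] = 1.0    # force at least one wet interval
                depths = depths * total / depths.sum() if total > 0 else np.zeros(k)
                fine.append(depths)
            return np.concatenate(fine)

        print(disaggregate(np.array([10.0, 0.0, 3.2])))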

  13. On the connection between financial processes with stochastic volatility and nonextensive statistical mechanics

    NASA Astrophysics Data System (ADS)

    Queirós, S. M. D.; Tsallis, C.

    2005-11-01

    The GARCH algorithm is the most renowned generalisation of Engle's original proposal for modelling returns, the ARCH process. Both cases are characterised by a time-dependent and correlated variance, or volatility. Besides a memory parameter, b (present in ARCH), and an independent and identically distributed noise, ω, GARCH involves another parameter, c, such that, for c = 0, the standard ARCH process is reproduced. In this manuscript we use a generalised noise following a distribution characterised by an index q_n, such that q_n = 1 recovers the Gaussian distribution. Matching the low-order statistical moments of the GARCH distribution of returns with a q-Gaussian distribution obtained by maximising the entropy S_q = (1 - Σ_i p_i^q)/(q - 1), the basis of nonextensive statistical mechanics, we obtain a single analytical connection between q and (b, c, q_n) which turns out to be remarkably good when compared with computational simulations. With this result we also derive an analytical approximation for the stationary distribution of the (squared) volatility. Using a generalised Kullback-Leibler relative entropy form based on S_q, we also analyse the degree of dependence between successive returns, z_t and z_{t+1}, of GARCH(1,1) processes. This degree of dependence is quantified by an entropic index, q_op. Our analysis points to the existence of a unique relation between the three entropic indices q_op, q and q_n of the problem, independent of the value of (b, c).
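
    A minimal sketch of the underlying return process, assuming Gaussian noise (the q_n = 1 case of the abstract): a GARCH(1,1) recursion whose volatility clustering already produces fat tails. Sampling the generalised q_n-noise and the analytical q-connection are beyond this snippet, and the parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)

        def garch_returns(n, a0=1e-5, b=0.1, c=0.85):
            """Simulate z_t = sigma_t * omega_t with Gaussian omega; c = 0
            recovers the ARCH(1) case mentioned in the abstract."""
            z = np.zeros(n)
            var = a0 / (1.0 - b - c)                 # start from the stationary variance
            for t in range(n):
                z[t] = np.sqrt(var) * rng.standard_normal()
                var = a0 + b * z[t] ** 2 + c * var
            return z

        r = garch_returns(100_000)
        excess_kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0
        print(r.std(), excess_kurtosis)              # positive excess kurtosis: fat-tailed returns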

  14. Simulations of a dynamic solar cycle and its effects on the interstellar boundary explorer ribbon and globally distributed energetic neutral atom flux

    DOE PAGES

    Zirnstein, E. J.; Heerikhuisen, J.; Pogorelov, N. V.; ...

    2015-04-23

    Observations by the Interstellar Boundary Explorer (IBEX) have vastly improved our understanding of the interaction between the solar wind (SW) and local interstellar medium through direct measurements of energetic neutral atoms (ENAs); this informs us about the heliospheric conditions that produced them. An enhanced feature of flux in the sky, the so-called IBEX ribbon, was not predicted by any global models before the first IBEX observations. A dominating theory of the origin of the ribbon, although still under debate, is a secondary charge-exchange process involving secondary ENAs originating from outside the heliopause. According to this mechanism, the evolution of the solar cycle should be visible in the ribbon flux. Therefore, in this paper we simulate a fully time-dependent ribbon flux, as well as globally distributed flux from the inner heliosheath (IHS), using time-dependent SW parameters from Sokol et al. as boundary conditions for our time-dependent heliosphere simulation. After post-processing the results to compute H ENA fluxes, these results show that the secondary ENA ribbon indeed should be time dependent, evolving with a period of approximately 11 yr, with differences depending on the energy and direction. Our results for the IHS flux show little periodic change with the 11 yr solar cycle, but rather with short-term fluctuations in the background plasma. And, while the secondary ENA mechanism appears to emulate several key characteristics of the observed IBEX ribbon, it appears that our simulation does not yet include all of the relevant physics that produces the observed ribbon.

  15. SIMULATIONS OF A DYNAMIC SOLAR CYCLE AND ITS EFFECTS ON THE INTERSTELLAR BOUNDARY EXPLORER RIBBON AND GLOBALLY DISTRIBUTED ENERGETIC NEUTRAL ATOM FLUX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zirnstein, E. J.; Heerikhuisen, J.; Pogorelov, N. V.

    2015-05-01

    Since 2009, observations by the Interstellar Boundary Explorer (IBEX) have vastly improved our understanding of the interaction between the solar wind (SW) and local interstellar medium through direct measurements of energetic neutral atoms (ENAs), which inform us about the heliospheric conditions that produced them. An enhanced feature of flux in the sky, the so-called IBEX ribbon, was not predicted by any global models before the first IBEX observations. A dominating theory of the origin of the ribbon, although still under debate, is a secondary charge-exchange process involving secondary ENAs originating from outside the heliopause. According to this mechanism, the evolution of the solar cycle should be visible in the ribbon flux. Therefore, in this paper we simulate a fully time-dependent ribbon flux, as well as globally distributed flux from the inner heliosheath (IHS), using time-dependent SW parameters from Sokół et al. as boundary conditions for our time-dependent heliosphere simulation. After post-processing the results to compute H ENA fluxes, our results show that the secondary ENA ribbon indeed should be time dependent, evolving with a period of approximately 11 yr, with differences depending on the energy and direction. Our results for the IHS flux show little periodic change with the 11 yr solar cycle, but rather with short-term fluctuations in the background plasma. While the secondary ENA mechanism appears to emulate several key characteristics of the observed IBEX ribbon, it appears that our simulation does not yet include all of the relevant physics that produces the observed ribbon.

  16. Risk of portfolio with simulated returns based on copula model

    NASA Astrophysics Data System (ADS)

    Razak, Ruzanna Ab; Ismail, Noriszura

    2015-02-01

    The commonly used tool for measuring the risk of a portfolio with equally weighted stocks is the variance-covariance method. Under extreme circumstances, this method leads to significant underestimation of the actual risk due to its multivariate normality assumption for the joint distribution of stocks. The purpose of this research is to compare the actual risk of a portfolio with the simulated risk of a portfolio in which the joint distribution of two return series is predetermined. The data used are daily stock prices from the ASEAN market for the period January 2000 to December 2012. The copula approach is applied to capture the time-varying dependence among the return series. The results show that the chosen copula families are not suitable to represent the dependence structures of each bivariate return series, except for the Philippines-Thailand pair, for which the t copula appears to be the appropriate choice to depict its dependence. Assuming that the t copula is the joint distribution of each paired series, simulated returns are generated and value-at-risk (VaR) is then applied to evaluate the risk of each portfolio consisting of two simulated return series. The VaR estimates were found to be symmetrical because the returns were simulated via an elliptical copula-GARCH approach. By comparison, it is found that the actual risks are underestimated for all pairs of portfolios except for Philippines-Thailand. This study shows that disregarding the non-normal dependence structure of two series will result in underestimation of the actual risk of the portfolio.
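
    A sketch of the simulation step, under assumptions: draw from a bivariate t copula (normal/chi-square mixture), apply hypothetical Gaussian margins in place of the fitted GARCH margins, and read off the lower quantile of the equally weighted portfolio as VaR. The correlation, degrees of freedom and margins are invented values, not estimates from the ASEAN data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        def t_copula_var(rho=0.6, nu=5, n=100_000, alpha=0.05):
            """Portfolio VaR from a t copula with illustrative Gaussian margins."""
            L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
            z = rng.standard_normal((n, 2)) @ L.T
            w = rng.chisquare(nu, size=(n, 1)) / nu
            t_samples = z / np.sqrt(w)                           # correlated bivariate t variates
            u = stats.t.cdf(t_samples, df=nu)                    # copula (uniform) scale
            returns = stats.norm.ppf(u, loc=0.0005, scale=0.01)  # hypothetical margins
            portfolio = returns.mean(axis=1)                     # equal weights
            return -np.quantile(portfolio, alpha)

        print(t_copula_var())                                    # simulated 95% one-day VaR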

  17. Stability versus neuronal specialization for STDP: long-tail weight distributions solve the dilemma.

    PubMed

    Gilson, Matthieu; Fukai, Tomoki

    2011-01-01

    Spike-timing-dependent plasticity (STDP) modifies the weight (or strength) of synaptic connections between neurons and is considered to be crucial for generating network structure. It has been observed in physiology that, in addition to spike timing, the weight update also depends on the current value of the weight. The functional implications of this feature are still largely unclear. Additive STDP gives rise to strong competition among synapses, but due to the absence of weight dependence, it requires hard boundaries to secure the stability of weight dynamics. Multiplicative STDP with linear weight dependence for depression ensures stability, but it lacks the sufficiently strong competition required to obtain a clear synaptic specialization. A solution to this stability-versus-function dilemma can be found with an intermediate parametrization between additive and multiplicative STDP. Here we propose a novel solution to the dilemma, named log-STDP, whose key feature is a sublinear weight dependence for depression. Due to its specific weight dependence, this new model can produce significantly broad weight distributions with no hard upper bound, similar to those recently observed in experiments. Log-STDP induces graded competition between synapses, such that synapses receiving stronger input correlations are pushed further in the tail of (very) large weights. Strong weights are functionally important to enhance the neuronal response to synchronous spike volleys. Depending on the input configuration, multiple groups of correlated synaptic inputs exhibit either winner-share-all or winner-take-all behavior. When the configuration of input correlations changes, individual synapses quickly and robustly readapt to represent the new configuration. We also demonstrate the advantages of log-STDP for generating a stable structure of strong weights in a recurrently connected network. These properties of log-STDP are compared with those of previous models. Through long-tail weight distributions, log-STDP achieves both stable weight dynamics and robust competition among synapses, which are crucial for spike-based information processing.
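
    A hedged sketch of the key ingredient, a depression term that grows only sublinearly (logarithmically) with the weight above a reference value while potentiation is weight independent. The functional form and constants below are simplified stand-ins, not the published log-STDP parameterization.

        import numpy as np

        def log_stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, w0=1.0, beta=2.0):
            """One pair-based update for spike timing difference dt = t_post - t_pre (ms)."""
            if dt > 0:                                           # pre before post: potentiation
                return w + a_plus * np.exp(-dt / tau)
            # post before pre: depression, sublinear in w above the reference w0
            f = w / w0 if w <= w0 else 1.0 + np.log(1.0 + beta * (w / w0 - 1.0)) / beta
            return max(w - a_minus * f * np.exp(dt / tau), 0.0)

        w = 1.5
        for dt in (5.0, -5.0, -5.0):
            w = log_stdp_update(w, dt)
            print(round(w, 4))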

  18. Self-energy renormalization for inhomogeneous nonequilibrium systems and field expansion via complete set of time-dependent wavefunctions

    NASA Astrophysics Data System (ADS)

    Kuwahara, Y.; Nakamura, Y.; Yamanaka, Y.

    2018-04-01

    The way to determine the renormalized energy of inhomogeneous systems of a quantum field under an external potential is established for both equilibrium and nonequilibrium scenarios based on thermo field dynamics. The key step is to find an extension of the on-shell concept valid in the homogeneous case. In the nonequilibrium case, we expand the field operator by time-dependent wavefunctions that are solutions of the appropriately chosen differential equation, synchronized with the temporal change of the thermal situation, and the quantum transport equation is derived from the renormalization procedure. Through numerical calculations of a triple-well model with a reservoir, we show that the number distribution and the time-dependent wavefunctions relax consistently to the correct equilibrium forms in the long-time limit.

  19. FAST TRACK COMMUNICATION: Suppressing anomalous diffusion by cooperation

    NASA Astrophysics Data System (ADS)

    Dybiec, Bartłomiej

    2010-08-01

    Within a continuous time random walk scenario we consider the motion of a complex of particles that moves coherently. The motion of every particle is characterized by waiting time and jump length distributions of the power-law type. Due to the interactions between particles, it is assumed that the waiting time is adjusted to the shortest or to the longest waiting time. Analogously, the jump length is adjusted to the shortest or to the longest jump length. We show that adjustment to the shortest waiting time can suppress the subdiffusive behavior even in situations where the exponent characterizing the waiting time distribution would assure subdiffusive motion of a single particle. Finally, we demonstrate that the character of the motion depends on the number of particles forming the complex.
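
    The cooperative rule can be mimicked in a few lines: every particle of the complex draws a heavy-tailed waiting time, and the complex moves after the shortest of them, which thins out the extreme waits that cause subdiffusion. Everything below (Pareto waiting times with unit cutoff, Gaussian jump lengths, the parameter values) is an illustrative assumption, not the paper's exact setup.

        import numpy as np

        rng = np.random.default_rng(4)

        def ctrw_complex(n_particles=5, n_steps=1000, n_real=200, alpha=0.6):
            """Total elapsed time and displacement of a coherently moving complex
            whose waiting time at each step is the minimum over its particles."""
            waits = rng.pareto(alpha, size=(n_real, n_steps, n_particles)) + 1.0
            t = waits.min(axis=2).sum(axis=1)                    # shortest waiting time wins
            x = rng.standard_normal((n_real, n_steps)).sum(axis=1)
            return t, x

        t1, _ = ctrw_complex(n_particles=1)                      # single particle: heavy-tailed times
        t5, _ = ctrw_complex(n_particles=5)                      # complex of five: much shorter times
        print(np.median(t1), np.median(t5))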

  20. Following a trend with an exponential moving average: Analytical results for a Gaussian model

    NASA Astrophysics Data System (ADS)

    Grebenkov, Denis S.; Serror, Jeremy

    2014-01-01

    We investigate how price variations of a stock are transformed into the profits and losses (P&Ls) of a trend following strategy. In the framework of a Gaussian model, we derive the probability distribution of P&Ls and analyze its moments (mean, variance, skewness and kurtosis) and asymptotic behavior (quantiles). We show that the asymmetry of the distribution (with typically small losses and less frequent but significant profits) is characteristic of trend following strategies and depends only weakly on the peculiarities of price variations. At short times, trend following strategies admit larger losses than one may anticipate from standard Gaussian estimates, while smaller losses are ensured at longer times. Simple explicit formulas characterizing the distribution of P&Ls illustrate the basic mechanisms of momentum trading, while general matrix representations can be applied to arbitrary Gaussian models. We also explicitly compute the annualized risk-adjusted P&L and the strategy turnover to account for transaction costs. We deduce the optimal timescale of trend following and its dependence on both the auto-correlation level and transaction costs. Theoretical results are illustrated on the Dow Jones index.
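
    A minimal numerical counterpart of the setup, under simplifying assumptions: independent Gaussian price increments (the paper's model also covers auto-correlated increments and transaction costs), a signal given by an exponential moving average of past increments, and a position proportional to that signal. Aggregating the P&L over 20-step windows is an arbitrary choice made here to expose the asymmetry of the distribution.

        import numpy as np

        rng = np.random.default_rng(5)

        def ema_trend_pnl(n=200_000, lam=0.05, sigma=1.0):
            """Per-step P&L of an EMA trend-following position on i.i.d.
            Gaussian price variations (illustrative parameters)."""
            dx = sigma * rng.standard_normal(n)
            ema = np.zeros(n)
            for t in range(1, n):
                ema[t] = (1.0 - lam) * ema[t - 1] + lam * dx[t - 1]   # uses only past increments
            return ema * dx                                           # position proportional to the signal

        pnl = ema_trend_pnl()
        windowed = pnl[: len(pnl) // 20 * 20].reshape(-1, 20).sum(axis=1)
        skew = ((windowed - windowed.mean()) ** 3).mean() / windowed.std() ** 3
        print(round(windowed.mean(), 4), round(skew, 3))              # mean near zero, positively skewed P&L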

  1. Transverse-momentum-dependent quark distribution functions of spin-one targets: Formalism and covariant calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ninomiya, Yu; Bentz, Wolfgang; Cloet, Ian C.

    In this paper, we present a covariant formulation and model calculations of the leading-twist time-reversal even transverse-momentum-dependent quark distribution functions (TMDs) for a spin-one target. Emphasis is placed on a description of these three-dimensional distribution functions which is independent of any constraints on the spin quantization axis. We apply our covariant spin description to all nine leading-twist time-reversal even ρ meson TMDs in the framework provided by the Nambu–Jona-Lasinio model, incorporating important aspects of quark confinement via the infrared cutoff in the proper-time regularization scheme. In particular, the behaviors of the three-dimensional TMDs in a tensor polarized spin-one hadron are illustrated. Sum rules and positivity constraints are discussed in detail. Our results do not exhibit the familiar Gaussian behavior in the transverse momentum, and other results of interest include the finding that the tensor polarized TMDs—associated with spin-one hadrons—are very sensitive to quark orbital angular momentum, and that the TMDs associated with the quark operator γ^+ γ_T γ_5 would vanish were it not for dynamical chiral symmetry breaking. In addition, we find that 44% of the ρ meson's spin is carried by the orbital angular momentum of the quarks, and that the magnitude of the tensor polarized quark distribution function is about 30% of the unpolarized quark distribution. Finally, a qualitative comparison between our results for the tensor structure of a quark-antiquark bound state is made to existing experimental and theoretical results for the two-nucleon (deuteron) bound state.

  2. Transverse-momentum-dependent quark distribution functions of spin-one targets: Formalism and covariant calculations

    DOE PAGES

    Ninomiya, Yu; Bentz, Wolfgang; Cloet, Ian C.

    2017-10-24

    In this paper, we present a covariant formulation and model calculations of the leading-twist time-reversal even transverse-momentum-dependent quark distribution functions (TMDs) for a spin-one target. Emphasis is placed on a description of these three-dimensional distribution functions which is independent of any constraints on the spin quantization axis. We apply our covariant spin description to all nine leading-twist time-reversal even ρ meson TMDs in the framework provided by the Nambu–Jona-Lasinio model, incorporating important aspects of quark confinement via the infrared cutoff in the proper-time regularization scheme. In particular, the behaviors of the three-dimensional TMDs in a tensor polarized spin-one hadron are illustrated. Sum rules and positivity constraints are discussed in detail. Our results do not exhibit the familiar Gaussian behavior in the transverse momentum, and other results of interest include the finding that the tensor polarized TMDs—associated with spin-one hadrons—are very sensitive to quark orbital angular momentum, and that the TMDs associated with the quark operator γ^+ γ_T γ_5 would vanish were it not for dynamical chiral symmetry breaking. In addition, we find that 44% of the ρ meson's spin is carried by the orbital angular momentum of the quarks, and that the magnitude of the tensor polarized quark distribution function is about 30% of the unpolarized quark distribution. Finally, a qualitative comparison between our results for the tensor structure of a quark-antiquark bound state is made to existing experimental and theoretical results for the two-nucleon (deuteron) bound state.

  3. Evolution of Particle Size Distributions in Fragmentation Over Time

    NASA Astrophysics Data System (ADS)

    Charalambous, C. A.; Pike, W. T.

    2013-12-01

    We present a new model of fragmentation based on a probabilistic calculation of the repeated fracture of a particle population. The resulting continuous solution, which is in closed form, gives the evolution of fragmentation products from an initial block, through a scale-invariant power-law relationship to a final comminuted powder. Models for the fragmentation of particles have been developed separately in mainly two different disciplines: the continuous integro-differential equations of batch mineral grinding (Reid, 1965) and the fractal analysis of geophysics (Turcotte, 1986) based on a discrete model with a single probability of fracture. The first gives a time-dependent development of the particle-size distribution, but has resisted a closed-form solution, while the latter leads to the scale-invariant power laws, but with no time dependence. Bird (2009) recently introduced a bridge between these two approaches with a step-wise iterative calculation of the fragmentation products. The development of the particle-size distribution occurs in discrete steps: during each fragmentation event, the particles will repeatedly fracture probabilistically, cascading down the length scales to a final size distribution reached after all particles have failed to further fragment. We have identified this process as equivalent to a sequence of trials for each particle with a fixed probability of fragmentation. Although the resulting distribution is discrete, it can be reformulated as a continuous distribution in maturity over time and particle size. In our model, Turcotte's power-law distribution emerges at a unique maturation index that defines a regime boundary. Up to this index, the fragmentation is in an erosional regime with the initial particle size setting the scaling. Fragmentation beyond this index is in a regime of comminution with rebreakage of the particles down to the size limit of fracture. The maturation index can increment continuously, for example under grinding conditions, or as discrete steps, such as with impact events. In both cases our model gives the energy associated with the fragmentation in terms of the developing surface area of the population. We show the agreement of our model with the evolution of particle size distributions associated with episodic and continuous fragmentation and how the evolution of some popular fractals may be represented using this approach. C. A. Charalambous and W. T. Pike (2013). Multi-Scale Particle Size Distributions of Mars, Moon and Itokawa based on a time-maturation dependent fragmentation model. Abstract Submitted to the AGU 46th Fall Meeting. Bird, N. R. A., Watts, C. W., Tarquis, A. M., & Whitmore, A. P. (2009). Modeling dynamic fragmentation of soil. Vadose Zone Journal, 8(1), 197-201. Reid, K. J. (1965). A solution to the batch grinding equation. Chemical Engineering Science, 20(11), 953-963. Turcotte, D. L. (1986). Fractals and fragmentation. Journal of Geophysical Research: Solid Earth 91(B2), 1921-1926.
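
    A toy version of the step-wise probabilistic cascade described above, with invented numbers: at each fragmentation event every particle fractures with a fixed probability into two equal-mass pieces, repeatedly, until nothing fractures further in that event; repeated events play the role of the maturation index.

        import numpy as np

        rng = np.random.default_rng(6)

        def fragmentation_event(sizes, p_frac=0.7, size_limit=1e-6):
            """One fragmentation event: each particle repeatedly fractures with
            probability p_frac into two equal halves until it fails to fracture
            or reaches the size limit of fracture."""
            out, stack = [], list(sizes)
            while stack:
                s = stack.pop()
                if s > size_limit and rng.random() < p_frac:
                    stack.extend([s / 2.0, s / 2.0])             # cascade down the length scales
                else:
                    out.append(s)
            return np.array(out)

        sizes = np.array([1.0])                                  # initial block of unit mass
        for _ in range(3):                                       # three events (e.g. impacts)
            sizes = fragmentation_event(sizes)
        print(len(sizes), sizes.min(), sizes.max())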

  4. Generalization bounds of ERM-based learning processes for continuous-time Markov chains.

    PubMed

    Zhang, Chao; Tao, Dacheng

    2012-12-01

    Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.

  5. Pleistocene evolutionary history of the Clouded Apollo (Parnassius mnemosyne): genetic signatures of climate cycles and a 'time-dependent' mitochondrial substitution rate.

    PubMed

    Gratton, P; Konopiński, M K; Sbordoni, V

    2008-10-01

    Genetic data are currently providing a large amount of new information on past distribution of species and are contributing to a new vision of Pleistocene ice ages. Nonetheless, an increasing number of studies on the 'time dependency' of mutation rates suggest that date assessments for evolutionary events of the Pleistocene might be overestimated. We analysed mitochondrial (mt) DNA (COI) sequence variation in 225 Parnassius mnemosyne individuals sampled across central and eastern Europe in order to assess (i) the existence of genetic signatures of Pleistocene climate shifts; and (ii) the timescale of demographic and evolutionary events. Our analyses reveal a phylogeographical pattern markedly influenced by the Pleistocene/Holocene climate shifts. Eastern Alpine and Balkan populations display comparatively high mtDNA diversity, suggesting multiple glacial refugia. On the other hand, three widely distributed and spatially segregated lineages occupy most of northern and eastern Europe, indicating postglacial recolonization from different refugial areas. We show that a conventional 'phylogenetic' substitution rate cannot account for the present distribution of genetic variation in this species, and we combine phylogeographical pattern and palaeoecological information in order to determine a suitable intraspecific rate through a Bayesian coalescent approach. We argue that our calibrated 'time-dependent' rate (0.096 substitutions/ million years), offers the most convincing time frame for the evolutionary events inferred from sequence data. When scaled by the new rate, estimates of divergence between Balkan and Alpine lineages point to c. 19 000 years before present (last glacial maximum), and parameters of demographic expansion for northern lineages are consistent with postglacial warming (5-11 000 years before present).

  6. A dependence modelling study of extreme rainfall in Madeira Island

    NASA Astrophysics Data System (ADS)

    Gouveia-Reis, Délia; Guerreiro Lopes, Luiz; Mendonça, Sandra

    2016-08-01

    The dependence between variables plays a central role in multivariate extremes. In this paper, the spatial dependence of Madeira Island's rainfall data is addressed within an extreme value copula approach through an analysis of annual maximum data. The impacts of altitude, slope orientation, distance between rain gauge stations, and distance from the stations to the sea are investigated for two different periods of time. The results obtained highlight the influence of the island's complex topography on the spatial distribution of extreme rainfall in Madeira Island.

  7. A note on some statistical properties of rise time parameters used in muon arrival time measurements

    NASA Technical Reports Server (NTRS)

    Vanderwalt, D. J.; Devilliers, E. J.

    1985-01-01

    Most investigations of the muon arrival time distribution in EAS during the past decade made use of parameters which can collectively be called rise time parameters. The rise time parameter T sub A/B is defined as the time taken for the integrated pulse from a detector to rise from A% to B% of its full amplitude. The use of these parameters is usually restricted to determining their radial dependence. This radial dependence of the rise time parameters is usually taken as a signature of the particle interaction characteristics in the shower. As these parameters have a stochastic nature, it seems reasonable that one should also take notice of this aspect of the rise time parameters. A statistical approach to rise time parameters is presented.
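
    As a concrete reading of the definition: with a digitized detector pulse, T sub A/B is just the time for the cumulative (integrated) signal to climb from A% to B% of its final value. The pulse shape and the percentages below are invented for illustration.

        import numpy as np

        def rise_time(pulse, t, a=0.10, b=0.50):
            """Rise-time parameter T_{A/B} of an integrated pulse (here T_{10/50})."""
            cum = np.cumsum(pulse)
            cum = cum / cum[-1]
            return t[np.searchsorted(cum, b)] - t[np.searchsorted(cum, a)]

        t = np.linspace(0.0, 200.0, 2001)                        # ns, illustrative sampling
        pulse = np.exp(-t / 40.0) * (1.0 - np.exp(-t / 5.0))     # toy muon arrival-time profile
        print(rise_time(pulse, t))                               # T_10/50 in ns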

  8. Assessment of veterinary drugs in plants using pharmacokinetic approaches: The absorption, distribution and elimination of tetracycline and sulfamethoxazole in ephemeral vegetables

    PubMed Central

    Chen, Hui-Ru; Rairat, Tirawat; Loh, Shih-Hurng; Wu, Yu-Chieh; Vickroy, Thomas W.

    2017-01-01

    The present study was carried out to demonstrate a novel use of pharmacokinetic approaches to characterize drug behavior/movement in vegetables, with implications for food safety. The absorption, distribution, metabolism and, most importantly, the elimination of tetracycline (TC) and sulfamethoxazole (SMX) in the edible plants Brassica rapa chinensis and Ipomoea aquatica grown hydroponically were demonstrated and studied using non-compartmental pharmacokinetic analysis. The results revealed drug-dependent and vegetable-dependent pharmacokinetic differences and indicated that ephemeral vegetables could have a high capacity for accumulating antibiotics (up to 160 μg g-1 for TC and 38 μg g-1 for SMX) within hours. The TC concentration in the root (Cmax) could reach 11 times that in the cultivation fluid and 3–28 times that in the petioles/stems. Based on the volume of distribution (Vss), SMX was 3–6 times more extensively distributed than TC. Both antibiotics showed an evident, albeit slow, elimination phase with elimination half-lives ranging from 22 to 88 hours. For the first time, drug elimination through the roots of a plant was demonstrated, and by viewing the root as a central compartment and continuous infusion without a loading dose as the drug administration mode, it is possible to pharmacokinetically monitor the movement of antibiotics and their fate in the vegetables with more detailed information not previously available. Phyto-pharmacokinetics could be a new area in which to develop models for the assessment of veterinary drugs in edible plants. PMID:28797073
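
    The elimination half-life quoted above is the kind of quantity a non-compartmental analysis extracts from the terminal phase; a minimal sketch, using invented concentration-time values rather than the measured TC/SMX data, is a log-linear fit of the last sampling points.

        import numpy as np

        def terminal_half_life(times_h, conc):
            """Terminal elimination half-life from a log-linear fit of the
            terminal-phase concentrations (non-compartmental estimate)."""
            slope, _ = np.polyfit(times_h, np.log(conc), 1)
            return np.log(2.0) / -slope

        t = np.array([24.0, 48.0, 72.0, 96.0, 120.0])            # h, hypothetical terminal phase
        c = np.array([80.0, 52.0, 34.0, 22.0, 14.5])             # µg g-1, hypothetical concentrations
        print(terminal_half_life(t, c))                          # falls within the reported 22-88 h range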

  9. A missing dimension in measures of vaccination impacts

    USGS Publications Warehouse

    Gomes, M. Gabriela M.; Lipsitch, Marc; Wargo, Andrew R.; Kurath, Gael; Rebelo, Carlota; Medley, Graham F.; Coutinho, Antonio

    2013-01-01

    Immunological protection, acquired from either natural infection or vaccination, varies among hosts, reflecting underlying biological variation and affecting population-level protection. Owing to the nature of resistance mechanisms, distributions of susceptibility and protection entangle with pathogen dose in a way that can be decoupled by adequately representing the dose dimension. Any infectious processes must depend in some fashion on dose, and empirical evidence exists for an effect of exposure dose on the probability of transmission to mumps-vaccinated hosts [1], the case-fatality ratio of measles [2], and the probability of infection and, given infection, of symptoms in cholera [3]. Extreme distributions of vaccine protection have been termed leaky (partially protects all hosts) and all-or-nothing (totally protects a proportion of hosts) [4]. These distributions can be distinguished in vaccine field trials from the time dependence of infections [5]. Frailty mixing models have also been proposed to estimate the distribution of protection from time to event data [6], [7], although the results are not comparable across regions unless there is explicit control for baseline transmission [8]. Distributions of host susceptibility and acquired protection can be estimated from dose-response data generated under controlled experimental conditions [9]–[11] and natural settings [12], [13]. These distributions can guide research on mechanisms of protection, as well as enable model validity across the entire range of transmission intensities. We argue for a shift to a dose-dimension paradigm in infectious disease science and community health.

  10. An estimation of distribution method for infrared target detection based on Copulas

    NASA Astrophysics Data System (ADS)

    Wang, Shuo; Zhang, Yiqun

    2015-10-01

    Track-before-detect (TBD) based target detection involves a hypothesis test of merit functions which measure each track as a possible target track. Its accuracy depends on the precision of the distribution of merit functions, which determines the threshold for the test. Generally, merit functions are regarded as Gaussian, and on this basis the distribution is estimated, which is true for most methods such as multiple hypothesis tracking (MHT). However, merit functions for some other methods, such as the dynamic programming algorithm (DPA), are non-Gaussian and cross-correlated. Since existing methods cannot reasonably measure the correlation, the exact distribution can hardly be estimated. If merit functions are assumed Gaussian and independent, the error between the actual distribution and its approximation may occasionally exceed 30 percent and diverges as it propagates. Hence, in this paper, we propose a novel estimation of distribution method based on copulas, by which the distribution can be estimated precisely, with an error of less than 1 percent and no propagation. Moreover, the estimation merely depends on the form of the merit functions and the structure of the tracking algorithm, and is invariant to measurements. Thus, the distribution can be estimated in advance, greatly reducing the demand for real-time calculation of distribution functions.

  11. Asymptotic Distributions of Coalescence Times and Ancestral Lineage Numbers for Populations with Temporally Varying Size

    PubMed Central

    Chen, Hua; Chen, Kun

    2013-01-01

    The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n − An(t) follows a Poisson distribution, and as m → n, n(n−1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference. PMID:23666939

  12. Asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size.

    PubMed

    Chen, Hua; Chen, Kun

    2013-07-01

    The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n - An(t) follows a Poisson distribution, and as m → n, n(n-1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference.
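
    The gamma limit quoted in both versions of this record is easy to check by simulation for a constant-size population: the mth coalescent time is a sum of independent exponential inter-coalescence waits, and for m close to n the scaled statistic n(n-1)Tm/2N(0) fits a gamma shape close to n - m. The rate convention below (diploid population of size N0, coalescence rate k(k-1)/(4N0) while k lineages remain) is an assumption of this sketch.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)

        def scaled_tm(n, m, N0, n_rep=20_000):
            """Simulate n(n-1)Tm/2N0 for a constant-size coalescent."""
            ks = np.arange(m + 1, n + 1)                         # k lineages -> k - 1 lineages
            rates = ks * (ks - 1) / (4.0 * N0)
            waits = rng.exponential(1.0 / rates, size=(n_rep, ks.size))
            return n * (n - 1) * waits.sum(axis=1) / (2.0 * N0)

        sample = scaled_tm(n=200, m=190, N0=10_000)
        shape, _, scale = stats.gamma.fit(sample, floc=0)
        print(round(shape, 2), round(scale, 2))                  # shape comes out near n - m = 10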

  13. Single-photon semiconductor photodiodes for distributed optical fiber sensors: state of the art and perspectives

    NASA Astrophysics Data System (ADS)

    Ripamonti, Giancarlo; Lacaita, Andrea L.

    1993-03-01

    The extreme sensitivity and time resolution of Geiger-mode avalanche photodiodes (GM- APDs) have already been exploited for optical time domain reflectometry (OTDR). Better than 1 cm spatial resolution in Rayleigh scattering detection was demonstrated. Distributed and quasi-distributed optical fiber sensors can take advantage of the capabilities of GM-APDs. Extensive studies have recently disclosed the main characteristics and limitations of silicon devices, both commercially available and developmental. In this paper we report an analysis of the performance of these detectors. The main characteristics of GM-APDs of interest for distributed optical fiber sensors are briefly reviewed. Command electronics (active quenching) is then introduced. The detector timing performance sets the maximum spatial resolution in experiments employing OTDR techniques. We highlight that the achievable time resolution depends on the physics of the avalanche spreading over the device area. On the basis of these results, trade-off between the important parameters (quantum efficiency, time resolution, background noise, and afterpulsing effects) is considered. Finally, we show first results on Germanium devices, capable of single photon sensitivity at 1.3 and 1.5 micrometers with sub- nanosecond time resolution.

  14. FDR doesn't Tell the Whole Story: Joint Influence of Effect Size and Covariance Structure on the Distribution of the False Discovery Proportions

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Ploutz-Snyder, Robert; Fiedler, James

    2011-01-01

    As part of a 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report results of simulations that estimated the false discovery rate (FDR) for equally correlated test statistics using a well-known multiple-test procedure. In our study we estimate the distribution of the false discovery proportion (FDP) for the same procedure under a variety of correlation structures among multiple dependent variables in a MANOVA context. Specifically, we study the mean (the FDR), skewness, kurtosis, and percentiles of the FDP distribution in the case of multiple comparisons that give rise to correlated non-central t-statistics when results at several time periods are being compared to baseline. Even if the FDR achieves its nominal value, other aspects of the distribution of the FDP depend on the interaction between signed effect sizes and correlations among variables, proportion of true nulls, and number of dependent variables. We show examples where the mean FDP (the FDR) is 10% as designed, yet there is a surprising probability of having 30% or more false discoveries. Thus, in a real experiment, the proportion of false discoveries could be quite different from the stipulated FDR.
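
    The point that the FDR alone hides the spread of the FDP can be reproduced with a small simulation: equally correlated z-statistics, a multiple-test procedure at a nominal level, and the full distribution of the per-replicate false discovery proportion. The Benjamini-Hochberg step-up procedure and all numbers below are stand-ins for illustration; the paper studies a different (step-down) procedure and a MANOVA setting.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)

        def fdp_samples(m=100, m0=80, rho=0.5, delta=3.0, q=0.10, n_rep=2000):
            """FDP of Benjamini-Hochberg at level q for m equicorrelated
            z-statistics, m0 of them true nulls, the rest shifted by delta."""
            fdp = np.zeros(n_rep)
            for r in range(n_rep):
                z = np.sqrt(rho) * rng.standard_normal() + np.sqrt(1 - rho) * rng.standard_normal(m)
                z[m0:] += delta                                  # non-null effects
                p = 2.0 * stats.norm.sf(np.abs(z))
                order = np.argsort(p)
                passed = np.nonzero(p[order] <= q * np.arange(1, m + 1) / m)[0]
                if passed.size:
                    rejected = order[: passed.max() + 1]
                    fdp[r] = np.mean(rejected < m0)              # fraction of rejections that are nulls
            return fdp

        fdp = fdp_samples()
        print(round(fdp.mean(), 3), round(np.quantile(fdp, 0.95), 3))   # FDR vs the tail of the FDP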

  15. Online Robot Dead Reckoning Localization Using Maximum Relative Entropy Optimization With Model Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urniezius, Renaldas

    2011-03-14

    The principle of Maximum relative Entropy optimization was analyzed for dead reckoning localization of a rigid body when observation data from two attached accelerometers were collected. Model constraints were derived from the relationships between the sensors. The experimental results confirmed that the noise on each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency within the time series data. Dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing the dependency within the time series data. Data from an autocalibration experiment were revisited by removing the initial assumption that the instantaneous rotation axis of a rigid body was known. Performance results confirmed that such an approach could be used for online dead reckoning localization.

  16. Calcite dissolution rate spectra measured by in situ digital holographic microscopy.

    PubMed

    Brand, Alexander S; Feng, Pan; Bullard, Jeffrey W

    2017-09-01

    Digital holographic microscopy in reflection mode is used to track in situ, real-time nanoscale topography evolution of cleaved (104) calcite surfaces exposed to flowing or static deionized water. The method captures full-field holograms of the surface at frame rates of up to 12.5 s-1. Numerical reconstruction provides 3D surface topography with vertical resolution of a few nanometers and enables measurement of time-dependent local dissolution fluxes. A statistical distribution, or spectrum, of dissolution rates is generated by sampling multiple area domains on multiple crystals. The data show, as has been demonstrated by Fischer et al. (2012), that dissolution is most fully described by a rate spectrum, although the modal dissolution rate agrees well with published mean dissolution rates (e.g., 0.1 µmol m-2 s-1 to 0.3 µmol m-2 s-1). Rhombohedral etch pits and other morphological features resulting from rapid local dissolution appear at different times and are heterogeneously distributed across the surface and through the depth. This makes the distribution in rates measured on a single crystal dependent both on the sample observation field size and on time, even at nominally constant undersaturation. Statistical analysis of the inherent noise in the DHM measurements indicates that the technique is robust and that it likely can be applied to quantify and interpret rate spectra for the dissolution or growth of other minerals.

  17. Calcite dissolution rate spectra measured by in situ digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Brand, Alexander S.; Feng, Pan; Bullard, Jeffrey W.

    2017-09-01

    Digital holographic microscopy in reflection mode is used to track in situ, real-time nanoscale topography evolution of cleaved (104) calcite surfaces exposed to flowing or static deionized water. The method captures full-field holograms of the surface at frame rates of up to 12.5 s-1. Numerical reconstruction provides 3D surface topography with vertical resolution of a few nanometers and enables measurement of time-dependent local dissolution fluxes. A statistical distribution, or spectrum, of dissolution rates is generated by sampling multiple area domains on multiple crystals. The data show, as has been demonstrated by Fischer et al. (2012), that dissolution is most fully described by a rate spectrum, although the modal dissolution rate agrees well with published mean dissolution rates (e.g., 0.1 μmol m-2 s-1 to 0.3 μmol m-2 s-1). Rhombohedral etch pits and other morphological features resulting from rapid local dissolution appear at different times and are heterogeneously distributed across the surface and through the depth. This makes the distribution in rates measured on a single crystal dependent both on the sample observation field size and on time, even at nominally constant undersaturation. Statistical analysis of the inherent noise in the DHM measurements indicates that the technique is robust and that it likely can be applied to quantify and interpret rate spectra for the dissolution or growth of other minerals.

  18. Evaluation of the pre-posterior distribution of optimized sampling times for the design of pharmacokinetic studies.

    PubMed

    Duffull, Stephen B; Graham, Gordon; Mengersen, Kerrie; Eccleston, John

    2012-01-01

    Information theoretic methods are often used to design studies that aim to learn about pharmacokinetic and linked pharmacokinetic-pharmacodynamic systems. These design techniques, such as D-optimality, provide the optimum experimental conditions. The performance of the optimum design will depend on the ability of the investigator to comply with the proposed study conditions. However, in clinical settings it is not possible to comply exactly with the optimum design and hence some degree of unplanned suboptimality occurs due to error in the execution of the study. In addition, due to the nonlinear relationship of the parameters of these models to the data, the designs are also locally dependent on an arbitrary choice of a nominal set of parameter values. A design that is robust to both study conditions and uncertainty in the nominal set of parameter values is likely to be of use clinically. We propose an adaptive design strategy to account for both execution error and uncertainty in the parameter values. In this study we investigate designs for a one-compartment first-order pharmacokinetic model. We do this in a Bayesian framework using Markov-chain Monte Carlo (MCMC) methods. We consider log-normal prior distributions on the parameters and investigate several prior distributions on the sampling times. An adaptive design was used to find the sampling window for the current sampling time conditional on the actual times of all previous samples.

  19. Approach to thermal equilibrium in atomic collisions.

    PubMed

    Zhang, P; Kharchenko, V; Dalgarno, A; Matsumi, Y; Nakayama, T; Takahashi, K

    2008-03-14

    The energy relaxation of fast atoms moving in a thermal bath gas is explored experimentally and theoretically. Two time scales characterize the equilibration, one a short time, in which the isotropic energy distribution profile relaxes to a Maxwellian shape at some intermediate effective temperature, and the second, a longer time in which the relaxation preserves a Maxwellian distribution and its effective temperature decreases continuously to the bath gas temperature. The formation and preservation of a Maxwellian distribution does not depend on the projectile to bath gas atom mass ratio. This two-stage behavior arises due to the dominance of small angle scattering and small energy transfer in the collisions of neutral particles. Measurements of the evolving Doppler profiles of emission from excited initially energetic nitrogen atoms traversing bath gases of helium and argon confirm the theoretical predictions.

  20. Gravitational lensing, time delay, and gamma-ray bursts

    NASA Technical Reports Server (NTRS)

    Mao, Shude

    1992-01-01

    The probability distributions of time delay in gravitational lensing by point masses and isolated galaxies (modeled as singular isothermal spheres) are studied. For point lenses (all with the same mass) the probability distribution is broad, with a peak at delta(t) of about 50 s; for singular isothermal spheres, the probability distribution is a rapidly decreasing function with increasing time delay, with a median delta(t) of about 1/h month, and its behavior depends sensitively on the luminosity function of galaxies. The present simplified calculation is particularly relevant to gamma-ray bursts if they are of cosmological origin. The frequency of 'recurrent' bursts due to gravitational lensing by galaxies is probably between 0.05 and 0.4 percent. Gravitational lensing can be used as a test of the cosmological origin of gamma-ray bursts.

  1. Analysis of electrophoresis performance

    NASA Technical Reports Server (NTRS)

    Roberts, G. O.

    1984-01-01

    The SAMPLE computer code models electrophoresis separation in a wide range of conditions. Results are included for steady three dimensional continuous flow electrophoresis (CFE), time dependent gel and acetate film experiments in one or two dimensions and isoelectric focusing in one dimension. The code evolves N two dimensional radical concentration distributions in time, or in distance down a CFE chamber. For each time or distance increment, there are six stages, successively obtaining the pH distribution, the corresponding degrees of ionization for each radical, the conductivity, the electric field and current distribution, and the flux components in each direction for each separate radical. The final stage is to update the radical concentrations. The model formulation for ion motion in an electric field ignores activity effects and is valid only for low concentrations; at larger concentrations the computed conductivity is therefore also invalid.

  2. Computing the Expected Cost of an Appointment Schedule for Statistically Identical Customers with Probabilistic Service Times

    PubMed Central

    Dietz, Dennis C.

    2014-01-01

    A cogent method is presented for computing the expected cost of an appointment schedule where customers are statistically identical, the service time distribution has known mean and variance, and customer no-shows occur with time-dependent probability. The approach is computationally efficient and can be easily implemented to evaluate candidate schedules within a schedule optimization algorithm. PMID:24605070
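
    The abstract does not reproduce the algorithm itself; the Monte Carlo sketch below only illustrates the ingredients it names: statistically identical customers, a service-time distribution matched to a given mean and variance (gamma here), and a time-dependent no-show probability. The schedule, cost weights and no-show function are assumptions for illustration.

```python
# Monte Carlo estimate of the expected cost of an appointment schedule with
# gamma service times matched to a given mean and variance and a
# time-dependent no-show probability. All numerical values are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def expected_cost(appt_times, mean=15.0, var=25.0,
                  noshow=lambda t: 0.10 + 0.001 * t,
                  c_wait=1.0, c_idle=0.5, n_rep=20000):
    shape, scale = mean**2 / var, var / mean      # gamma matching mean/variance
    total = 0.0
    for _ in range(n_rep):
        server_free, cost = 0.0, 0.0
        for t in appt_times:
            cost += c_idle * max(0.0, t - server_free)   # server idle before slot t
            server_free = max(server_free, t)
            if rng.random() < noshow(t):                 # customer fails to show
                continue
            cost += c_wait * (server_free - t)           # customer waiting time
            server_free += rng.gamma(shape, scale)
        total += cost
    return total / n_rep

print(round(expected_cost(np.arange(0.0, 120.0, 15.0)), 2))
```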

  3. Understanding Intense Laser Interactions with Solid Density Plasma

    DTIC Science & Technology

    2017-01-04

    obtain the time-dependent diffraction efficiency. Further improvements may lead to femtosecond temporal resolution, with negligible pump-probe jitter being possible with future laser-wakefield-accelerator ultrafast-electron-diffraction schemes.

  4. EVALUATION OF VENTILATION PERFORMANCE FOR INDOOR SPACE

    EPA Science Inventory

    The paper discusses a personal-computer-based application of computational fluid dynamics that can be used to determine the turbulent flow field and time-dependent/steady-state contaminant concentration distributions within isothermal indoor space. (NOTE: Ventilation performance ...

  5. Power Hardware-in-the-Loop Testing of Multiple Photovoltaic Inverters' Volt-Var Control with Real-Time Grid Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Sudipta; Nelson, Austin; Hoke, Anderson

    2016-12-12

    Traditional testing methods fall short in evaluating interactions between multiple smart inverters providing advanced grid support functions because such interactions largely depend on their placements on the electric distribution systems with impedances between them. Even though significant concerns have been raised by the utilities on the effects of such interactions, little effort has been made to evaluate them. In this paper, power hardware-in-the-loop (PHIL) based testing was utilized to evaluate autonomous volt-var operations of multiple smart photovoltaic (PV) inverters connected to a simple distribution feeder model. The results provided in this paper show that depending on volt-var control (VVC) parameters and grid parameters, interaction between inverters and between the inverter and the grid is possible in some extreme cases with very high VVC slopes, fast response times and large VVC response delays.

  6. INFRARED OBSERVATIONAL MANIFESTATIONS OF YOUNG DUSTY SUPER STAR CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-González, Sergio; Tenorio-Tagle, Guillermo; Silich, Sergiy, E-mail: sergiomtz@inaoep.mx

    The growing evidence pointing at core-collapse supernovae as large dust producers makes young massive stellar clusters ideal laboratories to study the evolution of dust immersed in a hot plasma. Here we address the stochastic injection of dust by supernovae, and follow its evolution due to thermal sputtering within the hot and dense plasma generated by young stellar clusters. Under these considerations, dust grains are heated by means of random collisions with gas particles which result in the appearance of infrared spectral signatures. We present time-dependent infrared spectral energy distributions that are to be expected from young stellar clusters. Our results are based on hydrodynamic calculations that account for the stochastic injection of dust by supernovae. These also consider gas and dust radiative cooling, stochastic dust temperature fluctuations, the exit of dust grains out of the cluster volume due to the cluster wind, and a time-dependent grain size distribution.

  7. The nitrate response of a lowland catchment and groundwater travel times

    NASA Astrophysics Data System (ADS)

    van der Velde, Ype; Rozemeijer, Joachim; de Rooij, Gerrit; van Geer, Frans

    2010-05-01

    Intensive agriculture in lowland catchments causes eutrophication of downstream waters. To determine effective measures to reduce the nutrient loads from upstream lowland catchments, we need to understand the origin of long-term and daily variations in surface water nutrient concentrations. Surface water concentrations are often linked to travel time distributions of water passing through the saturated and unsaturated soil of the contributing catchment. This distribution represents the contact time over which sorption, desorption and degradation take place. However, travel time distributions are strongly influenced by processes like tube drain flow, overland flow and the dynamics of draining ditches and streams and therefore exhibit strong daily and seasonal variations. The study we will present is situated in the 6.6 km2 Hupsel brook catchment in The Netherlands. In this catchment nitrate and chloride concentrations have been intensively monitored for the past 26 years under steadily decreasing agricultural inputs. We described the complicated dynamics of subsurface water fluxes, in which streams, ditches and tube drains locally switch between active and passive depending on the ambient groundwater level, with a groundwater model of high spatial and temporal resolution. A transient particle tracking approach is used to derive a unique catchment-scale travel time distribution for each day during the 26 year model period. These transient travel time distributions are not smooth, but strongly spiked, reflecting the contribution of past rainfall events to the current discharge. We will show that a catchment-scale mass response function approach that only describes catchment-scale mixing and degradation suffices to accurately reproduce observed chloride and nitrate surface water concentrations as long as the mass response functions include the dynamics of travel time distributions caused by the highly variable connectivity of the surface water network.

  8. Electron pitch angle distributions throughout the magnetosphere as observed on Ogo 5.

    NASA Technical Reports Server (NTRS)

    West, H. I., Jr.; Buck, R. M.; Walton, J. R.

    1973-01-01

    A survey of the equatorial pitch angle distributions of energetic electrons is provided for all local times out to radial distances of 20 earth radii on the night side of the earth and to the magnetopause on the day side of the earth. In much of the inner magnetosphere and in the outer magnetosphere on the day side of the earth, the normal loss cone distribution prevails. The effects of drift shell splitting - i.e., the appearance of pitch angle distributions with minimums at 90 deg, called butterfly distributions - become apparent in the early afternoon magnetosphere at extended distances, and the distribution is observed in to 5.5 earth radii in the nighttime magnetosphere. Inside about 9 earth radii the pitch angle effects are quite energy-dependent. Beyond about 9 earth radii in the premidnight magnetosphere during quiet times the butterfly distribution is often observed. It is shown that these electrons cannot survive a drift to dawn without being considerably modified. The role of substorm activity in modifying these distributions is identified.

  9. Scale Dependence of Spatiotemporal Intermittence of Rain

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Siddani, Ravi K.

    2011-01-01

    It is a common experience that rainfall is intermittent in space and time. This is reflected by the fact that the statistics of area- and/or time-averaged rain rate is described by a mixed distribution with a nonzero probability of having a sharp value zero. In this paper we have explored the dependence of the probability of zero rain on the averaging space and time scales in large multiyear data sets based on radar and rain gauge observations. A stretched exponential formula fits the observed scale dependence of the zero-rain probability. The proposed formula makes it apparent that the space-time support of the rain field is not quite a set of measure zero as is sometimes supposed. We also give an explanation of the observed behavior in terms of a simple probabilistic model based on the premise that the rainfall process has an intrinsic memory.
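
    A minimal sketch, assuming synthetic data, of fitting a stretched-exponential form p0(L) = exp(-(L/L0)^beta) to the probability of zero rain as a function of averaging scale; the data points below are invented for illustration and are not from the radar or gauge observations.

```python
# Fit a stretched exponential to a synthetic zero-rain probability curve.
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(L, L0, beta):
    return np.exp(-(L / L0) ** beta)

scales = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])       # averaging scale (e.g. km)
p_zero = np.array([0.82, 0.74, 0.63, 0.50, 0.36, 0.23, 0.12])   # synthetic probabilities

(L0, beta), _ = curve_fit(stretched_exp, scales, p_zero, p0=(10.0, 0.7))
print(f"L0 = {L0:.2f}, beta = {beta:.2f}")
```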

  10. TRUMP. Transient & S-State Temperature Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.C.; Turner, W.D.

    1992-03-03

    TRUMP solves a general nonlinear parabolic partial differential equation describing flow in various kinds of potential fields, such as fields of temperature, pressure, or electricity and magnetism; simultaneously, it will solve two additional equations representing, in thermal problems, heat production by decomposition of two reactants having rate constants with a general Arrhenius temperature dependence. Steady-state and transient flow in one, two, or three dimensions are considered in geometrical configurations having simple or complex shapes and structures. Problem parameters may vary with spatial position, time, or primary dependent variables, temperature, pressure, or field strength. Initial conditions may vary with spatial position, and among the criteria that may be specified for ending a problem are upper and lower limits on the size of the primary dependent variable, upper limits on the problem time or on the number of time-steps or on the computer time, and attainment of steady state.
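
    As a toy analogue of the class of problems TRUMP handles, the sketch below integrates a 1D nonlinear heat equation with a single Arrhenius-type reactive heat source by an explicit finite-difference scheme; all material constants and the initial hot spot are illustrative, and the real code is far more general (multi-dimensional, variable properties, two reactants, several stopping criteria).

```python
# Explicit finite-difference integration of a 1D heat equation with a
# first-order Arrhenius heat source; constants are illustrative only.
import numpy as np

nx, dx, dt = 51, 0.01, 0.01                  # grid points, spacing (m), step (s)
alpha = 1e-5                                  # thermal diffusivity (m^2/s)
A, Ea, R, dH = 1e3, 5e4, 8.314, 200.0         # rate prefactor, activation energy, gas constant, heating
T = np.full(nx, 300.0)                        # temperature field (K)
T[nx // 2] = 600.0                            # initial hot spot
c = np.ones(nx)                               # reactant concentration (arbitrary)

assert alpha * dt / dx**2 < 0.5               # explicit-scheme stability criterion

for _ in range(1000):                         # 10 s of simulated time
    rate = A * np.exp(-Ea / (R * T)) * c      # Arrhenius reaction rate
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T += dt * (alpha * lap + dH * rate)       # conduction plus reactive heating
    c = np.maximum(c - dt * rate, 0.0)        # reactant consumption
    T[0] = T[-1] = 300.0                      # fixed-temperature boundaries

print(round(T.max(), 1), round(c.min(), 3))
```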

  11. Orthogonal recursive bisection data decomposition for high performance computing in cardiac model simulations: dependence on anatomical geometry.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U J; Seemann, Gunnar; Dossel, Olaf; Pitman, Michael C; Rice, John J

    2009-01-01

    The orthogonal recursive bisection (ORB) algorithm can be used as a data decomposition strategy to distribute a large data set of a cardiac model to a distributed memory supercomputer. It has been shown previously that good scaling results can be achieved using the ORB algorithm for data decomposition. However, the ORB algorithm depends on the distribution of computational load of each element in the data set. In this work we investigated the dependence of data decomposition and load balancing on different rotations of the anatomical data set to achieve optimization in load balancing. The anatomical data set was given by both ventricles of the Visible Female data set in a 0.2 mm resolution. Fiber orientation was included. The data set was rotated by 90 degrees around the x, y and z axis, respectively. By either translating or by simply taking the magnitude of the resulting negative coordinates we were able to create 14 data sets of the same anatomy with different orientation and position in the overall volume. Computational load ratios for non-tissue vs. tissue elements used in the data decomposition were 1:1, 1:2, 1:5, 1:10, 1:25, 1:38.85, 1:50 and 1:100 to investigate the effect of different load ratios on the data decomposition. The ten Tusscher et al. (2004) electrophysiological cell model was used in monodomain simulations of 1 ms simulation time to compare performance using the different data sets and orientations. The simulations were carried out for load ratios 1:10, 1:25 and 1:38.85 on a 512 processor partition of the IBM Blue Gene/L supercomputer. The results show that the data decomposition does depend on the orientation and position of the anatomy in the global volume. The difference in total run time between the data sets is 10 s for a simulation time of 1 ms. This yields a difference of about 28 h for a simulation of 10 s simulation time. However, given larger processor partitions, the difference in run time decreases and becomes less significant. Depending on the processor partition size, future work will have to consider the orientation of the anatomy in the global volume for longer simulation runs.
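
    A compact, hedged sketch of orthogonal recursive bisection: weighted elements are recursively split along the axis of largest spatial extent so that each half carries roughly equal computational load. The points, weights and the 10:1 tissue/non-tissue load ratio below are illustrative stand-ins, not the cardiac data set.

```python
# Orthogonal recursive bisection over weighted 3D points (illustrative data).
import numpy as np

def orb(points, weights, n_parts):
    """Return index arrays for n_parts partitions with roughly equal load."""
    work = [(np.arange(len(points)), n_parts)]
    out = []
    while work:
        ids, n = work.pop()
        if n == 1:
            out.append(ids)
            continue
        axis = np.ptp(points[ids], axis=0).argmax()            # longest spatial extent
        order = ids[np.argsort(points[ids, axis])]
        cum = np.cumsum(weights[order])
        n_left = n // 2
        cut = int(np.searchsorted(cum, cum[-1] * n_left / n))  # balance the load
        cut = min(max(cut, 1), len(order) - 1)                 # keep both halves non-empty
        work.append((order[:cut], n_left))
        work.append((order[cut:], n - n_left))
    return out

rng = np.random.default_rng(2)
pts = rng.random((10000, 3))                                   # element coordinates
w = np.where(rng.random(10000) < 0.3, 10.0, 1.0)               # tissue vs non-tissue load, 10:1
parts = orb(pts, w, 8)
print(sorted(round(float(w[p].sum()), 1) for p in parts))      # per-partition load
```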

  12. Lognormal field size distributions as a consequence of economic truncation

    USGS Publications Warehouse

    Attanasi, E.D.; Drew, L.J.

    1985-01-01

    The assumption of lognormal (parent) field size distributions has for a long time been applied to resource appraisal and evaluation of exploration strategy by the petroleum industry. However, frequency distributions estimated with observed data and used to justify this hypothesis are conditional. Examination of various observed field size distributions across basins and over time shows that such distributions should be regarded as the end result of an economic filtering process. Commercial discoveries depend on oil and gas prices and field development costs. Some new fields are eliminated due to location, depths, or water depths. This filtering process is called economic truncation. Economic truncation may occur when predictions of a discovery process are passed through an economic appraisal model. We demonstrate that (1) economic resource appraisals, (2) forecasts of levels of petroleum industry activity, and (3) expected benefits of developing and implementing cost reducing technology are sensitive to assumptions made about the nature of that portion of the (parent) field size distribution subject to economic truncation. © 1985 Plenum Publishing Corporation.
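
    The sketch below illustrates economic truncation on synthetic data: a lognormal parent field-size distribution is filtered by a price-dependent minimum economic field size. The break-even rule and all numbers are hypothetical, not the paper's appraisal model.

```python
# Economic truncation of a synthetic lognormal parent field-size distribution.
import numpy as np

rng = np.random.default_rng(3)
parent = rng.lognormal(mean=2.0, sigma=1.5, size=100_000)   # parent field sizes

def commercial(sizes, price, dev_cost=50.0):
    """Keep only fields above a crude, price-dependent break-even size."""
    return sizes[sizes >= dev_cost / price]

for price in (5.0, 10.0, 20.0):
    obs = commercial(parent, price)
    print(f"price {price:5.1f}: {len(obs):6d} commercial fields, "
          f"mean observed size {obs.mean():6.1f}")
```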

  13. Patterns of particle distribution in multiparticle systems by random walks with memory enhancement and decay

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-Jie; Zou, Xian-Wu; Huang, Sheng-You; Zhang, Wei; Jin, Zhun-Zhi

    2002-07-01

    We investigate the pattern of particle distribution and its evolution with time in multiparticle systems using the model of random walks with memory enhancement and decay. This model describes some biological intelligent walks. With decrease in the memory decay exponent α, the distribution of particles changes from a random dispersive pattern to a locally dense one, and then returns to the random one. Correspondingly, the fractal dimension Df,p characterizing the distribution of particle positions increases from a low value to a maximum and then decreases to the low one again. This is determined by the degree of overlap of regions consisting of sites with remanent information. The second moment of the density ρ(2) was introduced to investigate the inhomogeneity of the particle distribution. The dependence of ρ(2) on α is similar to that of Df,p on α. ρ(2) increases with time as a power law in the process of adjusting the particle distribution, and then ρ(2) tends to a stable equilibrium value.

  14. NMR relaxation in natural soils: Fast Field Cycling and T1-T2 Determination by IR-MEMS

    NASA Astrophysics Data System (ADS)

    Haber-Pohlmeier, S.; Pohlmeier, A.; Stapf, S.; van Dusschoten, D.

    2009-04-01

    Soils are natural porous media of the highest importance for food production and the sustainment of water resources. For these functions, prominent properties are their ability to retain and transport water, which is mainly controlled by the pore size distribution. The latter is related to the NMR relaxation times of water molecules, of which the longitudinal relaxation time can be determined non-invasively by fast-field cycling relaxometry (FFC), and both are obtainable by inversion recovery multi-echo imaging (IR-MEMS) methods. The advantage of the FFC method is the determination of the field dependent dispersion of the spin-lattice relaxation rate, whereas MRI at high field is capable of yielding spatially resolved T1 and T2 times. Here we present results of T1 relaxation time distributions of water in three natural soils, obtained by the analysis of FFC data by means of the inverse Laplace transformation (CONTIN). Kaldenkirchen soil shows relatively broad bimodal distribution functions D(T1) which shift to higher relaxation rates with increasing relaxation field. These data are compared to spatially resolved T1 and T2 distributions, obtained by IR-MEMS. The distribution of T1 corresponds well to that obtained by FFC.

  15. Prediction-based dynamic load-sharing heuristics

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.; Devarakonda, Murthy; Iyer, Ravishankar K.

    1993-01-01

    The authors present dynamic load-sharing heuristics that use predicted resource requirements of processes to manage workloads in a distributed system. A previously developed statistical pattern-recognition method is employed for resource prediction. While nonprediction-based heuristics depend on a rapidly changing system status, the new heuristics depend on slowly changing program resource usage patterns. Furthermore, prediction-based heuristics can be more effective since they use future requirements rather than just the current system state. Four prediction-based heuristics, two centralized and two distributed, are presented. Using trace driven simulations, they are compared against random scheduling and two effective nonprediction based heuristics. Results show that the prediction-based centralized heuristics achieve up to 30 percent better response times than the nonprediction centralized heuristic, and that the prediction-based distributed heuristics achieve up to 50 percent improvements relative to their nonprediction counterpart.
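
    A schematic simulation (invented workload) contrasting random placement with a prediction-based heuristic that sends each job to the node with the smallest predicted completion time; the noisy-prediction model is an assumption, not the paper's statistical pattern-recognition method.

```python
# Random vs. prediction-based placement on a small cluster of identical nodes.
import numpy as np

rng = np.random.default_rng(9)
n_nodes, n_jobs = 4, 5000
demand = rng.lognormal(mean=0.0, sigma=0.8, size=n_jobs)      # true CPU seconds
predicted = demand * rng.lognormal(0.0, 0.2, size=n_jobs)     # imperfect prediction
arrivals = np.cumsum(rng.exponential(0.5, size=n_jobs))       # Poisson-like arrivals

def simulate(choose):
    free_at = np.zeros(n_nodes)                 # time each node becomes free
    response = []
    for t, d, p in zip(arrivals, demand, predicted):
        node = choose(free_at, t, p)
        start = max(t, free_at[node])
        free_at[node] = start + d
        response.append(free_at[node] - t)      # completion minus arrival
    return float(np.mean(response))

random_pick = lambda free_at, t, p: rng.integers(len(free_at))
predictive = lambda free_at, t, p: int(np.argmin(np.maximum(free_at, t) + p))

print("random placement mean response:", round(simulate(random_pick), 2))
print("prediction-based mean response:", round(simulate(predictive), 2))
```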

  16. A common stochastic accumulator with effector-dependent noise can explain eye-hand coordination

    PubMed Central

    Gopal, Atul; Viswanathan, Pooja

    2015-01-01

    The computational architecture that enables the flexible coupling between otherwise independent eye and hand effector systems is not understood. By using a drift diffusion framework, in which variability of the reaction time (RT) distribution scales with mean RT, we tested the ability of a common stochastic accumulator to explain eye-hand coordination. Using a combination of behavior, computational modeling and electromyography, we show how a single stochastic accumulator to threshold, followed by noisy effector-dependent delays, explains eye-hand RT distributions and their correlation, while an alternate independent, interactive eye and hand accumulator model does not. Interestingly, the common accumulator model did not explain the RT distributions of the same subjects when they made eye and hand movements in isolation. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning. PMID:25568161
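
    A minimal sketch, with assumed parameters, of the common-accumulator idea: a single drift-diffusion process crosses one threshold, and independent noisy effector delays are then added for eye and hand, producing correlated RT distributions.

```python
# Common stochastic accumulator followed by effector-dependent motor delays.
import numpy as np

rng = np.random.default_rng(4)

def decision_times(n=5000, drift=0.8, noise=1.0, threshold=40.0, dt=1.0):
    """First-passage times (ms) of a single drift-diffusion accumulator."""
    times = np.empty(n)
    for i in range(n):
        x, t = 0.0, 0.0
        while x < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times[i] = t
    return times

decision = decision_times()
eye_rt = decision + rng.normal(50.0, 10.0, decision.size)     # eye motor delay (ms)
hand_rt = decision + rng.normal(120.0, 25.0, decision.size)   # hand motor delay (ms)
print("eye-hand RT correlation:", round(float(np.corrcoef(eye_rt, hand_rt)[0, 1]), 2))
print("mean eye RT:", round(float(eye_rt.mean()), 1), "ms;",
      "mean hand RT:", round(float(hand_rt.mean()), 1), "ms")
```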

  17. The light-front gauge-invariant energy-momentum tensor

    DOE PAGES

    Lorce, Cedric

    2015-08-11

    In this study, we provide for the first time a complete parametrization for the matrix elements of the generic asymmetric, non-local and gauge-invariant canonical energy-momentum tensor, generalizing therefore former works on the symmetric, local and gauge-invariant kinetic energy-momentum tensor also known as the Belinfante-Rosenfeld energy-momentum tensor. We discuss in detail the various constraints imposed by non-locality, linear and angular momentum conservation. We also derive the relations with two-parton generalized and transverse-momentum dependent distributions, clarifying what can be learned from the latter. In particular, we show explicitly that two-parton transverse-momentum dependent distributions cannot provide any model-independent information about the parton orbital angular momentum. On the way, we recover the Burkardt sum rule and obtain similar new sum rules for higher-twist distributions.

  18. Study of vesicle size distribution dependence on pH value based on nanopore resistive pulse method

    NASA Astrophysics Data System (ADS)

    Lin, Yuqing; Rudzevich, Yauheni; Wearne, Adam; Lumpkin, Daniel; Morales, Joselyn; Nemec, Kathleen; Tatulian, Suren; Lupan, Oleg; Chow, Lee

    2013-03-01

    Vesicles are low-micron to sub-micron spheres formed by a lipid bilayer shell and serve as potential vehicles for drug delivery. Vesicle size is thought to be one of the key variables affecting delivery efficiency, since it is correlated with factors such as circulation and residence time in blood, the rate of cell endocytosis, and the efficiency of cell targeting. In this work, we demonstrate accessible and reliable detection and size distribution measurement employing a glass nanopore device based on the resistive pulse method. This novel method enables us to investigate the dependence of the vesicle size distribution on the pH difference across the membrane with very small sample volumes and at rapid speed. This provides useful information for optimizing the efficiency of drug delivery in a pH sensitive environment.

  19. Distribution of the anticancer drugs doxorubicin, mitoxantrone and topotecan in tumors and normal tissues.

    PubMed

    Patel, Krupa J; Trédan, Olivier; Tannock, Ian F

    2013-07-01

    Pharmacokinetic analyses estimate the mean concentration of drug within a given tissue as a function of time, but do not give information about the spatial distribution of drugs within that tissue. Here, we compare the time-dependent spatial distribution of three anticancer drugs within tumors, heart, kidney, liver and brain. Mice bearing various xenografts were treated with doxorubicin, mitoxantrone or topotecan. At various times after injection, tumors and samples of heart, kidney, liver and brain were excised. Within solid tumors, the distribution of doxorubicin, mitoxantrone and topotecan was limited to perivascular regions at 10 min after administration and the distance from blood vessels at which drug intensity fell to half was ~25-75 μm. Although drug distribution improved after 3 and 24 h, there remained a significant decrease in drug fluorescence with increasing distance from tumor blood vessels. Drug distribution was relatively uniform in the heart, kidney and liver with substantially greater perivascular drug uptake than in tumors. There was significantly higher total drug fluorescence in the liver than in tumors after 10 min, 3 and 24 h. Little to no drug fluorescence was observed in the brain. There are marked differences in the spatial distributions of three anticancer drugs within tumor tissue and normal tissues over time, with greater exposure to most normal tissues and limited drug distribution to many cells in tumors. Studies of the spatial distribution of drugs are required to complement pharmacokinetic data in order to better understand and predict drug effects and toxicities.

  20. Conditional sampling technique to test the applicability of the Taylor hypothesis for the large-scale coherent structures

    NASA Technical Reports Server (NTRS)

    Hussain, A. K. M. F.

    1980-01-01

    Comparisons of the distributions of large scale structures in turbulent flow with distributions based on time dependent signals from stationary probes and the Taylor hypothesis are presented. The study investigated the near field of a 7.62 cm circular air jet at a Re of 32,000, in which coherent structures were induced through small-amplitude controlled excitation and stable vortex pairing in the jet column mode. Hot-wire and X-wire anemometry were employed to establish phase averaged spatial distributions of longitudinal and lateral velocities, coherent Reynolds stress and vorticity, background turbulent intensities, streamlines and pseudo-stream functions. The Taylor hypothesis was used to calculate spatial distributions of the phase-averaged properties, with results indicating that use of the local time-average velocity or the streamwise velocity produces large distortions.

  1. Maximum Entropy Principle for Transportation

    NASA Astrophysics Data System (ADS)

    Bilich, F.; DaSilva, R.

    2008-11-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
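
    For orientation, the sketch below solves the classical doubly constrained entropy-maximizing trip-distribution model by iterative proportional fitting; the paper's dependence-coefficient formulation replaces the explicit constraints, so this only illustrates the standard baseline it generalizes. All trip totals and costs are invented.

```python
# Doubly constrained entropy-maximizing trip distribution via iterative
# proportional fitting (illustrative origins, destinations and costs).
import numpy as np

O = np.array([400.0, 300.0, 300.0])           # trips produced at each origin
D = np.array([350.0, 450.0, 200.0])           # trips attracted to each destination
cost = np.array([[1.0, 2.0, 3.0],
                 [2.0, 1.0, 2.0],
                 [3.0, 2.0, 1.0]])             # generalized travel cost
beta = 0.8
T = np.exp(-beta * cost)                       # deterrence function

for _ in range(100):                           # iterative proportional fitting
    T *= (O / T.sum(axis=1))[:, None]          # enforce origin totals
    T *= (D / T.sum(axis=0))[None, :]          # enforce destination totals

print(T.round(1))
print("row sums:", T.sum(axis=1).round(1), "column sums:", T.sum(axis=0).round(1))
```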

  2. Non-linear Min protein interactions generate harmonics that signal mid-cell division in Escherichia coli

    PubMed Central

    Walsh, James C.; Angstmann, Christopher N.; Duggin, Iain G.

    2017-01-01

    The Min protein system creates a dynamic spatial pattern in Escherichia coli cells where the proteins MinD and MinE oscillate from pole to pole. MinD positions MinC, an inhibitor of FtsZ ring formation, contributing to the mid-cell localization of cell division. In this paper, Fourier analysis is used to decompose experimental and model MinD spatial distributions into time-dependent harmonic components. In both experiment and model, the second harmonic component is responsible for producing a mid-cell minimum in MinD concentration. The features of this harmonic are robust in both experiment and model. Fourier analysis reveals a close correspondence between the time-dependent behaviour of the harmonic components in the experimental data and model. Given this, each molecular species in the model was analysed individually. This analysis revealed that membrane-bound MinD dimer shows the mid-cell minimum with the highest contrast when averaged over time, carrying the strongest signal for positioning the cell division ring. This concurs with previous data showing that the MinD dimer binds to MinC inhibiting FtsZ ring formation. These results show that non-linear interactions of Min proteins are essential for producing the mid-cell positioning signal via the generation of second-order harmonic components in the time-dependent spatial protein distribution. PMID:29040283
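
    A minimal sketch of decomposing a 1D spatial concentration profile into Fourier harmonics, as done for the MinD distributions; the profile here is synthetic (a constant level plus first and second harmonics), not experimental data.

```python
# Recover harmonic amplitudes of a synthetic spatial concentration profile.
import numpy as np

x = np.linspace(0.0, 1.0, 200, endpoint=False)     # normalized position along the cell
profile = 2.0 + 0.8 * np.cos(2 * np.pi * x) + 0.3 * np.cos(4 * np.pi * x)

coeffs = np.fft.rfft(profile) / len(x)
for k in (1, 2, 3):
    print(f"harmonic {k}: amplitude {2 * np.abs(coeffs[k]):.2f}")
# In the study it is the (time-averaged) second harmonic component that
# carries the mid-cell signal in the MinD distribution.
```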

  3. Smooth DNA Transport through a Narrowed Pore Geometry

    PubMed Central

    Carson, Spencer; Wilson, James; Aksimentiev, Aleksei; Wanunu, Meni

    2014-01-01

    Voltage-driven transport of double-stranded DNA through nanoscale pores holds much potential for applications in quantitative molecular biology and biotechnology, yet the microscopic details of translocation have proven to be challenging to decipher. Earlier experiments showed strong dependence of transport kinetics on pore size: fast regular transport in large pores (> 5 nm diameter), and slower yet heterogeneous transport time distributions in sub-5 nm pores, which imply a large positional uncertainty of the DNA in the pore as a function of the translocation time. In this work, we show that this anomalous transport is a result of DNA self-interaction, a phenomenon that is strictly pore-diameter dependent. We identify a regime in which DNA transport is regular, producing narrow and well-behaved dwell-time distributions that fit a simple drift-diffusion theory. Furthermore, a systematic study of the dependence of dwell time on DNA length reveals a single power-law scaling of 1.37 in the range of 35–20,000 bp. We highlight the resolution of our nanopore device by discriminating via single pulses 100 and 500 bp fragments in a mixture with >98% accuracy. When coupled to an appropriate sequence labeling method, our observation of smooth DNA translocation can pave the way for high-resolution DNA mapping and sizing applications in genomics. PMID:25418307

  4. Photochemical predictive analysis of photodynamic therapy with non-homogeneous topical photosensitizer distribution in dermatological applications

    NASA Astrophysics Data System (ADS)

    Salas-García, I.; Fanjul-Vélez, F.; Ortega-Quijano, N.; López-Escobar, M.; Arce-Diego, J. L.

    2010-04-01

    Photodynamic Therapy (PDT) is a therapeutic technique widely used in dermatology to treat several skin pathologies. It is based in topical or systemic delivery of photosensitizing drugs followed by irradiation with visible light. The subsequent photochemical reactions generate reactive oxygen species which are considered the principal cytotoxic agents to induce cell necrosis. In this work we present a PDT model that tries to predict the photodynamic effect on the skin with a topically administered photosensitizer. The time dependent inhomogeneous distribution of the photoactive compound protoporphyrin IX (PpIX) is calculated after obtaining its precursor distribution (Methyl aminolevulinate, MAL) which depends on the drug permeability, diffusion properties of the skin, incubation time and conversion efficiency of MAL to PpIX. Once the optical energy is obtained by means of the Beer Lambert law, a photochemical model is employed to estimate the concentration of the different molecular compounds taking into account the electronic transitions between molecular levels and particles concentrations. The results obtained allow us to know the evolution of the cytotoxic agent in order to estimate the necrotic area adjusting parameters such as the optical power, the photosensitizer concentration, the incubation and exposition time or the diffusivity and permeability of the tissue.

  5. Ar 3p photoelectron sideband spectra in two-color XUV + NIR laser fields

    NASA Astrophysics Data System (ADS)

    Minemoto, Shinichirou; Shimada, Hiroyuki; Komatsu, Kazma; Komatsubara, Wataru; Majima, Takuya; Mizuno, Tomoya; Owada, Shigeki; Sakai, Hirofumi; Togashi, Tadashi; Yoshida, Shintaro; Yabashi, Makina; Yagishita, Akira

    2018-04-01

    We performed photoelectron spectroscopy using femtosecond XUV pulses from a free-electron laser and femtosecond near-infrared pulses from a synchronized laser, and succeeded in measuring Ar 3p photoelectron sideband spectra due to two-color above-threshold ionization. In our calculations with a first-order time-dependent perturbation theory model based on the strong-field approximation, the photoelectron sideband spectra and their angular distributions are well reproduced by considering the timing jitter between the XUV and the NIR pulses, showing that the timing jitter in our experiments was distributed over a width of 1.0 (+0.4/-0.2) ps. The present approach can be used as a method to evaluate the timing jitter inevitable in FEL experiments.

  6. Estimation of Parameters from Discrete Random Nonstationary Time Series

    NASA Astrophysics Data System (ADS)

    Takayasu, H.; Nakamura, T.

    For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small counts that lie outside the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, the batting average of a baseball player, and the sales volume of home electronics.

  7. Cell-size distribution in epithelial tissue formation and homeostasis

    PubMed Central

    Primo, Luca; Celani, Antonio

    2017-01-01

    How cell growth and proliferation are orchestrated in living tissues to achieve a given biological function is a central problem in biology. During development, tissue regeneration and homeostasis, cell proliferation must be coordinated by spatial cues in order for cells to attain the correct size and shape. Biological tissues also feature a notable homogeneity of cell size, which, in specific cases, represents a physiological need. Here, we study the temporal evolution of the cell-size distribution by applying the theory of kinetic fragmentation to tissue development and homeostasis. Our theory predicts self-similar probability density function (PDF) of cell size and explains how division times and redistribution ensure cell size homogeneity across the tissue. Theoretical predictions and numerical simulations of confluent non-homeostatic tissue cultures show that cell size distribution is self-similar. Our experimental data confirm predictions and reveal that, as assumed in the theory, cell division times scale like a power-law of the cell size. We find that in homeostatic conditions there is a stationary distribution with lognormal tails, consistently with our experimental data. Our theoretical predictions and numerical simulations show that the shape of the PDF depends on how the space inherited by apoptotic cells is redistributed and that apoptotic cell rates might also depend on size. PMID:28330988

  8. Cell-size distribution in epithelial tissue formation and homeostasis.

    PubMed

    Puliafito, Alberto; Primo, Luca; Celani, Antonio

    2017-03-01

    How cell growth and proliferation are orchestrated in living tissues to achieve a given biological function is a central problem in biology. During development, tissue regeneration and homeostasis, cell proliferation must be coordinated by spatial cues in order for cells to attain the correct size and shape. Biological tissues also feature a notable homogeneity of cell size, which, in specific cases, represents a physiological need. Here, we study the temporal evolution of the cell-size distribution by applying the theory of kinetic fragmentation to tissue development and homeostasis. Our theory predicts self-similar probability density function (PDF) of cell size and explains how division times and redistribution ensure cell size homogeneity across the tissue. Theoretical predictions and numerical simulations of confluent non-homeostatic tissue cultures show that cell size distribution is self-similar. Our experimental data confirm predictions and reveal that, as assumed in the theory, cell division times scale like a power-law of the cell size. We find that in homeostatic conditions there is a stationary distribution with lognormal tails, consistently with our experimental data. Our theoretical predictions and numerical simulations show that the shape of the PDF depends on how the space inherited by apoptotic cells is redistributed and that apoptotic cell rates might also depend on size. © 2017 The Author(s).

  9. Cumulative slant path rain attenuation associated with COMSTAR beacon at 28.56 GHz for Wallops Island, Virginia

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1978-01-01

    Yearly, monthly, and time of day fade statistics are presented and characterized. A 19.04 GHz yearly fade distribution, corresponding to a second COMSTAR beacon frequency, is predicted using the concept of effective path length, disdrometer, and rain rate results. The yearly attenuation and rain rate distributions follow with good approximation log normal variations for most fade and rain rate levels. Attenuations were exceeded for the longest and shortest periods of time for all fades in August and February, respectively. The eight hour periods showing the maximum and minimum number of minutes over the year for which fades exceeded 12 dB were approximately 1600 to 2400 and 0400 to 1200 hours, respectively. In employing the predictive method for obtaining the 19.04 GHz fade distribution, it is demonstrated theoretically that the ratio of attenuations at two frequencies is minimally dependent on the raindrop size distribution, provided these frequencies are not widely separated.

  10. Fractal scaling analysis of groundwater dynamics in confined aquifers

    NASA Astrophysics Data System (ADS)

    Tu, Tongbi; Ercan, Ali; Kavvas, M. Levent

    2017-10-01

    Groundwater closely interacts with surface water and even climate systems in most hydroclimatic settings. Fractal scaling analysis of groundwater dynamics is of significance in modeling hydrological processes by considering potential temporal long-range dependence and scaling crossovers in the groundwater level fluctuations. In this study, it is demonstrated that the groundwater level fluctuations in confined aquifer wells with long observations exhibit site-specific fractal scaling behavior. Detrended fluctuation analysis (DFA) was utilized to quantify the monofractality, and multifractal detrended fluctuation analysis (MF-DFA) and multiscale multifractal analysis (MMA) were employed to examine the multifractal behavior. The DFA results indicated that fractals exist in groundwater level time series, and it was shown that the estimated Hurst exponent is closely dependent on the length and specific time interval of the time series. The MF-DFA and MMA analyses showed that different levels of multifractality exist, which may be partially due to a broad probability density distribution with infinite moments. Furthermore, it is demonstrated that the underlying distribution of groundwater level fluctuations exhibits either non-Gaussian characteristics, which may be fitted by the Lévy stable distribution, or Gaussian characteristics depending on the site characteristics. However, fractional Brownian motion (fBm), which has been identified as an appropriate model to characterize groundwater level fluctuation, is Gaussian with finite moments. Therefore, fBm may be inadequate for the description of physical processes with infinite moments, such as the groundwater level fluctuations in this study. It is concluded that there is a need for generalized governing equations of groundwater flow processes that can model both the long-memory behavior and the Brownian finite-memory behavior.
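
    A compact detrended fluctuation analysis (DFA) sketch for estimating the scaling exponent of a time series; the input below is white noise (expected exponent near 0.5) rather than groundwater data, and only first-order detrending is implemented.

```python
# First-order DFA: fluctuation function vs. scale on a log-log fit.
import numpy as np

def dfa(x, scales):
    """Return the DFA scaling exponent of series x over the given scales."""
    y = np.cumsum(x - np.mean(x))                          # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        sq = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(5)
noise = rng.standard_normal(10000)
print("DFA exponent for white noise (expect ~0.5):",
      round(dfa(noise, [16, 32, 64, 128, 256, 512]), 2))
```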

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoth, Gregory W., E-mail: gregory.hoth@nist.gov; Pelle, Bruno; Riedl, Stefan

    We demonstrate a two axis gyroscope by the use of light pulse atom interferometry with an expanding cloud of atoms in the regime where the cloud has expanded by 1.1–5 times its initial size during the interrogation. Rotations are measured by analyzing spatial fringe patterns in the atom population obtained by imaging the final cloud. The fringes arise from a correlation between an atom's initial velocity and its final position. This correlation is naturally created by the expansion of the cloud, but it also depends on the initial atomic distribution. We show that the frequency and contrast of these spatialmore » fringes depend on the details of the initial distribution and develop an analytical model to explain this dependence. We also discuss several challenges that must be overcome to realize a high-performance gyroscope with this technique.« less

  12. Time-frequency analysis of SEMG--with special consideration to the interelectrode spacing.

    PubMed

    Alemu, M; Kumar, Dinesh Kant; Bradley, Alan

    2003-12-01

    The surface electromyogram (SEMG) is a complex, nonstationary signal. The spectrum of the SEMG is dependent on the force of contraction being generated and other factors like muscle fatigue and interelectrode distance (IED). The spectrum of the signal is time variant. This paper reports the experimental research conducted to study the influence of force of muscle contraction and IED on the SEMG signal using time-frequency (T-F) analysis. Two T-F techniques have been used: Wigner-Ville distribution (WVD) and Choi-Williams distribution (CWD). The experiment was conducted with the help of ten healthy volunteers (five males and five females) who performed isometric elbow flexions of the active right arm at 20%, 50%, and 80% of their maximal voluntary contraction. The SEMG signal was recorded using surface electrodes placed at a distance of 18 and 36 mm over biceps brachii muscle. The results indicate that the two distributions were spread out across the frequency range at smaller IED. Further, regardless of the spacing, both distributions displayed increased spectral compression with time at higher contraction level.

  13. Rank Diversity of Languages: Generic Behavior in Computational Linguistics

    PubMed Central

    Cocho, Germinal; Flores, Jorge; Gershenson, Carlos; Pineda, Carlos; Sánchez, Sergio

    2015-01-01

    Statistical studies of languages have focused on the rank-frequency distribution of words. Instead, we introduce here a measure of how word ranks change in time and call this distribution rank diversity. We calculate this diversity for books published in six European languages since 1800, and find that it follows a universal lognormal distribution. Based on the mean and standard deviation associated with the lognormal distribution, we define three different word regimes of languages: “heads” consist of words which almost do not change their rank in time, “bodies” are words of general use, while “tails” are comprised by context-specific words and vary their rank considerably in time. The heads and bodies reflect the size of language cores identified by linguists for basic communication. We propose a Gaussian random walk model which reproduces the rank variation of words in time and thus the diversity. Rank diversity of words can be understood as the result of random variations in rank, where the size of the variation depends on the rank itself. We find that the core size is similar for all languages studied. PMID:25849150

  14. Rank diversity of languages: generic behavior in computational linguistics.

    PubMed

    Cocho, Germinal; Flores, Jorge; Gershenson, Carlos; Pineda, Carlos; Sánchez, Sergio

    2015-01-01

    Statistical studies of languages have focused on the rank-frequency distribution of words. Instead, we introduce here a measure of how word ranks change in time and call this distribution rank diversity. We calculate this diversity for books published in six European languages since 1800, and find that it follows a universal lognormal distribution. Based on the mean and standard deviation associated with the lognormal distribution, we define three different word regimes of languages: "heads" consist of words which almost do not change their rank in time, "bodies" are words of general use, while "tails" are comprised by context-specific words and vary their rank considerably in time. The heads and bodies reflect the size of language cores identified by linguists for basic communication. We propose a Gaussian random walk model which reproduces the rank variation of words in time and thus the diversity. Rank diversity of words can be understood as the result of random variations in rank, where the size of the variation depends on the rank itself. We find that the core size is similar for all languages studied.
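
    A toy version, with assumed parameters, of the rank random-walk picture described above: each word's log-rank takes Gaussian steps whose size grows with rank, and rank diversity is measured as the fraction of distinct words that occupy each rank over time. This is only a loose illustration, not the authors' calibrated model.

```python
# Rank diversity from a rank-dependent Gaussian random walk (toy model).
import numpy as np

rng = np.random.default_rng(6)
n_words, n_steps = 500, 200
log_rank = np.log(np.arange(1, n_words + 1, dtype=float))   # initial log-ranks

occupants = [set() for _ in range(n_words)]        # words seen at each rank
for _ in range(n_steps):
    # Step size grows with the word's current log-rank, so head words barely
    # move while tail words wander widely.
    log_rank = log_rank + rng.normal(0.0, 0.05 * log_rank)
    order = np.argsort(log_rank)                   # word occupying each rank
    for r, w in enumerate(order):
        occupants[r].add(int(w))

diversity = np.array([len(s) for s in occupants]) / n_steps
print("rank diversity, head ranks 1-5:", diversity[:5].round(2))
print("rank diversity, tail ranks:", diversity[-5:].round(2))
```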

  15. Nonlogarithmic magnetization relaxation at the initial time intervals and magnetic field dependence of the flux creep rate in Bi2Sr2Ca1Cu2Ox single crystals

    NASA Technical Reports Server (NTRS)

    Moshchalcov, V. V.; Zhukov, A. A.; Kuznetzov, V. D.; Metlushko, V. V.; Leonyuk, L. I.

    1990-01-01

    At the initial time intervals, preceding the thermally activated flux creep regime, fast nonlogarithmic relaxation is found. The full magnetic moment Pm(t) relaxation curve is shown. The magnetic measurements were made using a SQUID magnetometer. Two different relaxation regimes exist. The nonlogarithmic relaxation for the initial time intervals may be related to the viscous Abrikosov vortices flow with j greater than j(sub c) for high enough temperature T and magnetic field induction B. This assumption correlates with Pm(t) measurements. The characteristic time t(sub O) separating two different relaxation regimes decreases as temperature and magnetic field are lowered. The logarithmic magnetization relaxation curves Pm(t) for fixed temperature and different external magnetic field inductions B are given. The relaxation rate dependence on magnetic field, R(B) = dPm(B, T sub O)/d(ln t), has a sharp maximum which is similar to that found for the R(T) temperature dependences. The maximum shifts to lower fields as temperature goes up. The observed sharp maximum is related to a topological transition in the shielding critical current distribution and, consequently, in the Abrikosov vortices density. The nonlogarithmic magnetization relaxation for the initial time intervals is found. This fast relaxation has an almost exponential character. The sharp relaxation rate R(B) maximum is observed. This maximum corresponds to a topological transition in the Abrikosov vortices distribution.

  16. Splash detail due to a single grain incident on a granular bed.

    PubMed

    Tanabe, Takahiro; Shimada, Takashi; Ito, Nobuyasu; Nishimori, Hiraku

    2017-02-01

    Using the discrete element method, we study the splash processes induced by the impact of a grain on a randomly packed bed. Good correspondence is obtained between our numerical results and the findings of previous experiments for the movement of ejected grains. Furthermore, the distributions of the ejection angle and ejection speed for individual grains vary depending on the relative timing at which the grains are ejected after the initial impact. Obvious differences are observed between the distributions of grains ejected during the earlier and later splash periods: the form of the vertical ejection-speed distribution varies from a power-law form to a lognormal form with time; this difference may determine grain trajectory after ejection.

  17. Qualitative and numerical analyses of the effects of river inflow variations on mixing diagrams in estuaries

    USGS Publications Warehouse

    Cifuentes, L.A.; Schemel, L.E.; Sharp, J.H.

    1990-01-01

    The effects of river inflow variations on alkalinity/salinity distributions in San Francisco Bay and nitrate/salinity distributions in Delaware Bay are described. One-dimensional, advective-dispersion equations for salinity and the dissolved constituents are solved numerically and are used to simulate mixing in the estuaries. These simulations account for time-varying river inflow, variations in estuarine cross-sectional area, and longitudinally varying dispersion coefficients. The model simulates field observations better than models that use constant hydrodynamic coefficients and uniform estuarine geometry. Furthermore, field observations and model simulations are consistent with theoretical 'predictions' that the curvature of property-salinity distributions depends on the relation between the estuarine residence time and the period of river concentration variation. © 1990.
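
    A rough sketch of the kind of 1D advection-dispersion model used in such simulations: explicit upwind advection plus central-difference dispersion for salinity along the estuary, with constant cross-section and dispersion coefficient (the paper additionally allows time-varying inflow, varying area and longitudinally varying dispersion).

```python
# Explicit 1D advection-dispersion of salinity between river and ocean boundaries.
import numpy as np

nx, dx, dt = 200, 500.0, 50.0            # grid cells, spacing (m), time step (s)
u, D = 0.05, 50.0                         # seaward velocity (m/s), dispersion (m^2/s)
S = np.linspace(0.0, 32.0, nx)            # initial salinity: river (0) to ocean (32)

assert u * dt / dx <= 1.0 and D * dt / dx**2 <= 0.5   # CFL and diffusion limits

for _ in range(5000):
    adv = -u * (S[1:-1] - S[:-2]) / dx                    # upwind advection
    disp = D * (S[2:] - 2.0 * S[1:-1] + S[:-2]) / dx**2   # central-difference dispersion
    S[1:-1] += dt * (adv + disp)
    S[0], S[-1] = 0.0, 32.0                               # river and ocean boundaries

print(S[::40].round(1))                   # salinity profile after roughly 70 hours
```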

  18. How old is this bird? The age distribution under some phase sampling schemes.

    PubMed

    Hautphenne, Sophie; Massaro, Melanie; Taylor, Peter

    2017-12-01

    In this paper, we use a finite-state continuous-time Markov chain with one absorbing state to model an individual's lifetime. Under this model, the time of death follows a phase-type distribution, and the transient states of the Markov chain are known as phases. We then attempt to provide an answer to the simple question "What is the conditional age distribution of the individual, given its current phase"? We show that the answer depends on how we interpret the question, and in particular, on the phase observation scheme under consideration. We then apply our results to the computation of the age pyramid for the endangered Chatham Island black robin Petroica traversi during the monitoring period 2007-2014.
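
    A hedged numerical sketch of the phase-type setup: a small continuous-time Markov chain with two transient phases and one absorbing (death) state. Assuming a stationary population with a constant birth rate, the conditional age density given the current phase is proportional to the probability of being alive in that phase at each age. All rates are made up; the paper treats several observation schemes that this sketch does not distinguish.

```python
# Conditional age distribution given the current phase of a phase-type model.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

# Sub-generator over two transient phases (rates per year, hypothetical):
# phase 1 -> phase 2 at rate 1.0, death at 0.2 from phase 1 and 0.4 from phase 2.
S = np.array([[-1.2, 1.0],
              [0.0, -0.4]])
alpha = np.array([1.0, 0.0])                 # every individual starts in phase 1

ages = np.linspace(0.0, 25.0, 501)
p = np.array([alpha @ expm(S * a) for a in ages])   # P(alive and in each phase at age a)

for phase in (0, 1):
    density = p[:, phase] / trapezoid(p[:, phase], ages)
    mean_age = trapezoid(ages * density, ages)
    print(f"mean age given current phase {phase + 1}: {mean_age:.2f} years")
```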

  19. Kinetic aspects of the coil-stretch transition of polymer chains in dilute solution under extensional flow

    NASA Astrophysics Data System (ADS)

    Hernández Cifre, J. G.; García de la Torre, J.

    2001-11-01

    When linear polymer chains in dilute solution are subject to extensional flow, each chain in the sample experiences the coil-stretch transition at a different time. Using Brownian dynamics simulation, we have studied the distribution of transition times in terms of the extensional rate and the length of the chains. If instead of time one characterizes the effect of the flow by the accumulated strain, then the distribution and its moments seem to take general forms, independent of molecular weight and flow rate, containing some numerical, universal constants that have been evaluated from the dynamical simulation. The kinetics of the transition, expressed by the time-dependence of the fraction of remaining coils, has also been simulated, and the results for the kinetic rate constant have been rationalized in a manner similar to that used for the transition time. The molecular individualism, characterized in this work by the distribution of transition times, is related to the excess of the applied extensional rate over its critical value, which will determine the transition time and other features of the coil-stretch transition.

  20. Rapidity window dependences of higher order cumulants and diffusion master equation

    NASA Astrophysics Data System (ADS)

    Kitazawa, Masakiyo

    2015-10-01

    We study the rapidity window dependences of higher order cumulants of conserved charges observed in relativistic heavy ion collisions. The time evolution and the rapidity window dependence of the non-Gaussian fluctuations are described by the diffusion master equation. Analytic formulas for the time evolution of cumulants in a rapidity window are obtained for arbitrary initial conditions. We discuss that the rapidity window dependences of the non-Gaussian cumulants have characteristic structures reflecting the non-equilibrium property of fluctuations, which can be observed in relativistic heavy ion collisions with the present detectors. It is argued that various information on the thermal and transport properties of the hot medium can be revealed experimentally by the study of the rapidity window dependences, especially by the combined use of the higher order cumulants. Formulas of higher order cumulants for a probability distribution composed of sub-probabilities, which are useful for various studies of non-Gaussian cumulants, are also presented.

  1. Maintenance of phenotypic variation: Repeatability, heritability and size-dependent processes in a wild brook trout population

    USGS Publications Warehouse

    Letcher, B.H.; Coombs, J.A.; Nislow, K.H.

    2011-01-01

    Phenotypic variation in body size can result from within-cohort variation in birth dates, among-individual growth variation and size-selective processes. We explore the relative effects of these processes on the maintenance of wide observed body size variation in stream-dwelling brook trout (Salvelinus fontinalis). Based on the analyses of multiple recaptures of individual fish, it appears that size distributions are largely determined by the maintenance of early size variation. We found no evidence for size-dependent compensatory growth (which would reduce size variation) and found no indication that size-dependent survival substantially influenced body size distributions. Depensatory growth (faster growth by larger individuals) reinforced early size variation, but was relatively strong only during the first sampling interval (age-0, fall). Maternal decisions on the timing and location of spawning could have a major influence on early, and as our results suggest, later (>age-0) size distributions. If this is the case, our estimates of heritability of body size (body length=0.25) will be dominated by processes that generate and maintain early size differences. As a result, evolutionary responses to environmental change that are mediated by body size may be largely expressed via changes in the timing and location of reproduction. Published 2011. This article is a US Government work and is in the public domain in the USA.

  2. Trap densities and transport properties of pentacene metal-oxide-semiconductor transistors. I. Analytical modeling of time-dependent characteristics

    NASA Astrophysics Data System (ADS)

    Basile, A. F.; Cramer, T.; Kyndiah, A.; Biscarini, F.; Fraboni, B.

    2014-06-01

    Metal-oxide-semiconductor (MOS) transistors fabricated with pentacene thin films were characterized by temperature-dependent current-voltage (I-V) characteristics, time-dependent current measurements, and admittance spectroscopy. The channel mobility shows almost linear variation with temperature, suggesting that only shallow traps are present in the semiconductor and at the oxide/semiconductor interface. The admittance spectra feature a broad peak, which can be modeled as the sum of a continuous distribution of relaxation times. The activation energy of this peak is comparable to the polaron binding energy in pentacene. The absence of trap signals in the admittance spectra confirmed that both the semiconductor and the oxide/semiconductor interface have negligible density of deep traps, likely owing to the passivation of SiO2 before pentacene growth. Nevertheless, current instabilities were observed in time-dependent current measurements following the application of gate-voltage pulses. The corresponding activation energy matches the energy of a hole trap in SiO2. We show that hole trapping in the oxide can explain both the temperature and the time dependences of the current instabilities observed in pentacene MOS transistors. The combination of these experimental techniques allows us to derive a comprehensive model for charge transport in hybrid architectures where trapping processes occur at various time and length scales.

  3. Workload Capacity: A Response Time-Based Measure of Automation Dependence.

    PubMed

    Yamani, Yusuke; McCarley, Jason S

    2016-05-01

    An experiment used the workload capacity measure C(t) to quantify the processing efficiency of human-automation teams and identify operators' automation usage strategies in a speeded decision task. Although response accuracy rates and related measures are often used to measure the influence of an automated decision aid on human performance, aids can also influence response speed. Mean response times (RTs), however, conflate the influence of the human operator and the automated aid on team performance and may mask changes in the operator's performance strategy under aided conditions. The present study used a measure of parallel processing efficiency, or workload capacity, derived from empirical RT distributions as a novel gauge of human-automation performance and automation dependence in a speeded task. Participants performed a speeded probabilistic decision task with and without the assistance of an automated aid. RT distributions were used to calculate two variants of a workload capacity measure, C_OR(t) and C_AND(t). Capacity measures gave evidence that a diagnosis from the automated aid speeded human participants' responses, and that participants did not moderate their own decision times in anticipation of diagnoses from the aid. Workload capacity provides a sensitive and informative measure of human-automation performance and operators' automation dependence in speeded tasks. © 2016, Human Factors and Ergonomics Society.

  4. Role of Proteome Physical Chemistry in Cell Behavior

    PubMed Central

    2016-01-01

    We review how major cell behaviors, such as bacterial growth laws, are derived from the physical chemistry of the cell’s proteins. On one hand, cell actions depend on the individual biological functionalities of their many genes and proteins. On the other hand, the common physics among proteins can be as important as the unique biology that distinguishes them. For example, bacterial growth rates depend strongly on temperature. This dependence can be explained by the folding stabilities across a cell’s proteome. Such modeling explains how thermophilic and mesophilic organisms differ, and how oxidative damage of highly charged proteins can lead to unfolding and aggregation in aging cells. Cells have characteristic time scales. For example, E. coli can duplicate as fast as 2–3 times per hour. These time scales can be explained by protein dynamics (the rates of synthesis and degradation, folding, and diffusional transport), which also rationalizes how bacterial growth is slowed by added salt. In the same way that the behaviors of inanimate materials can be expressed in terms of the statistical distributions of atoms and molecules, some cell behaviors can be expressed in terms of distributions of protein properties, giving insights into the microscopic basis of growth laws in simple cells. PMID:27513457

  5. Quantification of the Barkhausen noise method for the evaluation of time-dependent degradation

    NASA Astrophysics Data System (ADS)

    Kim, Dong-Won; Kwon, Dongil

    2003-02-01

    The Barkhausen noise (BN) method has long been applied to measure the bulk magnetic properties of magnetic materials. Recently, this important nondestructive testing (NDT) method has been applied to evaluate microstructure, stress distribution, fatigue, creep and fracture characteristics. Until now, however, the BN method has been used only qualitatively, relating variations in BN to variations in material properties, and for this reason it has found limited application in industrial plants and laboratories. The present investigation studied the coercive force and BN while varying the microstructure of ultrafine-grained steels and SA508 cl.3 steels. This variation was produced by a second heat-treatment condition with rolling for the ultrafine-grained steels and by simulated time-dependent degradation for the SA508 cl.3 steels. An attempt was also made to quantify BN from the relationship between the velocity of magnetic domain walls and the retarding force, using the coercive force of the domain wall movement. The microstructure variation was analyzed according to time-dependent degradation. Fracture toughness was evaluated quantitatively by measuring the BN through two intermediary parameters: grain size and the distribution of nonmagnetic particles. From these measurements, the variation of microstructure and fracture toughness can be evaluated directly by the BN method as an accurate in situ NDT method.

  6. Finite-size effects in the short-time height distribution of the Kardar-Parisi-Zhang equation

    NASA Astrophysics Data System (ADS)

    Smith, Naftali R.; Meerson, Baruch; Sasorov, Pavel

    2018-02-01

    We use the optimal fluctuation method to evaluate the short-time probability distribution P(H, L, t) of the height at a single point, H = h(x = 0, t), of the evolving Kardar-Parisi-Zhang (KPZ) interface h(x, t) on a ring of length 2L. The process starts from a flat interface. At short times typical (small) height fluctuations are unaffected by the KPZ nonlinearity and belong to the Edwards-Wilkinson universality class. The nonlinearity, however, strongly affects the (asymmetric) tails of P(H). At large L/√t the faster-decaying tail has a double structure: it is L-independent, -ln P ~ |H|^{5/2}/t^{1/2}, at intermediately large |H|, and L-dependent, -ln P ~ |H|^2 L/t, at very large |H|. The transition between these two regimes is sharp and, in the large L/√t limit, behaves as a fractional-order phase transition. The transition point H = H_c^+ depends on L/√t. At small L/√t the double structure of the faster tail disappears, and only the very large-H tail, -ln P ~ |H|^2 L/t, is observed. The slower-decaying tail does not show any L-dependence at large L/√t, where it coincides with the slower tail of the GOE Tracy-Widom distribution. At small L/√t this tail also has a double structure. The transition between the two regimes occurs at a value of height H = H_c^- which depends on L/√t. At L/√t → 0 the transition behaves as a mean-field-like second-order phase transition. At |H| < |H_c^-| the slower tail behaves as -ln P ~ |H|^2 L/t, whereas at |H| > |H_c^-| it coincides with the slower tail of the GOE Tracy-Widom distribution.

  7. First-principles electron dynamics control simulation of diamond under femtosecond laser pulse train irradiation.

    PubMed

    Wang, Cong; Jiang, Lan; Wang, Feng; Li, Xin; Yuan, Yanping; Xiao, Hai; Tsai, Hai-Lung; Lu, Yongfeng

    2012-07-11

    A real-time, real-space time-dependent density functional approach is applied to simulate the nonlinear electron-photon interactions during shaped femtosecond laser pulse train ablation of diamond. Effects of the key pulse train parameters, such as the pulse separation, the spatial/temporal pulse energy distribution and the number of pulses per train, on electron excitation and energy absorption are discussed. The calculations show that the ultrafast laser pulse train can be used to control photon-electron interactions and transient localized electron dynamics, including photon absorption, electron excitation, electron density, and the free-electron distribution.

  8. An open source platform for multi-scale spatially distributed simulations of microbial ecosystems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segre, Daniel

    2014-08-14

    The goal of this project was to develop a tool for facilitating simulation, validation and discovery of multiscale dynamical processes in microbial ecosystems. This led to the development of an open-source software platform for Computation Of Microbial Ecosystems in Time and Space (COMETS). COMETS performs spatially distributed, time-dependent, flux-balance-based simulations of microbial metabolism. Our plan involved building the software platform itself, calibrating and testing it through comparison with experimental data, and integrating simulations and experiments to address important open questions on the evolution and dynamics of cross-feeding interactions between microbial species.

  9. Using heterogeneous wireless sensor networks in a telemonitoring system for healthcare.

    PubMed

    Corchado, Juan M; Bajo, Javier; Tapia, Dante I; Abraham, Ajith

    2010-03-01

    Ambient intelligence has acquired great importance in recent years and requires the development of new innovative solutions. This paper presents a distributed telemonitoring system aimed at improving healthcare and assistance to dependent people at their homes. The system implements a platform based on a service-oriented architecture, which allows heterogeneous wireless sensor networks to communicate in a distributed way, independent of time and location restrictions. This approach gives the system a greater ability to recover from errors and more flexibility to change its behavior at execution time. Preliminary results are presented in this paper.

  10. Partial wetting gas-liquid segmented flow microreactor.

    PubMed

    Kazemi Oskooei, S Ali; Sinton, David

    2010-07-07

    A microfluidic reactor strategy for reducing plug-to-plug transport in gas-liquid segmented flow microfluidic reactors is presented. The segmented flow is generated in a wetting portion of the chip that transitions downstream to a partially wetting reaction channel that serves to disconnect the liquid plugs. The resulting residence time distributions show little dependence on channel length, and over 60% narrowing in residence time distribution as compared to an otherwise similar reactor. This partial wetting strategy mitigates a central limitation (plug-to-plug dispersion) while preserving the many attractive features of gas-liquid segmented flow reactors.

  11. Continuous distribution of emission states from single CdSe/ZnS quantum dots.

    PubMed

    Zhang, Kai; Chang, Hauyee; Fu, Aihua; Alivisatos, A Paul; Yang, Haw

    2006-04-01

    The photoluminescence dynamics of colloidal CdSe/ZnS/streptavidin quantum dots were studied using time-resolved single-molecule spectroscopy. Statistical tests of the photon-counting data suggested that the simple "on/off" discrete state model is inconsistent with experimental results. Instead, a continuous emission state distribution model was found to be more appropriate. Autocorrelation analysis of lifetime and intensity fluctuations showed a nonlinear correlation between them. These results were consistent with the model that charged quantum dots were also emissive, and that time-dependent charge migration gave rise to the observed photoluminescence dynamics.

  12. The time-dependent distribution of 125I-asialo-orosomucoid-horseradish peroxidase and 131I-immunoglobulin A among three endosomal subfractions isolated from rat liver.

    PubMed Central

    Kennedy, G; Cooper, C

    1988-01-01

    Three discrete endosomal fractions showing a time-dependent uptake of radioactive ligand were partially purified from rat liver. The 3,3'-diaminobenzidine (DAB)-induced density-shift protocol of Courtoy, Quintart & Baudhuin [(1984) J. Cell Biol. 98, 870-876] was used to study the distribution among these three endosomal fractions of two ligands with different intracellular destinations. Rats received both 125I-asialo-orosomucoid-horseradish peroxidase (125I-ASOR-HRP) and 131I-dIgA simultaneously by intraportal injection. The liver was fractionated at various times after injection, the three ligand-containing endosomal fractions (A, B and C) were separated and each was subjected separately to the DAB-induced density-shift procedure in which only vesicles containing 125I-ASOR-HRP are increased in density. Information on whether 131I-dIgA was co-localized or segregated from 125I-ASOR-HRP was obtained. The two ligands in the A fraction were partly segregated and partly co-localized, and this distribution appeared to be relatively unchanged with time. The two ligands in the B fraction were co-localized at all times studied. We have tentatively identified the B fraction as a compartment in which vesicle fusion has occurred. The two ligands in the C fraction were also partly co-localized and partly segregated, but the 131I-dIgA became increasingly segregated with time. This represents the first report of the purification of an endosomal subfraction specifically involved in the accumulation of multiple ligands. PMID:3421920

  13. Simulating Pre-Asymptotic, Non-Fickian Transport Although Doing Simple Random Walks - Supported By Empirical Pore-Scale Velocity Distributions and Memory Effects

    NASA Astrophysics Data System (ADS)

    Most, S.; Jia, N.; Bijeljic, B.; Nowak, W.

    2016-12-01

    Pre-asymptotic characteristics are almost ubiquitous when analyzing solute transport processes in porous media. These pre-asymptotic aspects are caused by spatial coherence in the velocity field and by its heterogeneity. From the Lagrangian perspective of particle displacements, the causes of pre-asymptotic, non-Fickian transport are a skewed velocity distribution, statistical dependence between subsequent increments of particle positions (memory) and dependence between the x, y and z components of particle increments. Valid simulation frameworks should account for these factors. We propose a particle tracking random walk (PTRW) simulation technique that can use empirical pore-space velocity distributions as input, enforces memory between subsequent random walk steps, and considers cross-dependence. Thus, it is able to simulate pre-asymptotic non-Fickian transport phenomena. Our PTRW framework contains an advection/dispersion term plus a diffusion term. The advection/dispersion term produces time series of particle increments from the velocity CDFs. These time series are equipped with memory by enforcing that the CDF values of subsequent velocities change only slightly. The latter is achieved through a random walk on the axis of CDF values between 0 and 1. The virtual diffusion coefficient for that random walk is our only fitting parameter. Cross-dependence can be enforced by constraining the random walk to certain combinations of CDF values between the three velocity components in x, y and z. We show that this modelling framework is capable of simulating non-Fickian transport by comparison with a pore-scale transport simulation, and we analyze the approach to asymptotic behavior.
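
    A minimal sketch of this kind of particle-tracking random walk is given below, under several assumptions of ours: the empirical velocity CDF is supplied as a sorted sample, memory is enforced by letting each particle's CDF value perform a small reflected random walk on [0, 1], and only one spatial component is tracked. The virtual diffusion coefficient D_u on the CDF axis plays the role of the single fitting parameter mentioned in the abstract; all names and values are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for an empirical pore-scale velocity sample (measured or simulated).
velocity_sample = np.sort(rng.lognormal(mean=-1.0, sigma=1.2, size=5000))

def velocity_from_cdf(u):
    """Inverse empirical CDF: map u in (0, 1) to a velocity by interpolation."""
    n = len(velocity_sample)
    return np.interp(u, (np.arange(n) + 0.5) / n, velocity_sample)

def ptrw(n_particles=2000, n_steps=500, dt=1.0, D_u=0.01, D_m=1e-3):
    """Particle-tracking random walk with memory on the CDF axis.

    D_u : virtual diffusion coefficient of the CDF-value random walk
          (controls the persistence of Lagrangian velocities).
    D_m : molecular-diffusion-like term added to the displacement.
    """
    u = rng.uniform(0, 1, n_particles)        # initial CDF values
    x = np.zeros(n_particles)                 # longitudinal positions
    for _ in range(n_steps):
        # Advective step from the current (correlated) velocity.
        x += velocity_from_cdf(u) * dt
        # Diffusive step.
        x += rng.normal(0.0, np.sqrt(2 * D_m * dt), n_particles)
        # Memory: small reflected random walk of the CDF value in [0, 1].
        u += rng.normal(0.0, np.sqrt(2 * D_u * dt), n_particles)
        u = np.abs(u)                          # reflect at 0
        u = np.where(u > 1.0, 2.0 - u, u)      # reflect at 1
    return x

positions = ptrw()
print("mean displacement:", positions.mean(), " variance:", positions.var())
```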

  14. Transport and Reactive Flow Modelling Using A Particle Tracking Method Based on Continuous Time Random Walks

    NASA Astrophysics Data System (ADS)

    Oliveira, R.; Bijeljic, B.; Blunt, M. J.; Colbourne, A.; Sederman, A. J.; Mantle, M. D.; Gladden, L. F.

    2017-12-01

    Mixing and reactive processes have a large impact on the viability of enhanced oil and gas recovery projects that involve acid stimulation and CO2 injection. To achieve a successful design of the injection schemes, an accurate understanding of the interplay between pore structure, flow and reactive transport is necessary. Depending on the transport and reaction conditions, this complex coupling can also depend on initial rock heterogeneity across a variety of scales. To address these issues, we devise a new method to study transport and reactive flow in porous media at multiple scales. The transport model is based on an efficient Particle Tracking Method based on Continuous Time Random Walks (CTRW-PTM) on a lattice. Transport is modelled using an algorithm described in Rhodes and Blunt (2006) and Srinivasan et al. (2010); this model is expanded to enable reactive flow predictions in subsurface rock undergoing a first-order fluid/solid chemical reaction. The reaction-induced alteration of the fluid/solid interface is accommodated in the model through changes in porosity and flow field, leading to time-dependent transport characteristics in the form of transit time distributions that account for the change in rock heterogeneity. This also enables the study of concentration profiles at the scale of interest. First, we validate the transport model by comparing the probability of molecular displacement (propagators) measured by Nuclear Magnetic Resonance (NMR) with our modelled predictions for concentration profiles. The experimental propagators for three different porous media of increasing complexity, a beadpack, a Bentheimer sandstone and a Portland carbonate, show good agreement with the model. Next, we capture the time evolution of the propagator distributions in a reactive flow experiment, where hydrochloric acid is injected into a limestone rock. We analyse the time-evolving non-Fickian signatures of the transport during reactive flow and observe an increase in transport heterogeneity at later times, reflecting the increase in rock heterogeneity. The evolution of the transit time distribution is associated with the evolution of concentration profiles, thus highlighting the impact of initial rock structure on reactive transport for a range of Pe and Da numbers.

  15. The transport of nitric oxide in the upper atmosphere by planetary waves and the zonal mean circulation

    NASA Technical Reports Server (NTRS)

    Jones, G. A.; Avery, S. K.

    1982-01-01

    A time-dependent numerical model was developed and used to study the interaction between planetary waves, the zonal mean circulation, and the trace constituent nitric oxide in the region between 55 km and 120 km. The factors which contribute to the structure of the nitric oxide distribution were examined, and the sensitivity of the distribution to changes in planetary wave amplitude was investigated. Wave-induced changes in the mean nitric oxide concentration were examined as a possible mechanism for the observed winter anomaly. Results indicate that vertically propagating planetary waves induce a wave-like structure in the nitric oxide distribution and that, at certain levels, transports of nitric oxide by planetary waves could significantly affect the mean nitric oxide distribution. The magnitude and direction of these transports at a given level were found to depend not only on the amplitude of the planetary wave, but also on the loss rate of nitric oxide at that level.

  16. Trapped Ring Current Ion Dynamics During the 17-18 March 2015 Geomagnetic Storm Obtained from TWINS ENA Images

    NASA Astrophysics Data System (ADS)

    Perez, J. D.; Goldstein, J.; McComas, D. J.; Valek, P. W.; Fok, M. C. H.; Hwang, K. J.

    2015-12-01

    On 17-18 March 2015, there was a large (minimum SYM/H < -200 nT) geomagnetic storm. The Two Wide-Angle Imaging Neutral Atom Spectrometers (TWINS) mission, the first stereoscopic ENA magnetospheric imager, provides global images of the inner magnetosphere from which global distributions of ion flux, energy spectra, and pitch angle distributions are obtained. We will show how the observed ion pressure correlates with SYM/H. Examples of multiple peaks in the ion spatial distribution which may be due to multiple injections and/or energy and pitch angle dependent drift will be illustrated. Energy spectra will be shown to be non-Maxwellian, frequently having two peaks, one in the 10 keV range and another near 40 keV. Pitch angle distributions will be shown to have generally perpendicular anisotropy and that this can be time, space and energy dependent. The results are consistent with Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model simulations.

  17. Robustness and Vulnerability of Networks with Dynamical Dependency Groups.

    PubMed

    Bai, Ya-Nan; Huang, Ning; Wang, Lei; Wu, Zhi-Xi

    2016-11-28

    The dependency property and self-recovery of failure nodes both have great effects on the robustness of networks during the cascading process. Existing investigations focused mainly on the failure mechanism of static dependency groups without considering the time-dependency of interdependent nodes and the recovery mechanism in reality. In this study, we present an evolving network model consisting of failure mechanisms and a recovery mechanism to explore network robustness, where the dependency relations among nodes vary over time. Based on generating function techniques, we provide an analytical framework for random networks with arbitrary degree distribution. In particular, we theoretically find that an abrupt percolation transition exists corresponding to the dynamical dependency groups for a wide range of topologies after initial random removal. Moreover, when the abrupt transition point is above the failure threshold of dependency groups, the evolving network with the larger dependency groups is more vulnerable; when below it, the larger dependency groups make the network more robust. Numerical simulations employing the Erdős-Rényi network and Barabási-Albert scale free network are performed to validate our theoretical results.

  18. Design of Distributed Engine Control Systems with Uncertain Delay.

    PubMed

    Liu, Xiaofeng; Li, Yanxi; Sun, Xu

    Future gas turbine engine control systems will be based on distributed architecture, in which the sensors and actuators will be connected to the controllers via a communication network. The performance of the distributed engine control (DEC) is dependent on the network performance. This study introduces a distributed control system architecture based on a networked cascade control system (NCCS). Typical turboshaft engine distributed controllers are designed based on the NCCS framework with H∞ output feedback under network-induced time delays and uncertain disturbances. Sufficient conditions for robust stability are derived via Lyapunov stability theory and the linear matrix inequality approach. Both numerical and hardware-in-the-loop simulations illustrate the effectiveness of the presented method.

  19. How required reserve ratio affects distribution and velocity of money

    NASA Astrophysics Data System (ADS)

    Xi, Ning; Ding, Ning; Wang, Yougui

    2005-11-01

    In this paper the dependence of the wealth distribution and the velocity of money on the required reserve ratio is examined based on a random transfer model of money and computer simulations. A fractional reserve banking system is introduced to the model, where money creation can be achieved by bank loans and the monetary aggregate is determined by the monetary base and the required reserve ratio. It is shown that monetary wealth follows an asymmetric Laplace distribution and the latency time of money follows an exponential distribution. Expressions for the monetary wealth distribution and for the velocity of money in terms of the required reserve ratio are presented and are in good agreement with simulation results.

  20. Design of Distributed Engine Control Systems with Uncertain Delay

    PubMed Central

    Li, Yanxi; Sun, Xu

    2016-01-01

    Future gas turbine engine control systems will be based on distributed architecture, in which the sensors and actuators will be connected to the controllers via a communication network. The performance of the distributed engine control (DEC) is dependent on the network performance. This study introduces a distributed control system architecture based on a networked cascade control system (NCCS). Typical turboshaft engine distributed controllers are designed based on the NCCS framework with H∞ output feedback under network-induced time delays and uncertain disturbances. Sufficient conditions for robust stability are derived via Lyapunov stability theory and the linear matrix inequality approach. Both numerical and hardware-in-the-loop simulations illustrate the effectiveness of the presented method. PMID:27669005

  1. Multivariate stochastic analysis for Monthly hydrological time series at Cuyahoga River Basin

    NASA Astrophysics Data System (ADS)

    zhang, L.

    2011-12-01

    Copulas have become a powerful statistical and stochastic methodology for multivariate analysis in environmental and water resources engineering. In recent years, the popular one-parameter Archimedean copulas (e.g. the Gumbel-Hougaard, Cook-Johnson and Frank copulas) and the meta-elliptical copulas (e.g. the Gaussian and Student-t copulas) have been applied in multivariate hydrological analyses, e.g. multivariate rainfall (rainfall intensity, duration and depth), flood (peak discharge, duration and volume), and drought analyses (drought length, mean and minimum SPI values, and drought mean areal extent). Copulas have also been applied in flood frequency analysis at the confluences of river systems by taking into account the dependence among upstream gauge stations rather than by using the hydrological routing technique. In most of the studies above, the annual time series have been treated as stationary signals whose values are assumed to be independent and identically distributed (i.i.d.) random variables. In reality, however, hydrological time series, especially daily and monthly series, cannot be considered i.i.d. random variables because of the periodicity present in the data structure. The stationarity assumption is also in question because of climate change and land use and land cover (LULC) change in recent years. It is therefore necessary to re-evaluate the classic approach to the study of hydrological time series by relaxing the stationarity assumption through a nonstationary approach. Likewise, for the study of the dependence structure of hydrological time series, the assumption that all variables follow the same type of univariate distribution needs to be relaxed by adopting copula theory. In this paper, the univariate monthly hydrological time series are studied through a nonstationary time series analysis approach, and the dependence structure of the multivariate monthly hydrological time series is studied through copula theory. For parameter estimation, maximum likelihood estimation (MLE) is applied. To illustrate the method, the univariate time series model and the dependence structure are determined and tested using the monthly discharge time series of the Cuyahoga River Basin.
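
    For the Gaussian copula only, the combination of copulas and likelihood-based estimation can be sketched with the standard pseudo-likelihood recipe: transform each margin to pseudo-uniforms with ranks, map them to normal scores, and estimate the correlation matrix of those scores. This is our illustration, not the authors' model of the Cuyahoga series; the synthetic "discharge" data and all names are placeholders, and deseasonalization is omitted.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Stand-in for two monthly hydrological series (e.g. discharge at two gauges).
n = 360
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=n)
x = np.exp(z)                                   # lognormal-ish "discharge" data

def pseudo_uniforms(col):
    """Empirical-CDF transform using ranks, scaled by (n + 1) to stay in (0, 1)."""
    ranks = np.argsort(np.argsort(col)) + 1
    return ranks / (len(col) + 1.0)

# 1) Margins -> pseudo-uniforms (sidesteps choosing marginal distributions).
u = np.column_stack([pseudo_uniforms(x[:, j]) for j in range(x.shape[1])])

# 2) Pseudo-uniforms -> normal scores.
zscores = norm.ppf(u)

# 3) Gaussian-copula dependence estimate: correlation matrix of the normal
#    scores (a standard estimator, close to the pseudo-maximum-likelihood one).
rho = np.corrcoef(zscores, rowvar=False)
print("estimated copula correlation:\n", rho)
```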

  2. Baseline-dependent sampling and windowing for radio interferometry: data compression, field-of-interest shaping, and outer field suppression

    NASA Astrophysics Data System (ADS)

    Atemkeng, M.; Smirnov, O.; Tasse, C.; Foster, G.; Keimpema, A.; Paragi, Z.; Jonas, J.

    2018-07-01

    Traditional radio interferometric correlators produce regular-gridded samples of the true uv-distribution by averaging the signal over constant, discrete time-frequency intervals. This regular sampling and averaging in time-frequency translates into irregularly gridded samples in uv-space, and results in a baseline-length-dependent loss of amplitude and phase coherence that depends on the distance from the image phase centre. The effect is often referred to as `decorrelation' in the uv-space, which is equivalent in the source domain to `smearing'. This work discusses and implements a regular-gridded sampling scheme in the uv-space (baseline-dependent sampling) and windowing that allow for data compression, field-of-interest shaping, and source suppression. The baseline-dependent sampling requires irregular-gridded sampling in the time-frequency space, i.e. the time-frequency interval becomes baseline dependent. Analytic models and simulations are used to show that decorrelation remains constant across all the baselines when applying baseline-dependent sampling and windowing. Simulations using the MeerKAT telescope and the European Very Long Baseline Interferometry Network show that data compression, field-of-interest shaping, and outer field-of-interest suppression are all achieved.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curchod, Basile F. E.; Agostini, Federica, E-mail: agostini@mpi-halle.mpg.de; Gross, E. K. U.

    Nonadiabatic quantum interferences emerge whenever nuclear wavefunctions in different electronic states meet and interact in a nonadiabatic region. In this work, we analyze how nonadiabatic quantum interferences translate in the context of the exact factorization of the molecular wavefunction. In particular, we focus our attention on the shape of the time-dependent potential energy surface, the exact surface on which the nuclear dynamics takes place. We use a one-dimensional exactly solvable model to reproduce different conditions for quantum interferences, whose characteristic features already appear in one dimension. The time-dependent potential energy surface develops complex features when strong interferences are present, in clear contrast to the observed behavior in simple nonadiabatic crossing cases. Nevertheless, independent classical trajectories propagated on the exact time-dependent potential energy surface reasonably conserve a distribution in configuration space that mimics one of the exact nuclear probability densities.

  4. Decomposing intraday dependence in currency markets: evidence from the AUD/USD spot market

    NASA Astrophysics Data System (ADS)

    Batten, Jonathan A.; Ellis, Craig A.; Hogan, Warren P.

    2005-07-01

    The local Hurst exponent, a measure employed to detect the presence of dependence in a time series, may also be used to investigate the source of intraday variation observed in the returns in foreign exchange markets. Given that changes in the local Hurst exponent may be due to a time-varying range, a time-varying standard deviation, or both of these simultaneously, values for the range, standard deviation and local Hurst exponent are recorded and analyzed separately. To illustrate this approach, a high-frequency data set of the spot Australian dollar/US dollar provides evidence of the returns distribution across the 24-hour trading ‘day’, with time-varying dependence and volatility clearly aligning with the opening and closing of markets. This variation is attributed to the effects of liquidity and the price-discovery actions of dealers.
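
    One common way to compute a local Hurst exponent, rescaled-range (R/S) analysis in a sliding window, is sketched below. We do not know that this is the estimator used in the study; the window length, scales, and the synthetic return series are arbitrary choices of ours.

```python
import numpy as np

def rs_hurst(x, scales=(8, 16, 32, 64)):
    """Hurst exponent of a short series via rescaled-range analysis."""
    log_n, log_rs = [], []
    for n in scales:
        if n > len(x):
            continue
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())     # mean-adjusted cumulative sum
            r = dev.max() - dev.min()             # range
            s = seg.std(ddof=1)                   # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    # Slope of log(R/S) against log(n) estimates H.
    return np.polyfit(log_n, log_rs, 1)[0]

def local_hurst(returns, window=256, step=64):
    """Hurst exponent estimated in sliding windows over a return series."""
    return [(i, rs_hurst(returns[i:i + window]))
            for i in range(0, len(returns) - window + 1, step)]

rng = np.random.default_rng(3)
returns = rng.normal(0, 1e-4, 5000)               # stand-in for AUD/USD returns
for i, h in local_hurst(returns)[:5]:
    print(f"window starting at {i}: H ≈ {h:.2f}")
```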

  5. The Locations of Ring Current Pressure Peaks: Comparison of TWINS Measurements and CIMI Simulations for the 7-10 September 2015 CIR Storm

    NASA Astrophysics Data System (ADS)

    Hill, S. C.; Edmond, J. A.; Xu, H.; Perez, J. D.; Fok, M. C. H.; Goldstein, J.; McComas, D. J.; Valek, P. W.

    2017-12-01

    The characteristics of a four day 7-10 September 2015 co-rotating interaction region (CIR) storm (min. SYM/H ≤ -110 nT) are categorized by storm phase. Ion distributions of trapped particles in the ring current as measured by the Two Wide-Angle Imaging Neutral Atom Spectrometers (TWINS) are compared with the simulated ion distributions of the Comprehensive Inner Magnetosphere-Ionosphere Model (CIMI). The energetic neutral atom (ENA) images obtained by TWINS are deconvolved to extract equatorial pitch angle, energy spectra, ion pressure intensity, and ion pressure anisotropy distributions in the inner magnetosphere. CIMI, using either a self-consistent electric field or a semi-empirical electric field, simulates comparable distributions. There is good agreement between the data measured by TWINS and the different distributions produced by the self-consistent electric field and the semi-empirical electric field of CIMI. Throughout the storm the pitch angle distribution (PAD) is mostly perpendicular in both CIMI and TWINS and there is agreement between the anisotropy distributions. The locations of the ion pressure peaks seen by TWINS and by the self-consistent and semi empirical electric field parameters in CIMI are usually between dusk and midnight. On average, the self-consistent electric field in CIMI reveals ion pressure peaks closer to Earth than its semi empirical counterpart, while TWINS reports somewhat larger radial values for the ion pressure peak locations. There are also notable events throughout the storm during which the simulated observations show some characteristics that differ from those measured by TWINS. At times, there are ion pressure peaks with magnetic local time on the dayside and in the midnight to dawn region. We discuss these events in light of substorm injections indicated by fluctuating peaks in the AE index and a positive By component in the solar wind. There are also times in which there are multiple ion pressure peaks. This may imply that there are time dependent and spatially dependent injection events that are influenced by local reconnection regions in the tail of the magnetosphere. Using CIMI simulations, we present paths of particles with various energies to assist in interpreting these notable events.

  6. Hypothesis Testing Using Spatially Dependent Heavy Tailed Multisensor Data

    DTIC Science & Technology

    2014-12-01

    ... consistent with the null hypothesis of linearity and can be used to estimate the distribution of a test statistic that can discriminate between the null ... Test for nonlinearity: the histogram is generated using the surrogate data, and the statistic of the original time series is represented by the solid line.

  7. Blading Design for Axial Turbomachines

    DTIC Science & Technology

    1989-05-01

    ... three-dimensional, viscous computation systems appear to have a long development period ahead, in which fluid shear stress modeling and computation time ... and n directions and T is the shear stress. As a consequence the solution time is longer than for integral methods, dependent largely on the accuracy of ... distributions over airfoils is an adaptation of thin plate deflection theory from stress analysis. At the same time, it minimizes designer effort

  8. Time-dependent source model of the Lusi mud volcano

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Rudolph, M. L.; Manga, M.

    2014-12-01

    The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began erupting in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi at 0.5-1.5 km and 4-5.5km depth as well as another shallow zone, 7 km to the west of Lusi and underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.

  9. Implications of Atmospheric Test Fallout Data for Nuclear Winter.

    NASA Astrophysics Data System (ADS)

    Baker, George Harold, III

    1987-09-01

    Atmospheric test fallout data have been used to determine admissible dust particle size distributions for nuclear winter studies. The research was originally motivated by extreme differences noted in the magnitude and longevity of dust effects predicted by particle size distributions routinely used in fallout predictions versus those used for nuclear winter studies. Three different sets of historical data have been analyzed: (1) Stratospheric burden of Strontium-90 and Tungsten-185, 1954-1967 (92 contributing events); (2) Continental U.S. Strontium-90 fallout through 1958 (75 contributing events); (3) Local fallout from selected Nevada tests (16 events). The contribution of dust to possible long term climate effects following a nuclear exchange depends strongly on the particle size distribution. The distribution affects both the atmospheric residence time and the optical depth. One-dimensional models of stratospheric/tropospheric fallout removal were developed and used to identify optimum particle distributions. Results indicate that particle distributions which properly predict bulk stratospheric activity transfer tend to be somewhat smaller than the number size distributions used in initial nuclear winter studies. In addition, both 90Sr and 185W fallout behavior is better predicted by the lognormal distribution function than by the prevalent power law hybrid function. It is shown that the power law behavior of particle samples may well be an aberration of gravitational cloud stratification. Results support the possible existence of two independent particle size distributions in clouds generated by surface or near surface bursts. One distribution governs late time stratospheric fallout, the other governs early time fallout. A bimodal lognormal distribution is proposed to describe the cloud particle population. The distribution predicts higher initial sunlight attenuation and lower late time attenuation than the power law hybrid function used in initial nuclear winter studies.

  10. Distributions-per-level: a means of testing level detectors and models of patch-clamp data.

    PubMed

    Schröder, I; Huth, T; Suitchmezian, V; Jarosik, J; Schnell, S; Hansen, U P

    2004-01-01

    Level or jump detectors generate the reconstructed time series from a noisy record of patch-clamp current. The reconstructed time series is used to create dwell-time histograms for the kinetic analysis of the Markov model of the investigated ion channel. It is shown here that some additional lines in the software of such a detector can provide a powerful new means of patch-clamp analysis. For each current level that can be recognized by the detector, an array is declared. The new software assigns every data point of the original time series to the array that belongs to the actual state of the detector. From the data sets in these arrays distributions-per-level are generated. Simulated and experimental time series analyzed by Hinkley detectors are used to demonstrate the benefits of these distributions-per-level. First, they can serve as a test of the reliability of jump and level detectors. Second, they can reveal beta distributions as resulting from fast gating that would usually be hidden in the overall amplitude histogram. Probably the most valuable feature is that the malfunctions of the Hinkley detectors turn out to depend on the Markov model of the ion channel. Thus, the errors revealed by the distributions-per-level can be used to distinguish between different putative Markov models of the measured time series.
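
    The bookkeeping described here, assigning every raw data point to an array according to the level the detector currently reports and then histogramming each array, is simple enough to sketch. In the sketch below the detector itself is not implemented; for brevity the "reconstructed" series is taken to be a given idealized level sequence, and the two-level example data are synthetic.

```python
import numpy as np

def distributions_per_level(raw, reconstructed, levels, bins=100):
    """Amplitude histograms of the raw record, split by detected level.

    raw           : noisy current samples
    reconstructed : idealized time series from the jump/level detector
                    (same length as raw, values taken from `levels`)
    levels        : the discrete current levels the detector can report
    """
    edges = np.linspace(raw.min(), raw.max(), bins + 1)
    hists = {}
    for lev in levels:
        samples = raw[reconstructed == lev]      # points assigned to this level
        hists[lev] = np.histogram(samples, bins=edges)[0]
    return edges, hists

# Tiny synthetic example: a two-level channel record with Gaussian noise.
rng = np.random.default_rng(4)
true_levels = rng.choice([0.0, 1.0], size=20000, p=[0.6, 0.4])
raw = true_levels + rng.normal(0, 0.2, size=true_levels.size)

edges, hists = distributions_per_level(raw, true_levels, levels=[0.0, 1.0])
print({lev: int(h.sum()) for lev, h in hists.items()})
```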

  11. Memoryless control of boundary concentrations of diffusing particles.

    PubMed

    Singer, A; Schuss, Z; Nadler, B; Eisenberg, R S

    2004-12-01

    Flux between regions of different concentration occurs in nearly every device involving diffusion, whether an electrochemical cell, a bipolar transistor, or a protein channel in a biological membrane. Diffusion theory has calculated that flux since the time of Fick (1855), and the flux has been known to arise from the stochastic behavior of Brownian trajectories since the time of Einstein (1905), yet the mathematical description of the behavior of trajectories corresponding to different types of boundaries is not complete. We consider the trajectories of noninteracting particles diffusing in a finite region connecting two baths of fixed concentrations. Inside the region, the trajectories of diffusing particles are governed by the Langevin equation. To maintain average concentrations at the boundaries of the region at their values in the baths, a control mechanism is needed to set the boundary dynamics of the trajectories. Different control mechanisms are used in Langevin and Brownian simulations of such systems. We analyze models of controllers and derive equations for the time evolution and spatial distribution of particles inside the domain. Our analysis shows a distinct difference between the time evolution and the steady state concentrations. While the time evolution of the density is governed by an integral operator, the spatial distribution is governed by the familiar Fokker-Planck operator. The boundary conditions for the time dependent density depend on the model of the controller; however, this dependence disappears in the steady state, if the controller is of a renewal type. Renewal-type controllers, however, produce spurious boundary layers that can be catastrophic in simulations of charged particles, because even a tiny net charge can have global effects. The design of a nonrenewal controller that maintains concentrations of noninteracting particles without creating spurious boundary layers at the interface requires the solution of the time-dependent Fokker-Planck equation with absorption of outgoing trajectories and a source of ingoing trajectories on the boundary (the so-called albedo problem).

  12. Correlated Time-Variation of Asphalt Rheology and Bulk Microstructure

    NASA Astrophysics Data System (ADS)

    Ramm, Adam; Nazmus, Sakib; Bhasin, Amit; Downer, Michael

    We use noncontact optical microscopy and optical scattering in the visible and near-infrared spectrum on Performance Grade (PG) asphalt binder to confirm the existence of microstructures in the bulk. The number of visible microstructures increases linearly as penetration depth of the incident radiation increases, which verifies a uniform volume distribution of microstructures. We use dark field optical scatter in the near-infrared to measure the temperature dependent behavior of the bulk microstructures and compare this behavior with Dynamic Shear Rheometer (DSR) measurements of the bulk complex shear modulus | G* (T) | . The main findings are: (1) After reaching thermal equilibrium, both temperature dependent optical scatter intensity (I (T)) and bulk shear modulus (| G* (T) |) continue to change appreciably for times much greater than thermal equilibration times. (2) The hysteresis behavior during a complete temperature cycle seen in previous work derives from a larger time dependence in the cooling step compared with the heating step. (3) Different binder aging conditions show different thermal time-variations for both I (T) and | G* (T) | .

  13. Cometary atmospheres: Modeling the spatial distribution of observed neutral radicals

    NASA Technical Reports Server (NTRS)

    Combi, M. R.

    1985-01-01

    Progress on modeling the spatial distributions of cometary radicals is described. The Monte Carlo particle-trajectory model was generalized to include the full time dependencies of initial comet expansion velocities, nucleus vaporization rates, photochemical lifetimes and photon emission rates which enter the problem through the comet's changing heliocentric distance and velocity. The effect of multiple collisions in the transition zone from collisional coupling to true free flow were also included. Currently available observations of the spatial distributions of the neutral radicals, as well as the latest available photochemical data were re-evaluated. Preliminary exploratory model results testing the effects of various processes on observable spatial distributions are also discussed.

  14. Phase space explorations in time dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Rajam, Aruna K.

    Time-dependent density functional theory (TDDFT) is one of the most useful tools for studying the dynamic behavior of correlated electronic systems under the influence of external potentials. The success of this formally exact theory relies in practice on approximations for the exchange-correlation potential, which is a complicated functional of the density, non-local in space and time. Adiabatic approximations (such as ALDA), which are local in time, are most commonly used in the growing range of applications of the field. Going beyond ALDA has proved difficult, leading to mathematical inconsistencies. We explore the regions where the theory faces challenges and try to answer some of them via insights from two-electron model systems. In this thesis work we propose a phase-space extension of TDDFT, addressing the challenges the theory currently faces by exploring the one-body phase space. We give a general introduction to the theory and its mathematical background in the first chapter. In the second chapter, we carry out a detailed study of instantaneous phase-space densities and argue that functionals of distributions can be a better alternative for addressing the nonlocality of the exchange-correlation potentials. For this we study in detail the interacting and the non-interacting phase-space distributions for the Hooke's atom model. The applicability of ALDA-based TDDFT to dynamics in strong fields can become severely problematic due to the failure of the single-Slater-determinant picture. In the third chapter, we analyze how phase-space distributions can shed some light on this problem. We carry out a comparative study of Kohn-Sham and interacting phase-space and momentum distributions for single-ionization and double-ionization systems. Using a simple model of two-electron systems, we show that the momentum distribution computed directly from the exact KS system contains spurious oscillations: a non-classical description of the essentially classical two-electron dynamics. In time-dependent density matrix functional theory (TDDMFT), the evolution scheme of the 1RDM (first-order reduced density matrix) contains the second-order reduced density matrix (2RDM), which has to be expressed in terms of 1RDMs. Any uncorrelated approximation (Hartree-Fock) for the 2RDM fails to capture the natural occupations of the system. In the fourth chapter, we show that by applying quasi-classical and semi-classical approximations one can capture the natural occupations of excited systems. We study a time-dependent Moshinsky atom model for this purpose. The fifth chapter contains a comparative study of the existing non-local exchange-correlation kernels based on the current-density-response framework and the co-moving framework. We show that the two approaches, though coinciding with each other in the linear response regime, turn out to be different in the non-linear regime.

  15. Random walk in degree space and the time-dependent Watts-Strogatz model

    NASA Astrophysics Data System (ADS)

    Casa Grande, H. L.; Cotacallapa, M.; Hase, M. O.

    2017-01-01

    In this work, we propose a scheme that provides an analytical estimate for the time-dependent degree distribution of some networks. This scheme maps the problem into a random walk in degree space, and then we choose the paths that are responsible for the dominant contributions. The method is illustrated on the dynamical versions of the Erdős-Rényi and Watts-Strogatz graphs, which were introduced as static models in the original formulation. We have succeeded in obtaining an analytical form for the dynamical Watts-Strogatz model, which is asymptotically exact for some regimes.

  16. Random walk in degree space and the time-dependent Watts-Strogatz model.

    PubMed

    Casa Grande, H L; Cotacallapa, M; Hase, M O

    2017-01-01

    In this work, we propose a scheme that provides an analytical estimate for the time-dependent degree distribution of some networks. This scheme maps the problem into a random walk in degree space, and then we choose the paths that are responsible for the dominant contributions. The method is illustrated on the dynamical versions of the Erdős-Rényi and Watts-Strogatz graphs, which were introduced as static models in the original formulation. We have succeeded in obtaining an analytical form for the dynamical Watts-Strogatz model, which is asymptotically exact for some regimes.

  17. Quantum work statistics of charged Dirac particles in time-dependent fields

    DOE PAGES

    Deffner, Sebastian; Saxena, Avadh

    2015-09-28

    The quantum Jarzynski equality is an important theorem of modern quantum thermodynamics. We show that the Jarzynski equality readily generalizes to relativistic quantum mechanics described by the Dirac equation. After establishing the conceptual framework we solve a pedagogical, yet experimentally relevant, system analytically. As a main result we obtain the exact quantum work distributions for charged particles traveling through a time-dependent vector potential evolving under Schrödinger as well as under Dirac dynamics, and for which the Jarzynski equality is verified. Thus, special emphasis is put on the conceptual and technical subtleties arising from relativistic quantum mechanics.
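
    The Schrödinger-dynamics part of such a check can be reproduced numerically for the simplest possible case, a driven two-level system, with the two-projective-measurement definition of work. This is not the system solved in the paper (charged particles in a time-dependent vector potential); the Hamiltonian, protocol, and temperature below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm, eigh

beta = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t, tau=1.0):
    """Driven two-level Hamiltonian (arbitrary illustrative protocol)."""
    return sz + 2.0 * (t / tau) * sx

# Time-ordered propagator built from a product of short-time exponentials.
tau, steps = 1.0, 400
dt = tau / steps
U = np.eye(2, dtype=complex)
for k in range(steps):
    U = expm(-1j * H((k + 0.5) * dt, tau) * dt) @ U

E0, V0 = eigh(H(0.0, tau))           # initial eigenbasis
E1, V1 = eigh(H(tau, tau))           # final eigenbasis
p0 = np.exp(-beta * E0) / np.exp(-beta * E0).sum()   # initial thermal occupations

# Two-measurement work statistics: transition probabilities |<m|U|n>|^2.
amp = V1.conj().T @ U @ V0
prob = np.abs(amp) ** 2

jarzynski_lhs = sum(p0[n] * prob[m, n] * np.exp(-beta * (E1[m] - E0[n]))
                    for n in range(2) for m in range(2))
delta_F = -np.log(np.exp(-beta * E1).sum() / np.exp(-beta * E0).sum()) / beta
print("<exp(-beta W)> =", jarzynski_lhs.real)
print("exp(-beta dF)  =", np.exp(-beta * delta_F))
```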

  18. Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.

    PubMed

    Guo, Lian; Radisic, Aleksandar; Searson, Peter C

    2005-12-22

    Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
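
    A toy two-dimensional lattice caricature of the idea, random-walking ions that stick with different probabilities to the bare substrate and to already-deposited metal, is sketched below. It is emphatically not the authors' three-dimensional Brownian-dynamics KMC; it only illustrates how the two attachment probabilities control nucleation versus growth, and every parameter is an assumption of ours.

```python
import numpy as np

rng = np.random.default_rng(5)

def deposit(width=100, height=60, n_ions=1500, p_sub=0.01, p_met=0.5):
    """Toy 2-D deposition: ions random-walk on a lattice and attach probabilistically.

    p_sub : sticking probability next to the bare substrate (nucleation)
    p_met : sticking probability next to already-deposited metal (growth)
    """
    occupied = np.zeros((height, width), dtype=bool)    # deposited metal
    for _ in range(n_ions):
        x, y = int(rng.integers(width)), height - 1     # release near the top
        while True:
            near_sub = (y == 0)
            near_met = occupied[max(y - 1, 0):y + 2,
                                max(x - 1, 0):min(x + 2, width)].any()
            if near_met and rng.random() < p_met:
                occupied[y, x] = True
                break
            if near_sub and rng.random() < p_sub:
                occupied[y, x] = True
                break
            # Otherwise take a random-walk step (biased slightly downward).
            x = (x + int(rng.integers(-1, 2))) % width
            y = int(np.clip(y + rng.choice([-1, -1, 0, 1]), 0, height - 1))
    # Crude island count: occupied first-row sites whose left neighbour is
    # empty (periodic in x).
    first_row = occupied[0]
    islands = int(np.sum(first_row & ~np.roll(first_row, 1)))
    return occupied, islands

_, n_islands = deposit()
print("islands on the substrate:", n_islands)
```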

  19. Avalanches and plasticity for colloids in a time dependent optical trap

    DOE PAGES

    Olson Reichhardt, Cynthia Jane; McDermott, Danielle Marie; Reichhardt, Charles

    2015-08-25

    With the use of optical traps it is possible to confine assemblies of colloidal particles in two-dimensional and quasi-one-dimensional arrays. Here we examine how colloidal particles rearrange in a quasi-one-dimensional trap with a time-dependent confining potential. The particle motion occurs both through slow elastic uniaxial distortions and through abrupt large-scale two-dimensional avalanches associated with plastic rearrangements. During the avalanches the particle velocity distributions extend over a broad range and can be fit to a power law consistent with other studies of plastic events mediated by dislocations.

  20. Optical extinction dependence on wavelength and size distribution of airborne dust

    NASA Astrophysics Data System (ADS)

    Pangle, Garrett E.; Hook, D. A.; Long, Brandon J. N.; Philbrick, C. R.; Hallen, Hans D.

    2013-05-01

    The optical scattering from laser beams propagating through atmospheric aerosols has been shown to be very useful in describing air pollution aerosol properties. This research explores and extends that capability to particulate matter. The optical properties of Arizona Road Dust (ARD) samples are measured in a chamber that simulates the particle dispersal of dust aerosols in the atmospheric environment. Visible, near infrared, and long wave infrared lasers are used. Optical scattering measurements show the expected dependence of laser wavelength and particle size on the extinction of laser beams. The extinction at long wavelengths demonstrates reduced scattering, but chemical absorption of dust species must be considered. The extinction and depolarization of laser wavelengths interacting with several size cuts of ARD are examined. The measurements include studies of different size distributions, and their evolution over time is recorded by an Aerodynamic Particle Sizer. We analyze the size-dependent extinction and depolarization of ARD. We present a method of predicting extinction for an arbitrary ARD size distribution. These studies provide new insights for understanding the optical propagation of laser beams through airborne particulate matter.
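
    The dependence of extinction on the size distribution enters through the standard integral beta_ext(lambda) = ∫ Q_ext(r, lambda) π r² n(r) dr. The sketch below evaluates a discretized version of it for a lognormal distribution, using a crude placeholder for the extinction efficiency Q_ext (ramping up to the geometric-optics limit of 2 at large size parameter); a real prediction would use Mie theory with the dust refractive index and the measured APS size distribution. All parameters are illustrative, not ARD values from the study.

```python
import numpy as np

def extinction_coefficient(radii_um, number_density, wavelength_um,
                           q_ext=lambda x: np.where(x > 5, 2.0, 0.4 * x)):
    """Extinction coefficient from a discretized size distribution.

    radii_um       : bin-centre radii (micrometres)
    number_density : particles per cm^3 in each bin
    q_ext          : extinction efficiency vs size parameter x = 2*pi*r/lambda
                     (placeholder here; Mie theory would supply the real one)
    Returns beta_ext in 1/km.
    """
    x = 2 * np.pi * radii_um / wavelength_um
    cross_sections_um2 = q_ext(x) * np.pi * radii_um**2
    beta_per_cm = np.sum(number_density * cross_sections_um2) * 1e-8  # um^2 -> cm^2
    return beta_per_cm * 1e5                                          # 1/cm -> 1/km

# Lognormal dust-like size distribution (illustrative parameters only).
r = np.geomspace(0.1, 20.0, 200)                        # micrometres
dlogr = np.diff(np.log(r)).mean()
n = 50.0 * np.exp(-0.5 * (np.log(r / 2.0) / 0.7) ** 2) * dlogr   # per cm^3 per bin

for lam in (0.532, 1.55, 10.6):                         # visible, NIR, LWIR (um)
    print(f"lambda = {lam:5.3f} um  beta_ext ≈ {extinction_coefficient(r, n, lam):.3f} 1/km")
```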

  1. Method and device for landing aircraft dependent on runway occupancy time

    NASA Technical Reports Server (NTRS)

    Ghalebsaz Jeddi, Babak (Inventor)

    2012-01-01

    A technique for landing aircraft using an aircraft landing accident avoidance device is disclosed. The technique includes determining at least two probability distribution functions; determining a safe lower limit on a separation between a lead aircraft and a trail aircraft on a glide slope to the runway; determining a maximum sustainable safe attempt-to-land rate on the runway based on the safe lower limit and the probability distribution functions; directing the trail aircraft to enter the glide slope with a target separation from the lead aircraft corresponding to the maximum sustainable safe attempt-to-land rate; while the trail aircraft is in the glide slope, determining an actual separation between the lead aircraft and the trail aircraft; and directing the trail aircraft to execute a go-around maneuver if the actual separation approaches the safe lower limit. Probability distribution functions include runway occupancy time, and landing time interval and/or inter-arrival distance.

  2. Time-dependent spatial intensity profiles of near-infrared idler pulses from nanosecond optical parametric oscillators

    NASA Astrophysics Data System (ADS)

    Olafsen, L. J.; Olafsen, J. S.; Eaves, I. K.

    2018-06-01

    We report on an experimental investigation of the time-dependent spatial intensity distribution of near-infrared idler pulses from an optical parametric oscillator measured using an infrared (IR) camera, in contrast to beam profiles obtained using traditional knife-edge techniques. Comparisons show that the information gained by utilizing the thermal camera provides more detail than the spatially- or time-averaged measurements from a knife-edge profile. Synchronization, averaging, and thresholding techniques are applied to enhance the images acquired. The additional information obtained can improve the process by which semiconductor devices and other IR lasers are characterized for their beam quality and output response, and thereby result in IR devices with higher performance.

  3. Modeling correlated bursts by the bursty-get-burstier mechanism

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun

    2017-12-01

    Temporal correlations of time series or event sequences in natural and social phenomena have been characterized by power-law decaying autocorrelation functions with decaying exponent γ . Such temporal correlations can be understood in terms of power-law distributed interevent times with exponent α and/or correlations between interevent times. The latter, often called correlated bursts, has recently been studied by measuring power-law distributed bursty trains with exponent β . A scaling relation between α and γ has been established for the uncorrelated interevent times, while little is known about the effects of correlated interevent times on temporal correlations. In order to study these effects, we devise the bursty-get-burstier model for correlated bursts, by which one can tune the degree of correlations between interevent times, while keeping the same interevent time distribution. We numerically find that sufficiently strong correlations between interevent times could violate the scaling relation between α and γ for the uncorrelated case. A nontrivial dependence of γ on β is also found for some range of α . The implication of our results is discussed in terms of the hierarchical organization of bursty trains at various time scales.
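
    Measuring the quantities mentioned above starts from the interevent times: the exponent α from their distribution, β from the sizes of "bursty trains" (runs of events whose consecutive interevent times all fall below a window Δt), and γ from the autocorrelation of the event signal. The sketch below only extracts interevent times and burst-train sizes from a list of event timestamps; it is not the authors' bursty-get-burstier generator, and the power-law exponent and Δt are arbitrary.

```python
import numpy as np

def interevent_times(event_times):
    """Interevent times of a sorted sequence of event timestamps."""
    return np.diff(np.sort(np.asarray(event_times)))

def bursty_train_sizes(event_times, dt_window):
    """Sizes of bursty trains: maximal runs of events whose consecutive
    interevent times are all smaller than dt_window."""
    taus = interevent_times(event_times)
    sizes, current = [], 1
    for tau in taus:
        if tau < dt_window:
            current += 1
        else:
            sizes.append(current)
            current = 1
    sizes.append(current)
    return np.array(sizes)

# Illustrative event sequence with power-law interevent times (exponent ~ 2.3).
rng = np.random.default_rng(6)
taus = rng.pareto(1.3, size=20000) + 1.0     # P(tau) ~ tau^(-2.3) for tau >= 1
events = np.cumsum(taus)

sizes = bursty_train_sizes(events, dt_window=5.0)
print("mean interevent time:", taus.mean())
print("largest bursty train:", sizes.max(), "events")
```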

  4. Learning rules for spike timing-dependent plasticity depend on dendritic synapse location.

    PubMed

    Letzkus, Johannes J; Kampa, Björn M; Stuart, Greg J

    2006-10-11

    Previous studies focusing on the temporal rules governing changes in synaptic strength during spike timing-dependent synaptic plasticity (STDP) have paid little attention to the fact that synaptic inputs are distributed across complex dendritic trees. During STDP, propagation of action potentials (APs) back to the site of synaptic input is thought to trigger plasticity. However, in pyramidal neurons, backpropagation of single APs is decremental, whereas high-frequency bursts lead to generation of distal dendritic calcium spikes. This raises the question whether STDP learning rules depend on synapse location and firing mode. Here, we investigate this issue at synapses between layer 2/3 and layer 5 pyramidal neurons in somatosensory cortex. We find that low-frequency pairing of single APs at positive times leads to a distance-dependent shift to long-term depression (LTD) at distal inputs. At proximal sites, this LTD could be converted to long-term potentiation (LTP) by dendritic depolarizations suprathreshold for BAC-firing or by high-frequency AP bursts. During AP bursts, we observed a progressive, distance-dependent shift in the timing requirements for induction of LTP and LTD, such that distal synapses display novel timing rules: they potentiate when inputs are activated after burst onset (negative timing) but depress when activated before burst onset (positive timing). These findings could be explained by distance-dependent differences in the underlying dendritic voltage waveforms driving NMDA receptor activation during STDP induction. Our results suggest that synapse location within the dendritic tree is a crucial determinant of STDP, and that synapses undergo plasticity according to local rather than global learning rules.

  5. Study on reservoir time-varying design flood of inflow based on Poisson process with time-dependent parameters

    NASA Astrophysics Data System (ADS)

    Li, Jiqing; Huang, Jing; Li, Jianchang

    2018-06-01

    The time-varying design flood can make full use of the measured data, which provides the reservoir with a basis for both flood control and operation scheduling. This paper adopts the peak-over-threshold method for flood sampling in unit periods and a Poisson process with time-dependent parameters for simulating the reservoir's time-varying design flood. Considering the relationship between the model parameters and its hypotheses, the over-threshold intensity, the goodness of fit of the Poisson distribution, and the design flood parameters are used as the criteria for selecting the unit period and threshold of the time-varying design flood, and the time-varying design flood process of the Longyangxia reservoir is derived at nine design frequencies. The time-varying design flood of inflow is closer to the reservoir's actual inflow conditions and can be used to adjust the operating water level in the flood season and to make plans for resource utilization of floods in the basin.
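
    As a generic illustration of the sampling scheme described (not the paper's calibrated model), one can generate over-threshold flood occurrences from a seasonally varying Poisson intensity by Lewis-Shedler thinning and attach exceedance magnitudes from a generalized Pareto distribution; the rate function and GPD parameters below are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(1)

    def seasonal_rate(t):
        """Assumed over-threshold occurrence rate (events/day), peaking in the flood season."""
        return 0.02 + 0.08 * np.exp(-0.5 * ((t - 200.0) / 30.0) ** 2)   # t in day of year

    def thinning(rate, t_max, rate_max):
        """Lewis-Shedler thinning for a nonhomogeneous Poisson process."""
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / rate_max)
            if t > t_max:
                return np.array(events)
            if rng.random() < rate(t) / rate_max:
                events.append(t)

    occurrence_times = thinning(seasonal_rate, 365.0, 0.10)
    # Exceedances above the threshold from a GPD (shape and scale are illustrative only)
    exceedances = genpareto.rvs(c=0.1, scale=500.0, size=occurrence_times.size, random_state=2)
    print(len(occurrence_times), "over-threshold floods simulated in one year")
    ```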

  6. Comparison of algorithms to generate event times conditional on time-dependent covariates.

    PubMed

    Sylvestre, Marie-Pierre; Abrahamowicz, Michal

    2008-06-30

    The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.
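
    The binomial-model idea (one of the two classes compared; the permutational algorithm itself is more involved) can be sketched as follows: discretize time and, at each step, let the event occur with probability equal to the current hazard times the step length, where the hazard depends on the current value of the time-dependent covariate. The hazard form and parameter values are assumptions chosen for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_event_time(h0, beta, z_switch, t_max, dt=1.0):
        """Discrete-time (binomial) generation: event probability per step is
        h0 * exp(beta * Z(t)) * dt, where the binary covariate Z(t) turns on at z_switch."""
        t = 0.0
        while t < t_max:
            z = 1.0 if t >= z_switch else 0.0
            if rng.random() < h0 * np.exp(beta * z) * dt:
                return t, False                      # event time, not censored
            t += dt
        return t_max, True                           # administratively censored

    sample = [simulate_event_time(h0=0.01, beta=0.7, z_switch=rng.uniform(0, 50), t_max=100)
              for _ in range(5)]
    print(sample)
    ```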

  7. What are the Shapes of Response Time Distributions in Visual Search?

    PubMed Central

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure reaction time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays in each of three classic search tasks: feature search, with the target defined by color; conjunction search, with the target defined by both color and orientation; and spatial configuration search for a 2 among distractor 5s. This large data set allows us to characterize the RT distributions in detail. We present the raw RT distributions and fit several psychologically motivated functions (ex-Gaussian, ex-Wald, Gamma, and Weibull) to the data. We analyze and interpret parameter trends from these four functions within the context of theories of visual search. PMID:21090905
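
    For readers who want to reproduce this kind of analysis on their own data, a minimal fitting sketch (not the paper's pipeline) is shown below: simulate ex-Gaussian "reaction times" and recover the parameters by maximum likelihood with scipy's exponentially modified normal distribution, then compare against a Gamma fit by log-likelihood. The parameter values are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Simulated "reaction times" (ms): Gaussian component plus exponential tail (ex-Gaussian)
    mu, sigma, tau = 450.0, 50.0, 150.0
    rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

    # Maximum-likelihood fit; scipy's exponnorm uses K = tau / sigma
    K_hat, loc_hat, scale_hat = stats.exponnorm.fit(rts)
    print(f"mu ~ {loc_hat:.0f} ms, sigma ~ {scale_hat:.0f} ms, tau ~ {K_hat * scale_hat:.0f} ms")

    # Crude model comparison by log-likelihood (ex-Gaussian vs Gamma)
    ll_exg = stats.exponnorm.logpdf(rts, K_hat, loc_hat, scale_hat).sum()
    ll_gam = stats.gamma.logpdf(rts, *stats.gamma.fit(rts)).sum()
    print("logL ex-Gaussian:", round(ll_exg), "  logL Gamma:", round(ll_gam))
    ```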

  8. Optimal control of a rabies epidemic model with a birth pulse.

    PubMed

    Clayton, Tim; Duke-Sylvester, Scott; Gross, Louis J; Lenhart, Suzanne; Real, Leslie A

    2010-01-01

    A system of ordinary differential equations describes the population dynamics of a rabies epidemic in raccoons. The model accounts for the dynamics of a vaccine, including loss of vaccine due to animal consumption and loss from factors other than raccoon uptake. A control method to reduce the spread of disease is introduced through temporal distribution of vaccine packets. This work incorporates the effect of the seasonal birth pulse in the raccoon population and the attendant increase in newborns, which are susceptible to the disease, analysing the impact of the timing and length of this pulse on the optimal distribution of vaccine packets. The optimization criterion is to minimize the number of infected raccoons while minimizing the cost of distributing the vaccine. Using an optimal control setting, numerical results illustrate strategies for distributing the vaccine depending on the timing of the infection outbreak with respect to the birth pulse.

  9. Optimal Control of a Rabies Epidemic Model with a Birth Pulse

    PubMed Central

    Clayton, Tim; Duke-Sylvester, Scott; Gross, Louis J.; Lenhart, Suzanne; Real, Leslie A.

    2011-01-01

    A system of ordinary differential equations describes the population dynamics of a rabies epidemic in raccoons. The model accounts for the dynamics of the vaccine, including loss of vaccine due to animal consumption and loss from factors other than raccoon uptake. A control method to reduce the spread of disease is introduced through temporal distribution of vaccine packets. This work incorporates the effect of the seasonal birth pulse in the raccoon population and the attendant increase in newborns, which are susceptible to the disease, analysing the impact of the timing and length of this pulse on the optimal distribution of vaccine packets. The optimization criterion is to minimize the number of infected raccoons while minimizing the cost of distributing the vaccine. Using an optimal control setting, numerical results illustrate strategies for distributing the vaccine depending on the timing of the infection outbreak with respect to the birth pulse. PMID:21423822

  10. Rapid Temporal Changes of Midtropospheric Winds

    NASA Technical Reports Server (NTRS)

    Merceret, Francis J.

    1997-01-01

    The statistical distribution of the magnitude of the vector wind change over 0.25-, 1-, 2-, and 4-h periods based on data from October 1995 through March 1996 over central Florida is presented. The wind changes at altitudes from 6 to 17 km were measured using the Kennedy Space Center 50-MHz Doppler radar wind profiler. Quality-controlled profiles were produced every 5 min for 112 gates, each representing 150 m in altitude. Gates 28 through 100 were selected for analysis because of their significance to ascending space launch vehicles. The distribution was found to be lognormal. The parameters of the lognormal distribution depend systematically on the time interval. This dependence is consistent with the behavior of structure functions in the f^(5/3) spectral regime. There is a small difference between the 1995 data and the 1996 data, which may represent a weak seasonal effect.

  11. Feast and Famine: regulation of black hole growth in low-redshift galaxies

    NASA Astrophysics Data System (ADS)

    Kauffmann, Guinevere; Heckman, Timothy M.

    2009-07-01

    We analyse the observed distribution of Eddington ratios (L/LEdd) as a function of supermassive black hole mass for a large sample of nearby galaxies drawn from the Sloan Digital Sky Survey. We demonstrate that there are two distinct regimes of black hole growth in nearby galaxies. The first is associated with galaxies with significant star formation [M*/star formation rate (SFR) ~ a Hubble time] in their central kiloparsec regions, and is characterized by a broad lognormal distribution of accretion rates peaked at a few per cent of the Eddington limit. In this regime, the Eddington ratio distribution is independent of the mass of the black hole and shows little dependence on the central stellar population of the galaxy. The second regime is associated with galaxies with old central stellar populations (M*/SFR >> a Hubble time), and is characterized by a power-law distribution function of Eddington ratios. In this regime, the time-averaged mass accretion rate on to black holes is proportional to the mass of stars in the galaxy bulge, with a constant of proportionality that depends on the mean stellar age of the stars. This result is once again independent of black hole mass. We show that both the slope of the power law and the decrease in the accretion rate on to black holes in old galaxies are consistent with population synthesis model predictions of the decline in stellar mass loss rates as a function of mean stellar age. Our results lead to a very simple picture of black hole growth in the local Universe. If the supply of cold gas in a galaxy bulge is plentiful, the black hole regulates its own growth at a rate that does not further depend on the properties of the interstellar medium. Once the gas runs out, black hole growth is regulated by the rate at which evolved stars lose their mass.

  12. Snow cover distribution over elevation zones in a mountainous catchment

    NASA Astrophysics Data System (ADS)

    Panagoulia, D.; Panagopoulos, Y.

    2009-04-01

    A good understanding of the elevational distribution of snow cover is necessary to predict the timing and volume of runoff. In a complex mountainous terrain the snow cover distribution within a watershed is highly variable in time and space and is dependent on elevation, slope, aspect, vegetation type, surface roughness, radiation load, and energy exchange at the snow-air interface. Decreases in snowpack due to climate change could disrupt the downstream urban and agricultural water supplies, while increases could lead to seasonal flooding. Solar and longwave radiation are dominant energy inputs driving the ablation process. Turbulent energy exchange at the snow cover surface is important during the snow season. The evaporation of blowing and drifting snow is strongly dependent upon wind speed. Much of the spatial heterogeneity of snow cover is the result of snow redistribution by wind. Elevation is important in determining temperature and precipitation gradients along hillslopes, while the temperature gradients determine where precipitation falls as rain and snow and contribute to variable melt rates within the hillslope. Under these premises, the snow accumulation and ablation (SAA) model of the US National Weather Service (US NWS) was applied to simulate the snow cover extent over elevation zones of a mountainous catchment (the Mesochora catchment in Western-Central Greece), also taking into account the indirectly included processes of sublimation, interception, and snow redistribution. The catchment hydrology is controlled by snowfall and snowmelt, and the simulated discharge was computed from the soil moisture accounting (SMA) model of the US NWS and compared to the measured discharge. The elevationally distributed snow cover extent presented different patterns with different times of maximization, extinction, and return during the year, producing different timing of discharge, which is a crucial factor for the control and management of water resources systems.

  13. Expression for travel time based on diffusive wave theory: applicability and considerations

    NASA Astrophysics Data System (ADS)

    Aguilera, J. C.; Escauriaza, C. R.; Passalacqua, P.; Gironas, J. A.

    2017-12-01

    Prediction of hydrological response is of utmost importance when dealing with urban planning, risk assessment, or water resources management issues. With the advent of climate change, special care must be taken with respect to variations in rainfall and runoff due to rising temperature averages. Nowadays, while typical workstations have adequate power to run distributed routing hydrological models, it is still not enough for modeling on-the-fly, a crucial ability in a natural disaster context, where rapid decisions must be made. Semi-distributed travel time models, which compute a watershed's hydrograph without explicitly solving the full shallow water equations, appear as an attractive approach to rainfall-runoff modeling since, like fully distributed models, they superimpose a grid on the watershed and compute runoff based on cell parameter values. These models are heavily dependent on the travel time expression for an individual cell. Many models make use of expressions based on kinematic wave theory, which is not applicable in cases where watershed storage is important, such as on mild slopes. This work presents a new expression for concentration times in overland flow, based on diffusive wave theory, which considers not only the effects of storage but also the effects of upstream contribution. Setting upstream contribution equal to zero gives an expression consistent with previous work on diffusive wave theory; on the other hand, neglecting storage effects (i.e., diffusion) is shown to be equivalent to kinematic wave theory, currently used in many spatially distributed travel time models. The newly found expression is shown to be dependent on plane discretization, particularly when dealing with very non-kinematic cases. This is shown to be the result of upstream contribution, which gets larger downstream, versus plane length. This result also sheds some light on the limits of applicability of the expression: when a certain kinematic threshold is reached, the expression is no longer valid, and one must fall back to kinematic wave theory, for lack of a better option. This expression could be used for improving currently published spatially distributed travel time models, since they would become applicable in many new cases.

  14. Post heroin dose tissue distribution of 6-monoacetylmorphine (6-MAM) with MALDI imaging.

    PubMed

    Teklezgi, Belin G; Pamreddy, Annapurna; Baijnath, Sooraj; Gopal, Nirmala D; Naicker, Tricia; Kruger, Hendrik G; Govender, Thavendran

    2017-08-01

    Heroin is an illicit opioid drug which is commonly abused and leads to dependence and addiction. Heroin is considered a pro-drug and is rapidly converted to its major active metabolite 6-monoacetylmorphine (6-MAM), which mediates euphoria and reward through the stimulation of opioid receptors in the brain. The aim of this study was to investigate the distribution and localization of 6-MAM in the healthy Sprague Dawley rat brain following intraperitoneal (i.p.) administration of heroin (10 mg/kg), using matrix-assisted laser desorption/ionization mass spectrometric imaging (MALDI-MSI), in combination with quantification via liquid chromatography mass spectrometry (LC-MS/MS). These findings revealed that 6-MAM is present both in plasma and brain tissue with a Tmax of 5 min (2.8 µg/mL) and 15 min (1.1 µg/mL), respectively. MSI analysis of the brain showed high intensities of 6-MAM in the thalamus-hypothalamus and mesocorticolimbic system including areas of the cortex, caudate putamen, and ventral pallidum regions. This finding correlates with the distribution of opioid receptors in the brain, according to the literature. In addition, we report a time-dependent distribution in the levels of 6-MAM, from 1 min with the highest intensity of the drug observed at 15 min, with sparse distribution at 45 min before decreasing at 60 min. This is the first study to use MSI as a brain imaging technique to detect the distribution of a morphine derivative over time in the brain.

  15. Impact melting early in lunar history

    NASA Technical Reports Server (NTRS)

    Lange, M. A.; Ahrens, T. J.

    1979-01-01

    The total amount of impact melt produced during early lunar history is examined in light of theoretically and experimentally determined relations between crater diameter (D) and impact melt volume. The time dependence of the melt production is given by the time dependent impact rate as derived from cratering statistics for two different crater-size classes. Results show that small scale cratering (D less than or equal to 30 km) leads to melt volumes which fit selected observations specifying the amount of impact melt contained in the lunar regolith and in craters with diameters less than 10 km. Larger craters (D greater than 30 km) are capable of forming the abundant impact melt breccias found on the lunar surface. The group of large craters (D greater than 30 km) produces nearly 10 times as much impact melt as all the smaller craters, and thus, the large impacts dominate the modification of the lunar surface. A contradiction between the distribution of radiometric rock ages and a model of exponentially decreasing cratering rate going back to 4.5 b.y. is reflected in uncertainty in the distribution of impact melt as a function of time on the moon.

  16. The decay process of rotating unstable systems through the passage time distribution

    NASA Astrophysics Data System (ADS)

    Jiménez-Aquino, J. I.; Cortés, Emilio; Aquino, N.

    2001-05-01

    In this work we propose a general scheme to characterize, through the passage time distribution, the decay process of rotational unstable systems in the presence of external forces of large amplitude. The formalism starts with a matricial Langevin type equation formulated in the context of two dynamical representations given, respectively, by the vectors x and y, both related by a time dependent rotation matrix. The transformation preserves the norm of the vector and decouples the set of dynamical equations in the transformed space y. We study the dynamical characterization of the systems of two variables and show that the statistical properties of the passage time distribution are essentially equivalent in both dynamics. The theory is applied to the laser system studied in Dellunde et al. (Opt. Commun. 102 (1993) 277), where the effect of large injected signals on the transient dynamics of the laser has been studied in terms of complex electric field. The analytical results are compared with numerical simulation.

  17. Update rules and interevent time distributions: slow ordering versus no ordering in the voter model.

    PubMed

    Fernández-Gracia, J; Eguíluz, V M; San Miguel, M

    2011-07-01

    We introduce a general methodology of update rules accounting for arbitrary interevent time (IET) distributions in simulations of interacting agents. We consider in particular update rules that depend on the state of the agent, so that the update becomes part of the dynamical model. As an illustration we consider the voter model in fully connected, random, and scale-free networks with an activation probability inversely proportional to the time since the last action, where an action can be an update attempt (an exogenous update) or a change of state (an endogenous update). We find that in the thermodynamic limit, at variance with standard updates and the exogenous update, the system orders slowly for the endogenous update. The approach to the absorbing state is characterized by a power-law decay of the density of interfaces, and the mean time to reach the absorbing state might not be well defined. The IET distributions resulting from both update schemes show power-law tails.
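
    A stripped-down sketch of the endogenous-update rule on a complete graph is given below; the specific activation probability 1/(elapsed time + 1), the system size, and the interface-density diagnostic are simplifying assumptions, not the paper's exact implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N = 200
    state = rng.integers(0, 2, N)        # two opinions
    last_change = np.zeros(N)            # endogenous rule: clock of last state change
    t, t_max = 0.0, 5_000.0

    while t < t_max and 0 < state.sum() < N:
        t += 1.0 / N                     # on average N update attempts per unit time
        i = rng.integers(N)
        if rng.random() < 1.0 / (t - last_change[i] + 1.0):   # persistence grows with inactivity
            j = rng.integers(N)          # random neighbour on the complete graph
            if state[j] != state[i]:
                state[i] = state[j]
                last_change[i] = t       # clock resets only on an actual change of state

    rho = np.mean(state[:, None] != state[None, :])   # density of disagreeing (active) pairs
    print(f"t = {t:.0f}, interface density = {rho:.3f}")
    ```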

  18. SU-G-201-17: Verification of Dose Distributions From High-Dose-Rate Brachytherapy Ir-192 Source Using a Multiple-Array-Diode-Detector (MapCheck2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harpool, K; De La Fuente Herman, T; Ahmad, S

    Purpose: To investigate quantitatively the accuracy of dose distributions for the Ir-192 high-dose-rate (HDR) brachytherapy source calculated by the Brachytherapy-Planning system (BPS) and measured using a multiple-array-diode-detector in a heterogeneous medium. Methods: A two-dimensional diode-array-detector system (MapCheck2) was scanned with a catheter and the CT-images were loaded into the Varian-Brachytherapy-Planning which uses TG-43-formalism for dose calculation. Treatment plans were calculated for different combinations of one dwell-position and varying irradiation times and different-dwell positions and fixed irradiation time with the source placed 12mm from the diode-array plane. The calculated dose distributions were compared to the measured doses with MapCheck2 delivered by an Ir-192 source from a Nucletron-Microselectron-V2-remote-after-loader. The linearity of MapCheck2 was tested for a range of dwell-times (2–600 seconds). The angular effect was tested with 30 seconds irradiation delivered to the central-diode and then moving the source away in increments of 10mm. Results: Large differences were found between calculated and measured dose distributions. These differences are mainly due to absence of heterogeneity in the dose calculation and diode-artifacts in the measurements. The dose differences between measured and calculated due to heterogeneity ranged from 5%–12% depending on the position of the source relative to the diodes in MapCheck2 and different heterogeneities in the beam path. The linearity test of the diode-detector showed 3.98%, 2.61%, and 2.27% over-response at short irradiation times of 2, 5, and 10 seconds, respectively, and within 2% for 20 to 600 seconds (p-value=0.05), which depends strongly on MapCheck2 noise. The angular dependency was more pronounced at acute angles, ranging up to 34% at 5.7 degrees. Conclusion: Large deviations between measured and calculated dose distributions for HDR-brachytherapy with Ir-192 may be improved when considering medium heterogeneity and dose-artifact of the diodes. This study demonstrates that multiple-array-diode-detectors provide a practical and accurate dosimeter to verify doses delivered from the brachytherapy Ir-192 source.

  19. The Distribution of Chromosomal Aberrations in Human Cells Predicted by a Generalized Time-Dependent Model of Radiation-Induced Formation of Aberrations

    NASA Technical Reports Server (NTRS)

    Ponomarev, Artem L.; George, K.; Cucinotta, F. A.

    2011-01-01

    New experimental data show how chromosomal aberrations for low- and high-LET radiation are dependent on DSB repair deficiencies in wild-type, AT and NBS cells. We simulated the development of chromosomal aberrations in these cell lines in a stochastic track-structure-dependent model, in which different cells have different kinetics of DSB repair. We updated a previously formulated model of chromosomal aberrations, which was based on a stochastic Monte Carlo approach, to consider the time-dependence of DSB rejoining. The previous version of the model had an assumption that all DSBs would rejoin, and therefore we called it a time-independent model. The chromosomal-aberrations model takes into account the DNA and track structure for low- and high-LET radiations, and provides an explanation and prediction of the statistics of rare and more complex aberrations. We compared the program-simulated kinetics of DSB rejoining to the experimentally-derived bimodal exponential curves of the DSB kinetics. We scored the formation of translocations, dicentrics, acentric and centric rings, deletions, and inversions. The fraction of DSBs participating in aberrations was studied in relation to the rejoining time. Comparisons of simulated dose dependence for simple aberrations to the experimental dose-dependence for HF19, AT and NBS cells will be made.

  20. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    PubMed

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials, followed by longer episodes of silence. Here we investigate non-renewal processes with the inter-spike interval distribution model that incorporates spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to both heavy tailed or narrow coincidence distribution. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.

  1. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events

    PubMed Central

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials, followed by longer episodes of silence. Here we investigate non-renewal processes with the inter-spike interval distribution model that incorporates spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to both heavy tailed or narrow coincidence distribution. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate. PMID:28066225
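
    The core point, that the width of the coincidence-count distribution depends on the autostructure of the spike trains, can be illustrated with a simplified Monte Carlo (not the paper's inter-spike-interval model): compare Poisson trains with gamma-renewal trains of the same mean rate. The rate, duration, bin width, and gamma shape below are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    rate, T, bin_w, n_mc = 20.0, 10.0, 0.005, 2000   # Hz, s, s, Monte Carlo repetitions

    def spike_train(kind):
        """Spike times from a renewal process with mean rate `rate`."""
        n = int(3 * rate * T)
        if kind == "poisson":
            isi = rng.exponential(1.0 / rate, n)
        else:                                        # gamma renewal (shape 4): more regular firing
            isi = rng.gamma(4.0, 1.0 / (4.0 * rate), n)
        t = np.cumsum(isi)
        return t[t < T]

    def coincidences(t1, t2):
        """Number of bins in which both trains fire at least once."""
        edges = np.arange(0.0, T + bin_w, bin_w)
        c1, _ = np.histogram(t1, edges)
        c2, _ = np.histogram(t2, edges)
        return np.sum((c1 > 0) & (c2 > 0))

    for kind in ("poisson", "gamma"):
        counts = [coincidences(spike_train(kind), spike_train(kind)) for _ in range(n_mc)]
        print(kind, "mean =", round(float(np.mean(counts)), 2), "std =", round(float(np.std(counts)), 2))
    ```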

  2. QPROP: A Schrödinger-solver for intense laser atom interaction

    NASA Astrophysics Data System (ADS)

    Bauer, Dieter; Koval, Peter

    2006-03-01

    The QPROP package is presented. QPROP has been developed to study laser-atom interaction in the nonperturbative regime where nonlinear phenomena such as above-threshold ionization, high order harmonic generation, and dynamic stabilization are known to occur. In the nonrelativistic regime and within the single active electron approximation, these phenomena can be studied with QPROP in the most rigorous way by solving the time-dependent Schrödinger equation in three spatial dimensions. Because QPROP is optimized for the study of quantum systems that are spherically symmetric in their initial, unperturbed configuration, all wavefunctions are expanded in spherical harmonics. Time-propagation of the wavefunctions is performed using a split-operator approach. Photoelectron spectra are calculated employing a window-operator technique. Besides the solution of the time-dependent Schrödinger equation in single active electron approximation, QPROP allows one to study many-electron systems via the solution of the time-dependent Kohn-Sham equations.
    Program summary
    Program title: QPROP
    Catalogue number: ADXB
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXB
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer on which program has been tested: PC Pentium IV, Athlon
    Operating system: Linux
    Program language used: C++
    Memory required to execute with typical data: Memory requirements depend on the number of propagated orbitals and on the size of the orbitals. For instance, time-propagation of a hydrogenic wavefunction in the perturbative regime requires about 64 KB RAM (4 radial orbitals with 1000 grid points). Propagation in the strongly nonperturbative regime providing energy spectra up to high energies may need 60 radial orbitals, each with 30000 grid points, i.e. about 30 MB. Examples are given in the article.
    No. of bits in a word: Real and complex valued numbers of double precision are used
    No. of lines in distributed program, including test data, etc.: 69 995
    No. of bytes in distributed program, including test data, etc.: 2 927 567
    Peripheral used: Disk for input-output, terminal for interaction with the user
    CPU time required to execute test data: Execution time depends on the size of the propagated orbitals and the number of time-steps
    Distribution format: tar.gz
    Nature of the physical problem: Atoms put into the strong field of modern lasers display a wealth of novel phenomena that are not accessible to conventional perturbation theory where the external field is considered small as compared to inneratomic forces. Hence, the full ab initio solution of the time-dependent Schrödinger equation is desirable but in full dimensionality only feasible for no more than two (active) electrons. If many-electron effects come into play or effective ground state potentials are needed, (time-dependent) density functional theory may be employed. QPROP aims at providing tools for (i) the time-propagation of the wavefunction according to the time-dependent Schrödinger equation, (ii) the time-propagation of Kohn-Sham orbitals according to the time-dependent Kohn-Sham equations, and (iii) the energy-analysis of the final one-electron wavefunction (or the Kohn-Sham orbitals).
    Method of solution: An expansion of the wavefunction in spherical harmonics leads to a coupled set of equations for the radial wavefunctions. These radial wavefunctions are propagated using a split-operator technique and the Crank-Nicolson approximation for the short-time propagator.
    The initial ground state is obtained via imaginary time-propagation for spherically symmetric (but otherwise arbitrary) effective potentials. Excited states can be obtained through the combination of imaginary time-propagation and orthogonalization. For the Kohn-Sham scheme a multipole expansion of the effective potential is employed. Wavefunctions can be analyzed using the window-operator technique, facilitating the calculation of electron spectra, either angular-resolved or integrated.
    Restrictions onto the complexity of the problem: The coupling of the atom to the external field is treated in dipole approximation. The time-dependent Schrödinger solver is restricted to the treatment of a single active electron. As concerns the time-dependent density functional mode of QPROP, the Hartree-potential (accounting for the classical electron-electron repulsion) is expanded up to the quadrupole. Only the monopole term of the Krieger-Li-Iafrate exchange potential is currently implemented. As in any nontrivial optimization problem, convergence to the optimal many-electron state (i.e. the ground state) is not automatically guaranteed.
    External routines/libraries used: The program uses the well established libraries BLAS, LAPACK, and F2C.
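
    The Crank-Nicolson short-time propagator mentioned in the method of solution can be illustrated in one dimension (QPROP itself propagates coupled radial orbitals in a spherical-harmonics basis); the grid, time step, soft-core potential, and wave packet below are arbitrary choices for the sketch.

    ```python
    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import splu

    # Grid and a Gaussian wave packet in a soft-core potential (atomic units)
    n, dx, dt = 2000, 0.1, 0.02
    x = (np.arange(n) - n // 2) * dx
    V = -1.0 / np.sqrt(x**2 + 2.0)                        # soft-core "atomic" potential
    psi = np.exp(-0.5 * (x + 20.0) ** 2 + 1j * 0.5 * x)   # incoming Gaussian packet
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

    # Hamiltonian H = -(1/2) d^2/dx^2 + V as a tridiagonal matrix
    lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
    H = -0.5 * lap + diags(V)

    # Crank-Nicolson step: (1 + i H dt/2) psi_new = (1 - i H dt/2) psi_old
    A = (identity(n) + 0.5j * dt * H).tocsc()
    B = (identity(n) - 0.5j * dt * H).tocsc()
    solver = splu(A)

    for _ in range(500):
        psi = solver.solve(B @ psi)

    print("norm after propagation:", np.sum(np.abs(psi) ** 2) * dx)   # Crank-Nicolson conserves the norm
    ```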

  3. Autonomous Decentralized Voltage Profile Control of Super Distributed Energy System using Multi-agent Technology

    NASA Astrophysics Data System (ADS)

    Tsuji, Takao; Hara, Ryoichi; Oyama, Tsutomu; Yasuda, Keiichiro

    A super distributed energy system is a future energy system in which a large part of the demand is supplied by a huge number of distributed generators. At some times a node in the super distributed energy system behaves as a load, while at other times it behaves as a generator; the characteristic of each node depends on the customers' decisions. In such a situation, it is very difficult to regulate the voltage profile over the system due to the complexity of power flows. This paper proposes a novel control method for distributed generators that achieves autonomous decentralized voltage profile regulation by using multi-agent technology. The proposed multi-agent system employs two types of agents: a control agent and a mobile agent. Control agents generate or consume reactive power to regulate the voltage profile of neighboring nodes, and mobile agents transmit the information necessary for VQ-control among the control agents. The proposed control method is tested through numerical simulations.

  4. [Hazard function and life table: an introduction to the failure time analysis].

    PubMed

    Matsushita, K; Inaba, H

    1987-04-01

    Failure time analysis has become popular in demographic studies. It can be viewed as a part of regression analysis with limited dependent variables as well as a special case of event history analysis and multistate demography. The idea of hazard function and failure time analysis, however, has not been properly introduced to nor commonly discussed by demographers in Japan. The concept of hazard function in comparison with life tables is briefly described, where the force of mortality is interchangeable with the hazard rate. The basic idea of failure time analysis is summarized for the cases of exponential distribution, normal distribution, and proportional hazard models. The multiple decrement life table is also introduced as an example of lifetime data analysis with cause-specific hazard rates.
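
    A small numerical illustration of the point that the life-table force of mortality and the hazard rate are the same quantity, under assumed exponentially distributed failure times with a constant hazard (the rate and interval width below are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    lam = 0.05                                   # true constant hazard (exponential failure times)
    t_fail = rng.exponential(1.0 / lam, 100_000)

    edges = np.arange(0.0, 60.0, 5.0)            # life-table age intervals
    for lo, hi in zip(edges[:-1], edges[1:]):
        at_risk = np.sum(t_fail >= lo)
        deaths = np.sum((t_fail >= lo) & (t_fail < hi))
        # Actuarial life-table estimate of the force of mortality in the interval
        hazard = deaths / ((at_risk - deaths / 2.0) * (hi - lo))
        print(f"[{lo:4.0f},{hi:4.0f}): h ~ {hazard:.4f}   (true lambda = {lam})")
    ```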

  5. Fourier-transform-based model for carrier transport in semiconductor heterostructures: Longitudinal optical phonon scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lü, X.; Schrottke, L.; Grahn, H. T.

    We present scattering rates for electrons at longitudinal optical phonons within a model completely formulated in the Fourier domain. The total intersubband scattering rates are obtained by averaging over the intrasubband electron distributions. The rates consist of the Fourier components of the electron wave functions and a contribution depending only on the intersubband energies and the intrasubband carrier distributions. The energy-dependent part can be reproduced by a rational function, which allows for the separation of the scattering rates into a dipole-like contribution, an overlap-like contribution, and a contribution which can be neglected for low and intermediate carrier densities of the initial subband. For a balance between accuracy and computation time, the number of Fourier components can be adjusted. This approach facilitates an efficient design of complex heterostructures with realistic, temperature- and carrier density-dependent rates.

  6. Ellipticity dependence of multiple ionisation of methyl iodide clusters using a 532 nm nanosecond laser

    NASA Astrophysics Data System (ADS)

    Tang, Bin; Zhao, Wuduo; Wang, Weiguo; Hua, Lei; Chen, Ping; Hou, Keyong; Huang, Yunguang; Li, Haiyang

    2016-03-01

    The dependence of multiply charged ions on laser ellipticity in methyl iodide clusters with a 532 nm nanosecond laser was measured using a time-of-flight mass spectrometer. The intensities of multiply charged ions Iq+ (q = 2-4) with circularly polarised laser pulses were clearly higher than those with linearly polarised laser pulses, whereas the trend for the singly charged ion I+ was the opposite. The dependences of the ions on the optical polarisation state were investigated, and flower-petal and square distributions were observed for singly charged ions (I+, C+) and multiply charged ions (I2+, I3+, I4+, C2+), respectively. A theoretical calculation was also proposed to simulate the distributions of ions, and the theoretical results fitted well with the experimental ones. It indicated that the high multiphoton ionisation probability in the initial stage would result in the disintegration of big clusters into small ones and suppress the production of multiply charged ions.

  7. Temporal coding of reward-guided choice in the posterior parietal cortex

    PubMed Central

    Hawellek, David J.; Wong, Yan T.; Pesaran, Bijan

    2016-01-01

    Making a decision involves computations across distributed cortical and subcortical networks. How such distributed processing is performed remains unclear. We test how the encoding of choice in a key decision-making node, the posterior parietal cortex (PPC), depends on the temporal structure of the surrounding population activity. We recorded spiking and local field potential (LFP) activity in the PPC while two rhesus macaques performed a decision-making task. We quantified the mutual information that neurons carried about an upcoming choice and its dependence on LFP activity. The spiking of PPC neurons was correlated with LFP phases at three distinct time scales in the theta, beta, and gamma frequency bands. Importantly, activity at these time scales encoded upcoming decisions differently. Choice information contained in neural firing varied with the phase of beta and gamma activity. For gamma activity, maximum choice information occurred at the same phase as the maximum spike count. However, for beta activity, choice information and spike count were greatest at different phases. In contrast, theta activity did not modulate the encoding properties of PPC units directly but was correlated with beta and gamma activity through cross-frequency coupling. We propose that the relative timing of local spiking and choice information reveals temporal reference frames for computations in either local or large-scale decision networks. Differences between the timing of task information and activity patterns may be a general signature of distributed processing across large-scale networks. PMID:27821752

  8. On geological interpretations of crystal size distributions: Constant vs. proportionate growth

    USGS Publications Warehouse

    Eberl, D.D.; Kile, D.E.; Drits, V.A.

    2002-01-01

    Geological interpretations of crystal size distributions (CSDs) depend on understanding the crystal growth laws that generated the distributions. Most descriptions of crystal growth, including a population-balance modeling equation that is widely used in petrology, assume that crystal growth rates at any particular time are identical for all crystals, and, therefore, independent of crystal size. This type of growth under constant conditions can be modeled by adding a constant length to the diameter of each crystal for each time step. This growth equation is unlikely to be correct for most mineral systems because it neither generates nor maintains the shapes of lognormal CSDs, which are among the most common types of CSDs observed in rocks. In an alternative approach, size-dependent (proportionate) growth is modeled approximately by multiplying the size of each crystal by a factor, an operation that maintains CSD shape and variance, and which is in accord with calcite growth experiments. The latter growth law can be obtained during supply controlled growth using a modified version of the Law of Proportionate Effect (LPE), an equation that simulates the reaction path followed by a CSD shape as mean size increases.
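
    The contrast between the two growth laws can be made concrete with a toy simulation (not the paper's LPE model in detail): starting from a lognormal CSD, constant growth adds a fixed length per step, while proportionate growth multiplies each crystal by a random factor; only the latter keeps ln(size) approximately normal. The step sizes and factors below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Start from a lognormal crystal size distribution (CSD)
    sizes_const = rng.lognormal(mean=1.0, sigma=0.5, size=50_000)
    sizes_prop = sizes_const.copy()

    for _ in range(200):                                   # growth steps
        sizes_const += 0.05                                # constant growth: add a fixed length
        sizes_prop *= np.exp(rng.normal(0.01, 0.005, sizes_prop.size))   # proportionate (LPE-like) growth

    for name, s in (("constant", sizes_const), ("proportionate", sizes_prop)):
        logs = np.log(s)
        skew = ((logs - logs.mean()) ** 3).mean() / logs.std() ** 3
        print(f"{name:14s}: var(ln L) = {logs.var():.3f}, skew(ln L) = {skew:+.2f}")
    ```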

  9. A two-step method for developing a control rod program for boiling water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1992-01-01

    This paper reports on a two-step method that is established for the generation of a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution in core depletion. In the new method, the BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed by using the Haling profiles generated at step 1 as a target. The new method is implemented in a computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test case, the new method achieved a gain in cycle length over a time-invariant target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spectral shift.

  10. How electronic dynamics with Pauli exclusion produces Fermi-Dirac statistics.

    PubMed

    Nguyen, Triet S; Nanguneri, Ravindra; Parkhill, John

    2015-04-07

    It is important that any dynamics method approaches the correct population distribution at long times. In this paper, we derive a one-body reduced density matrix dynamics for electrons in energetic contact with a bath. We obtain a remarkable equation of motion which shows that in order to reach equilibrium properly, rates of electron transitions depend on the density matrix. Even though the bath drives the electrons towards a Boltzmann distribution, hole blocking factors in our equation of motion cause the electronic populations to relax to a Fermi-Dirac distribution. These factors are an old concept, but we show how they can be derived with a combination of time-dependent perturbation theory and the extended normal ordering of Mukherjee and Kutzelnigg for a general electronic state. The resulting non-equilibrium kinetic equations generalize the usual Redfield theory to many-electron systems, while ensuring that the orbital occupations remain between zero and one. In numerical applications of our equations, we show that relaxation rates of molecules are not constant because of the blocking effect. Other applications to model atomic chains are also presented which highlight the importance of treating both dephasing and relaxation. Finally, we show how the bath localizes the electron density matrix.
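
    The blocking-factor kinetic equation referred to here can be sketched (this is an illustration of the idea, not the authors' full reduced-density-matrix derivation) as dn_i/dt = sum_j [W_ij n_j (1 - n_i) - W_ji n_i (1 - n_j)] with detailed-balance rates; integrating it numerically shows the occupations relaxing to a Fermi-Dirac rather than a Boltzmann distribution. The energies, temperature, and rate prefactors below are arbitrary.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    eps = np.linspace(0.0, 2.0, 8)            # orbital energies (arbitrary units)
    kT, mu = 0.25, 1.0                        # mu = 1.0 matches half filling for these symmetric energies
    # Detailed balance: W_ij / W_ji = exp(-(eps_i - eps_j) / kT)
    W = np.exp(-(eps[:, None] - eps[None, :]) / (2.0 * kT))

    def rhs(t, n):
        gain = (W * n[None, :] * (1.0 - n[:, None])).sum(axis=1)     # j -> i, blocked by (1 - n_i)
        loss = (W.T * n[:, None] * (1.0 - n[None, :])).sum(axis=1)   # i -> j, blocked by (1 - n_j)
        return gain - loss

    sol = solve_ivp(rhs, (0.0, 200.0), np.full(eps.size, 0.5), rtol=1e-8)  # start at half filling
    fermi_dirac = 1.0 / (1.0 + np.exp((eps - mu) / kT))
    print("steady state:", np.round(sol.y[:, -1], 3))
    print("Fermi-Dirac :", np.round(fermi_dirac, 3))
    ```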

  11. Quantum dynamical simulations of local field enhancement in metal nanoparticles.

    PubMed

    Negre, Christian F A; Perassi, Eduardo M; Coronado, Eduardo A; Sánchez, Cristián G

    2013-03-27

    Field enhancements (Γ) around small Ag nanoparticles (NPs) are calculated using a quantum dynamical simulation formalism and the results are compared with electrodynamic simulations using the discrete dipole approximation (DDA) in order to address the important issue of the intrinsic atomistic structure of NPs. Quite remarkably, in both quantum and classical approaches the highest values of Γ are located in the same regions around single NPs. However, by introducing a complete atomistic description of the metallic NPs in optical simulations, a different pattern of the Γ distribution is obtained. Knowing the correct pattern of the Γ distribution around NPs is crucial for understanding the spectroscopic features of molecules inside hot spots. The enhancement produced by surface plasmon coupling is studied by using both approaches in NP dimers for different inter-particle distances. The results show that the trend of the variation of Γ versus inter-particle distance is different for classical and quantum simulations. This difference is explained in terms of a charge transfer mechanism that cannot be obtained with classical electrodynamics. Finally, time dependent distribution of the enhancement factor is simulated by introducing a time dependent field perturbation into the Hamiltonian, allowing an assessment of the localized surface plasmon resonance quantum dynamics.

  12. Thermal ion heating in the vicinity of the plasmapause: A Dynamics Explorer guest investigation

    NASA Technical Reports Server (NTRS)

    Comfort, R. H.

    1986-01-01

    The ion thermal structure of the plasmasphere was investigated in a series of experiments. It appears that energy may be generally available to ions and electrons in the vicinity of the plasmapause from Coulomb interactions between ambient thermal plasma and low energy ring current and suprathermal ions, particularly O+. The amount of energy transferred depends on the densities and energies of each of the components. The spatial distribution of heating in turn depends critically on the spatial distribution of the different populations, especially on the density gradients. The spatial distribution of the thermal plasma is found to vary significantly on a diurnal time scale and is complicated by the plasmasphere erosion and refilling processes associated with magnetic activity and its aftermath. Thermal ion composition also appears to be influenced by the heating taking place, often increasing the heavy ion population in the vicinity of the plasmapause. The observations of equatorial heating near the plasmapause in the presence of equatorial noise also raise the likelihood of a wave source of energy. It is not unreasonable to expect that both particle and wave heat sources are significant, although not necessarily at the same times and places.

  13. Maximum entropy principle for transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilich, F.; Da Silva, R.

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining an a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
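
    For contrast with the dependence formulation, the standard constrained entropy-maximizing model mentioned above reduces to the familiar doubly constrained gravity model, which can be solved by iterative balancing; the productions, attractions, costs, and cost-sensitivity parameter below are illustrative only.

    ```python
    import numpy as np

    # Trip productions O_i, attractions D_j, and travel costs c_ij (illustrative numbers)
    O = np.array([400.0, 300.0, 300.0])
    D = np.array([350.0, 450.0, 200.0])
    c = np.array([[ 5.0, 10.0, 15.0],
                  [10.0,  5.0, 10.0],
                  [15.0, 10.0,  5.0]])
    beta = 0.15                                    # cost-sensitivity parameter

    # Entropy-maximizing solution: T_ij = a_i b_j O_i D_j exp(-beta * c_ij)
    K = np.exp(-beta * c)
    a = np.ones(3)
    for _ in range(200):                           # iterative proportional fitting of balancing factors
        b = 1.0 / (K.T @ (a * O))
        a = 1.0 / (K @ (b * D))
    T = (a * O)[:, None] * (b * D)[None, :] * K

    print(np.round(T, 1))
    print("row sums:", np.round(T.sum(axis=1), 1), " col sums:", np.round(T.sum(axis=0), 1))
    ```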

  14. Diode-Laser Absorption Sensor for Line-of-Sight Gas Temperature Distributions

    NASA Astrophysics Data System (ADS)

    Sanders, Scott T.; Wang, Jian; Jeffries, Jay B.; Hanson, Ronald K.

    2001-08-01

    Line-of-sight diode-laser absorption techniques have been extended to enable temperature measurements in nonuniform-property flows. The sensing strategy for such flows exploits the broad wavelength-scanning abilities (>1.7 nm, ~30 cm^-1) of a vertical cavity surface-emitting laser (VCSEL) to interrogate multiple absorption transitions along a single line of sight. To demonstrate the strategy, a VCSEL-based sensor for oxygen gas temperature distributions was developed. A VCSEL beam was directed through paths containing atmospheric-pressure air with known (and relatively simple) temperature distributions in the 200-700 K range. The VCSEL was scanned over ten transitions in the R branch of the oxygen A band near 760 nm and optionally over six transitions in the P branch. Temperature distribution information can be inferred from these scans because the line strength of each probed transition has a unique temperature dependence; the measurement accuracy and resolution depend on the details of this temperature dependence and on the total number of lines scanned. The performance of the sensing strategy can be optimized and predicted theoretically. Because the sensor exhibits a fast time response (~30 ms) and can be adapted to probe a variety of species over a range of temperatures and pressures, it shows promise for industrial application.

  15. Measurement of the Spatial Distribution of Ultracold Cesium Rydberg Atoms by Time-of-Flight Spectroscopy

    NASA Astrophysics Data System (ADS)

    Li, Jingkui; Zhang, Linjie; Zhang, Hao; Zhao, Jianming; Jia, Suotang

    2015-09-01

    We prepare nS (n = 49) cesium Rydberg atoms by two-photon excitation in a standard magnetooptical trap to obtain the spatial distribution of the Rydberg atoms by measuring the time-of-flight (TOF) spectra in the case of a low Rydberg density. We analyze the time evolution of the ultracold nS Rydberg atoms distribution by changing the delay time of the pulsed ionization field, defined as the duration from the moment of switching off the excitation lasers to the time of switching on the ionization field. TOF spectra of Rydberg atoms are observed as a function of the delay time and initial Rydberg atomic density. The corresponding full widths at half maximum (FWHMs) are obtained by fitting the spectra with a Gaussian profile. The FWHM decreases with increasing delay time at a relatively high Rydberg atom density (>5 × 107/cm3) because of the decreasing Coulomb interaction between released charges during their flight to the detector. The temperature of the cold atoms is deduced from the dependence of the TOF spectra on the delay time under the condition of low Rydberg atom density.

  16. Are Earthquake Clusters/Supercycles Real or Random?

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2016-12-01

    Long records of earthquakes at plate boundaries such as the San Andreas or Cascadia often show that large earthquakes occur in temporal clusters, also termed supercycles, separated by less active intervals. These are intriguing because the boundary is presumably being loaded by steady plate motion. If so, earthquakes resulting from seismic cycles - in which their probability is small shortly after the past one, and then increases with time - should occur quasi-periodically rather than be more frequent in some intervals than others. We are exploring this issue with two approaches. One is to assess whether the clusters result purely by chance from a time-independent process that has no "memory." Thus a future earthquake is equally likely immediately after the past one and much later, so earthquakes can cluster in time. We analyze the agreement between such a model and inter-event times for Parkfield, Pallet Creek, and other records. A useful tool is transformation by the inverse cumulative distribution function, so the inter-event times have a uniform distribution when the memorylessness property holds. The second is via a time-variable model in which earthquake probability increases with time between earthquakes and decreases after an earthquake. The probability of an event increases with time until one happens, after which it decreases, but not to zero. Hence after a long period of quiescence, the probability of an earthquake can remain higher than the long-term average for several cycles. Thus the probability of another earthquake is path dependent, i.e. depends on the prior earthquake history over multiple cycles. Time histories resulting from simulations give clusters with properties similar to those observed. The sequences of earthquakes result from both the model parameters and chance, so two runs with the same parameters look different. The model parameters control the average time between events and the variation of the actual times around this average, so models can be strongly or weakly time-dependent.
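
    The first test described, checking whether a memoryless (time-independent) recurrence model is consistent with an inter-event record, amounts to a probability integral transform: apply the fitted exponential CDF to the inter-event times and test the result for uniformity. The sketch below uses invented inter-event times purely for illustration, not any actual paleoseismic record.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical inter-event times (years) -- illustrative only, not a real record
    iet = np.array([45., 60., 130., 70., 55., 210., 65., 50., 80., 190., 60., 75.])

    lam = 1.0 / iet.mean()                 # MLE of the rate for a memoryless (exponential) model
    u = 1.0 - np.exp(-lam * iet)           # probability integral transform: ~uniform if memoryless
    ks = stats.kstest(u, "uniform")
    print(f"KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
    # A small p-value would argue against a purely time-independent (memoryless) recurrence model.
    ```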

  17. Minority games with score-dependent and agent-dependent payoffs

    NASA Astrophysics Data System (ADS)

    Ren, F.; Zheng, B.; Qiu, T.; Trimper, S.

    2006-10-01

    Score-dependent and agent-dependent payoffs of the strategies are introduced into the standard minority game. The intrinsic periodicity is consequently removed, and the stylized facts arise, such as long-range volatility correlations and “fat tails” in the distribution of the returns. The agent dependence of the payoffs is essential in producing the long-range volatility correlations. The new payoffs lead to a better performance in the dynamic behavior nonlocal in time, and can coexist with the inactive strategy. We also observe that the variance σ^2/N is significantly reduced; thus the efficiency of the system is distinctly improved. Based on this observation, we give a qualitative explanation for the long-range volatility correlations.

  18. Modeling chloride transport using travel time distributions at Plynlimon, Wales

    NASA Astrophysics Data System (ADS)

    Benettin, Paolo; Kirchner, James W.; Rinaldo, Andrea; Botter, Gianluca

    2015-05-01

    Here we present a theoretical interpretation of high-frequency, high-quality tracer time series from the Hafren catchment at Plynlimon in mid-Wales. We make use of the formulation of transport by travel time distributions to model chloride transport originating from atmospheric deposition and compute catchment-scale travel time distributions. The relevance of the approach lies in the explanatory power of the chosen tools, particularly to highlight hydrologic processes otherwise clouded by the integrated nature of the measured outflux signal. The analysis reveals the key role of residual storages that are poorly visible in the hydrological response, but are shown to strongly affect water quality dynamics. Our calibrated model reproduces the data with significant accuracy. A detailed representation of catchment-scale travel time distributions has been derived, including the time evolution of the overall dispersion processes (which can be expressed in terms of time-varying storage sampling functions). Mean computed travel times span a broad range of values (from 80 to 800 days) depending on the catchment state. Results also suggest that, on average, discharge waters are younger than storage water. The model proves able to capture high-frequency fluctuations in the measured chloride concentrations, which are broadly explained by the sharp transition between groundwaters and faster flows originating from topsoil layers.

  19. Optical holography applications for the zero-g Atmospheric Cloud Physics Laboratory

    NASA Technical Reports Server (NTRS)

    Kurtz, R. L.

    1974-01-01

    A complete description of holography is provided, both for the time-dependent case of moving scene holography and for the time-independent case of stationary holography. Further, a specific holographic arrangement is proposed for application to the detection of particle size distribution in an atmospheric simulation cloud chamber. In this chamber particle growth rate is investigated; therefore, the proposed holographic system must capture continuous particle motion in real time. Such a system is described.

  20. Finite element calculations of the time dependent thermal fluxes in the laser-heated diamond anvil cell

    NASA Astrophysics Data System (ADS)

    Montoya, Javier A.; Goncharov, Alexander F.

    2012-06-01

    The time-dependent temperature distribution in the laser-heated diamond anvil cell (DAC) is examined using finite element simulations. Calculations are carried out for the practically important case of a surface-absorbing metallic plate (coupler) surrounded by a thermally insulating transparent medium. The time scales of the heat transfer in the DAC cavity are found to be typically on the order of tens of microseconds depending on the geometrical and thermochemical parameters of the constituent materials. The use of much shorter laser pulses (e.g., on the order of tens of nanoseconds) creates sharp radial temperature gradients, which result in a very intense and abrupt axial conductive heat transfer that exceeds the radiative heat transfer by several orders of magnitude in the practically usable temperature range (<12 000 K). In contrast, the use of laser pulses with several μs duration provides sufficiently uniform spatial heating conditions suitable for studying the bulk sample. The effect of the latent heat of melting on the temperature distribution has been examined in the case of iron and hydrogen for both pulsed and continuous laser heating. The observed anomalies in temperature-laser power dependencies cannot be due to latent heat effects only. Finally, we examine the applicability of a modification to the plate geometry Ångström method for measurements of the thermal diffusivity in the DAC. The calculations show substantial effects of the thermochemical parameters of the insulating medium on the amplitude change and phase shift between the surface temperature variations of the front and back of the sample, which makes this method dependent on the precise knowledge of the properties of the medium.

  1. Probability distribution of financial returns in a model of multiplicative Brownian motion with stochastic diffusion coefficient

    NASA Astrophysics Data System (ADS)

    Silva, Antonio

    2005-03-01

It is well-known that the mathematical theory of Brownian motion was first developed in the Ph.D. thesis of Louis Bachelier for the French stock market before Einstein [1]. In Ref. [2] we studied the so-called Heston model, where the stock-price dynamics is governed by multiplicative Brownian motion with stochastic diffusion coefficient. We solved the corresponding Fokker-Planck equation exactly and found an analytic formula for the time-dependent probability distribution of stock price changes (returns). The formula interpolates between the exponential (tent-shaped) distribution for short time lags and the Gaussian (parabolic) distribution for long time lags. The theoretical formula agrees very well with the actual stock-market data ranging from the Dow-Jones index [2] to individual companies [3], such as Microsoft, Intel, etc. [1] Louis Bachelier, "Théorie de la spéculation," Annales Scientifiques de l'École Normale Supérieure, III-17:21-86 (1900). [2] A. A. Dragulescu and V. M. Yakovenko, "Probability distribution of returns in the Heston model with stochastic volatility," Quantitative Finance 2, 443-453 (2002); Erratum 3, C15 (2003). [cond-mat/0203046] [3] A. C. Silva, R. E. Prange, and V. M. Yakovenko, "Exponential distribution of financial returns at mesoscopic time lags: a new stylized fact," Physica A 344, 227-235 (2004). [cond-mat/0401225]
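
    A rough Monte Carlo sketch of Heston-type dynamics (log-returns with a CIR stochastic variance), illustrating how the return distribution is more heavy-tailed at short lags than at long lags; parameter values are invented for illustration and are not the fitted values of Refs. [2-3]:

    ```python
    import numpy as np

    # Monte Carlo of Heston-type dynamics: log-return x with variance v following a
    # CIR process (full-truncation Euler).  Parameters are invented for illustration.
    rng = np.random.default_rng(1)
    n_paths, n_steps, dt = 20000, 500, 1.0 / 250          # two "years" of daily steps
    kappa, theta, xi, rho = 4.0, 0.04, 0.5, 0.0           # mean reversion, long-run var, vol-of-vol

    v = np.full(n_paths, theta)
    x = np.zeros((n_paths, n_steps + 1))
    for t in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)
        x[:, t + 1] = x[:, t] - 0.5 * v_pos * dt + np.sqrt(v_pos * dt) * z1
        v = v + kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2

    for lag in (1, 250):                                  # short vs long time lag (in steps)
        r = x[:, -1] - x[:, -1 - lag]
        k = np.mean(((r - r.mean()) / r.std()) ** 4)      # kurtosis: 3 for a Gaussian
        print(f"lag {lag:4d}: std = {r.std():.4f}, kurtosis = {k:.2f}")
    ```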

  2. Time-dependent Ionization in a Steady Flow in an MHD Model of the Solar Corona and Wind

    NASA Astrophysics Data System (ADS)

    Shen, Chengcai; Raymond, John C.; Mikić, Zoran; Linker, Jon A.; Reeves, Katharine K.; Murphy, Nicholas A.

    2017-11-01

    Time-dependent ionization is important for diagnostics of coronal streamers and pseudostreamers. We describe time-dependent ionization calculations for a three-dimensional magnetohydrodynamic (MHD) model of the solar corona and inner heliosphere. We analyze how non-equilibrium ionization (NEI) influences emission from a pseudostreamer during the Whole Sun Month interval (Carrington rotation CR1913, 1996 August 22 to September 18). We use a time-dependent code to calculate NEI states, based on the plasma temperature, density, velocity, and magnetic field in the MHD model, to obtain the synthetic emissivities and predict the intensities of the Lyα, O VI, Mg x, and Si xii emission lines observed by the SOHO/Ultraviolet Coronagraph Spectrometer (UVCS). At low coronal heights, the predicted intensity profiles of both Lyα and O VI lines match UVCS observations well, but the Mg x and Si xii emission are predicted to be too bright. At larger heights, the O VI and Mg x lines are predicted to be brighter for NEI than equilibrium ionization around this pseudostreamer, and Si xii is predicted to be fainter for NEI cases. The differences of predicted UVCS intensities between NEI and equilibrium ionization are around a factor of 2, but neither matches the observed intensity distributions along the full length of the UVCS slit. Variations in elemental abundances in closed field regions due to the gravitational settling and the FIP effect may significantly contribute to the predicted uncertainty. The assumption of Maxwellian electron distributions and errors in the magnetic field on the solar surface may also have notable effects on the mismatch between observations and model predictions.

  3. Relation between germination and mycelium growth of individual fungal spores.

    PubMed

    Gougouli, Maria; Koutsoumanis, Konstantinos P

    2013-02-15

The relation between germination time and lag time of mycelium growth of individual spores was studied by combining microscopic and macroscopic techniques. The radial growth of a large number (100-200) of Penicillium expansum and Aspergillus niger mycelia originating from single spores was monitored macroscopically at isothermal conditions ranging from 0 to 30°C and 10 to 41.5°C, respectively. The radial growth curve for each mycelium was fitted to a linear model for the estimation of mycelium lag time. The results showed that the lag time varied significantly among single spores. The cumulative frequency distributions of the lag times were fitted to the modified Gompertz model and compared with the respective distributions for the germination time, which were obtained microscopically. The distributions of the measured mycelium lag time were found to be similar to the germination time distributions under the same conditions but shifted in time, with the lag times showing a significant delay compared to germination times. A numerical comparison was also performed based on the distribution parameters λ(m) and λ(g), which indicate the time required for the spores to complete the lag phase and to start the germination process, respectively. The relative differences %(λ(m)-λ(g))/λ(m) were not found to be significantly affected by the temperatures tested, with mean values of 72.5±5.1 and 60.7±2.1 for P. expansum and A. niger, respectively. In order to investigate the source of the above difference, a time-lapse microscopy method was developed providing videos of the behavior of a single fungal spore from germination until mycelium formation. The distances of the apexes of the first germ tubes that emerged from the swollen spore were measured in each frame of the videos and these data were expressed as a function of time. The results showed that in the early hyphal development, the measured radii appear to increase exponentially, until a certain time, where growth becomes linear. The two phases of hyphal development can explain the difference between germination and lag time. Since the lag time is estimated from the extrapolation of the regression line of the linear part of the graph only, its value is significantly higher than the germination time, t(G). The relation of germination and lag time was further investigated by comparing their temperature dependence using the Cardinal Model with Inflection. The estimated values of the cardinal parameters (T(min), T(opt), and T(max)) for 1/λ(g) were found to be very close to the respective values for 1/λ(m), indicating similar temperature dependence between them. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Time-dependent London approach: Dissipation due to out-of-core normal excitations by moving vortices

    DOE PAGES

    Kogan, V. G.

    2018-03-19

The dissipative currents due to normal excitations are included in the London description. The resulting time-dependent London equations are solved for a moving vortex and a moving vortex lattice. It is shown that the field distribution of a moving vortex loses its cylindrical symmetry. It experiences contraction that is stronger in the direction of the motion than in the direction normal to the velocity v. The London contribution of normal currents to dissipation is small relative to the Bardeen-Stephen core dissipation at small velocities, but it approaches the latter at high velocities, where this contribution is no longer proportional to v2. Here, to minimize the London contribution to dissipation, the vortex lattice is oriented so as to have one of the unit cell vectors along the velocity. This effect is seen in experiments and predicted within the time-dependent Ginzburg-Landau theory.

  5. Time-dependent London approach: Dissipation due to out-of-core normal excitations by moving vortices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kogan, V. G.

The dissipative currents due to normal excitations are included in the London description. The resulting time-dependent London equations are solved for a moving vortex and a moving vortex lattice. It is shown that the field distribution of a moving vortex loses its cylindrical symmetry. It experiences contraction that is stronger in the direction of the motion than in the direction normal to the velocity v. The London contribution of normal currents to dissipation is small relative to the Bardeen-Stephen core dissipation at small velocities, but it approaches the latter at high velocities, where this contribution is no longer proportional to v2. Here, to minimize the London contribution to dissipation, the vortex lattice is oriented so as to have one of the unit cell vectors along the velocity. This effect is seen in experiments and predicted within the time-dependent Ginzburg-Landau theory.

  6. Time-dependent London approach: Dissipation due to out-of-core normal excitations by moving vortices

    NASA Astrophysics Data System (ADS)

    Kogan, V. G.

    2018-03-01

    The dissipative currents due to normal excitations are included in the London description. The resulting time-dependent London equations are solved for a moving vortex and a moving vortex lattice. It is shown that the field distribution of a moving vortex loses its cylindrical symmetry. It experiences contraction that is stronger in the direction of the motion than in the direction normal to the velocity v . The London contribution of normal currents to dissipation is small relative to the Bardeen-Stephen core dissipation at small velocities, but it approaches the latter at high velocities, where this contribution is no longer proportional to v2. To minimize the London contribution to dissipation, the vortex lattice is oriented so as to have one of the unit cell vectors along the velocity. This effect is seen in experiments and predicted within the time-dependent Ginzburg-Landau theory.

  7. Local time dependence of turbulent magnetic fields in Saturn's magnetodisc

    NASA Astrophysics Data System (ADS)

    Kaminker, V.; Delamere, P. A.; Ng, C. S.; Dennis, T.; Otto, A.; Ma, X.

    2017-04-01

Net plasma transport in magnetodiscs around giant planets is outward. Observations of plasma temperature have shown that the expanding plasma is heated nonadiabatically during this process. Turbulence has been suggested as a source of heating. However, the mechanism and distribution of magnetic fluctuations in giant magnetospheres are poorly understood. In this study we attempt to quantify the radial and local time dependence of fluctuating magnetic field signatures that are suggestive of turbulence, expressing the fluctuations in terms of a plasma heating rate density. In addition, the inferred heating rate density is correlated with magnetic field configurations that include azimuthal bend forward/back and the magnitude of the equatorial normal component of the magnetic field relative to the dipole. We find a significant local time dependence in magnetic fluctuations that is consistent with flux transport triggered in the subsolar and dusk sectors due to magnetodisc reconnection.

  8. Dynamics of entanglement and uncertainty relation in coupled harmonic oscillator system: exact results

    NASA Astrophysics Data System (ADS)

    Park, DaeKil

    2018-06-01

The dynamics of entanglement and the uncertainty relation is explored by solving the time-dependent Schrödinger equation for a coupled harmonic oscillator system analytically when the angular frequencies and coupling constant are arbitrarily time dependent. We derive the spectral and Schmidt decompositions for the vacuum solution. Using the decompositions, we derive the analytical expressions for the von Neumann and Rényi entropies. Making use of the Wigner distribution function defined in phase space, we derive the time dependence of the position-momentum uncertainty relations. To show the dynamics of entanglement and the uncertainty relation graphically, we introduce two toy models and one realistic quenched model. While the dynamics can be conjectured by simple consideration in the toy models, the dynamics in the realistic quenched model is somewhat different from that in the toy models. In particular, the dynamics of entanglement exhibits a pattern similar to the dynamics of the uncertainty parameter in the realistic quenched model.

  9. Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity

    PubMed Central

    Nessler, Bernhard; Pfeiffer, Michael; Buesing, Lars; Maass, Wolfgang

    2013-01-01

    The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex. PMID:23633941
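
    As an illustration of the plasticity rule the abstract builds on, here is a minimal pair-based STDP update with hard weight bounds; the exponential windows and parameter values are generic textbook choices (assumptions), not the specific rule analyzed in the paper:

    ```python
    import numpy as np

    # Minimal pair-based STDP with hard weight bounds.  The exponential windows and
    # parameter values are generic textbook choices, not the paper's specific rule.
    A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes
    tau_plus, tau_minus = 20.0, 20.0  # time constants (ms)
    w_min, w_max = 0.0, 1.0

    def stdp_update(w, t_pre, t_post):
        """Update weight w for a single pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt >= 0:                                    # pre before post -> potentiate
            w += A_plus * np.exp(-dt / tau_plus)
        else:                                          # post before pre -> depress
            w -= A_minus * np.exp(dt / tau_minus)
        return float(np.clip(w, w_min, w_max))

    w = 0.5
    for t_pre, t_post in [(0.0, 5.0), (40.0, 38.0), (80.0, 90.0)]:
        w = stdp_update(w, t_pre, t_post)
        print(f"pair ({t_pre}, {t_post}) -> w = {w:.4f}")
    ```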

  10. The Effect of the Underlying Distribution in Hurst Exponent Estimation

    PubMed Central

    Sánchez, Miguel Ángel; Trinidad, Juan E.; García, José; Fernández, Manuel

    2015-01-01

In this paper, a heavy-tailed distribution approach is considered in order to explore the behavior of actual financial time series. We show that this kind of distribution allows one to properly fit the empirical distribution of the stocks from the S&P500 index. In addition, we explain in detail why the underlying distribution of the random process under study should be taken into account before using its self-similarity exponent as a reliable tool to state whether that financial series displays long-range dependence or not. Finally, we show that, under this model, no stocks from the S&P500 index show persistent memory, whereas some of them do present anti-persistent memory and most of them present no memory at all. PMID:26020942
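
    A small sketch of the point made above: on synthetic heavy-tailed "returns", a Student-t fit is strongly preferred over a Gaussian one; the S&P500 data and the specific heavy-tailed family used in the paper are not reproduced here:

    ```python
    import numpy as np
    from scipy import stats

    # Compare a Gaussian and a heavy-tailed (Student-t) fit on synthetic returns.
    rng = np.random.default_rng(2)
    returns = stats.t.rvs(df=3, scale=0.01, size=5000, random_state=rng)  # fat-tailed sample

    mu, sigma = stats.norm.fit(returns)
    df, loc, scale = stats.t.fit(returns)

    ll_norm = stats.norm.logpdf(returns, mu, sigma).sum()
    ll_t = stats.t.logpdf(returns, df, loc, scale).sum()
    print(f"Gaussian log-likelihood:  {ll_norm:.1f}")
    print(f"Student-t log-likelihood: {ll_t:.1f} (df ~ {df:.2f}) -> heavier tails preferred")
    ```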

  11. Radio pulsar glitches as a state-dependent Poisson process

    NASA Astrophysics Data System (ADS)

    Fulgenzi, W.; Melatos, A.; Hughes, B. D.

    2017-10-01

    Gross-Pitaevskii simulations of vortex avalanches in a neutron star superfluid are limited computationally to ≲102 vortices and ≲102 avalanches, making it hard to study the long-term statistics of radio pulsar glitches in realistically sized systems. Here, an idealized, mean-field model of the observed Gross-Pitaevskii dynamics is presented, in which vortex unpinning is approximated as a state-dependent, compound Poisson process in a single random variable, the spatially averaged crust-superfluid lag. Both the lag-dependent Poisson rate and the conditional distribution of avalanche-driven lag decrements are inputs into the model, which is solved numerically (via Monte Carlo simulations) and analytically (via a master equation). The output statistics are controlled by two dimensionless free parameters: α, the glitch rate at a reference lag, multiplied by the critical lag for unpinning, divided by the spin-down rate; and β, the minimum fraction of the lag that can be restored by a glitch. The system evolves naturally to a self-regulated stationary state, whose properties are determined by α/αc(β), where αc(β) ≈ β-1/2 is a transition value. In the regime α ≳ αc(β), one recovers qualitatively the power-law size and exponential waiting-time distributions observed in many radio pulsars and Gross-Pitaevskii simulations. For α ≪ αc(β), the size and waiting-time distributions are both power-law-like, and a correlation emerges between size and waiting time until the next glitch, contrary to what is observed in most pulsars. Comparisons with astrophysical data are restricted by the small sample sizes available at present, with ≤35 events observed per pulsar.
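
    A toy Monte Carlo of a state-dependent Poisson glitch process in the spirit of the abstract; the specific rate function (diverging as the lag approaches the critical lag) and the glitch-size law (uniform between βX and X) are assumptions for illustration, not the exact model of the paper:

    ```python
    import numpy as np

    # Toy Monte Carlo of a state-dependent Poisson glitch process.  The rate function
    # and the glitch-size law are assumptions loosely following the abstract.
    rng = np.random.default_rng(3)
    alpha, beta = 5.0, 0.2          # dimensionless control parameters (illustrative values)
    X, dt, T = 0.5, 1e-3, 200.0     # lag (units of critical lag), time step, total time

    sizes, waits, t_last, t = [], [], 0.0, 0.0
    while t < T:
        X = min(X + dt, 0.999)                 # spin-down drives the lag toward the critical value 1
        rate = alpha / (1.0 - X)               # assumed state-dependent Poisson rate
        if rng.random() < rate * dt:           # a glitch occurs in this step
            dX = rng.uniform(beta * X, X)      # glitch restores between beta*X and all of the lag
            X -= dX
            sizes.append(dX)
            waits.append(t - t_last)
            t_last = t
        t += dt

    print(f"{len(sizes)} glitches; mean size {np.mean(sizes):.3f}, "
          f"mean waiting time {np.mean(waits):.3f}")
    ```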

  12. Spinning Disc Technology – Residence Time Distribution and Efficiency in Textile Wastewater Treatment Application

    NASA Astrophysics Data System (ADS)

    Iacob Tudose, E. T.; Zaharia, C.

    2018-06-01

The spinning disc (SD) technology has received increased attention in recent years due to its enhanced fluid flow features resulting in improved transfer properties. The present study focuses on characterization of the flow within a spinning disc system based on experimental data used to establish the residence time distribution (RTD) and its dependence on the feed liquid flowrate and the disc rotational speed. To obtain these data, an inert tracer (sodium chloride) was injected as a pulse input in the liquid stream entering the disc and the salt concentration of the liquid leaving the disc was continuously recorded. The obtained data indicate that an increase in the liquid flowrate from 10 L/h to 30 L/h determines a narrower RTD function. Also, at a rotational speed of 200 rpm, the residence time distribution is broader than that for 500 rpm and 800 rpm. The RTD data suggest that, depending on the needed flow characteristics, one can choose a certain flowrate and rotational speed domain for the application. Also, the SD technology was used to process textile wastewater treated with bentonite (as both coagulation and discoloration agent) in order to investigate whether quality indicators such as the total suspended solid content, turbidity and discoloration can be improved. The experimental results are promising since the discoloration and the removal of suspended solids attained values of over 40% and 50%, respectively, depending on the effluent flowrate (10 L/h and 30 L/h) and the disc rotational speed (200 rpm, 550 rpm and 850 rpm), without any other addition of chemicals or initiation of other simultaneous treatment processes (e.g., advanced oxidative, reductive, or biochemical processes). This recommends spinning disc technology as a suitable and promising tool to improve different wastewater characteristics.
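
    A minimal sketch of how a residence time distribution is extracted from a pulse-tracer experiment of this kind; the concentration trace below is synthetic and stands in for the recorded salt concentration at the disc outlet:

    ```python
    import numpy as np

    # Residence time distribution E(t) from a pulse-tracer experiment.
    t = np.linspace(0.0, 10.0, 501)                 # time since tracer injection (s)
    C = t * np.exp(-t / 1.5)                        # synthetic outlet concentration (arb. units)

    E = C / np.trapz(C, t)                          # normalized RTD: integral of E over t is 1
    t_mean = np.trapz(t * E, t)                     # mean residence time
    var = np.trapz((t - t_mean) ** 2 * E, t)        # variance (spread of the RTD)

    print(f"mean residence time = {t_mean:.2f} s, variance = {var:.2f} s^2")
    ```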

  13. Time-dependent mobility and recombination of the photoinduced charge carriers in conjugated polymer/fullerene bulk heterojunction solar cells

    NASA Astrophysics Data System (ADS)

    Mozer, A. J.; Dennler, G.; Sariciftci, N. S.; Westerling, M.; Pivrikas, A.; Österbacka, R.; Juška, G.

    2005-07-01

Time-dependent mobility and recombination in the blend of poly[2-methoxy-5-(3,7-dimethyloctyloxy)-phenylene vinylene] (MDMO-PPV) and 1-(3-methoxycarbonyl)propyl-1-phenyl-(6,6)-C61 (PCBM) is studied simultaneously using the photoinduced charge carrier extraction by linearly increasing voltage technique. The charge carriers are photogenerated by a strongly absorbed, 3 ns laser flash, and extracted by the application of a reverse bias voltage pulse after an adjustable delay time (tdel). It is found that the mobility of the extracted charge carriers decreases with increasing delay time, especially shortly after photoexcitation. The time-dependent mobility μ(t) is attributed to the energy relaxation of the charge carriers towards the tail states of the density of states distribution. A model based on a dispersive bimolecular recombination is formulated, which properly describes the concentration decay of the extracted charge carriers at all measured temperatures and concentrations. The calculated bimolecular recombination coefficient β(t) is also found to be time-dependent, exhibiting a power-law dependence β(t) = β0·t^(-(1-γ)) with increasing slope (1-γ) with decreasing temperature. The temperature dependence study reveals that both the mobility and recombination of the photogenerated charge carriers are thermally activated processes with activation energy in the range of 0.1 eV. Finally, the direct comparison of μ(t) and β(t) shows that the recombination of the long-lived charge carriers is controlled by diffusion.

  14. Recent Simulation Results on Ring Current Dynamics Using the Comprehensive Ring Current Model

    NASA Technical Reports Server (NTRS)

    Zheng, Yihua; Zaharia, Sorin G.; Lui, Anthony T. Y.; Fok, Mei-Ching

    2010-01-01

Plasma sheet conditions and electromagnetic field configurations are both crucial in determining ring current evolution and connection to the ionosphere. In this presentation, we investigate how different conditions of the plasma sheet distribution affect ring current properties. Results include comparative studies in 1) varying the radial distance of the plasma sheet boundary; 2) varying the local time distribution of the source population; 3) varying the source spectra. Our results show that a source located farther away leads to a stronger ring current than a source that is closer to the Earth. Local time distribution of the source plays an important role in determining both the radial and azimuthal (local time) location of the ring current peak pressure. We found that post-midnight source locations generally lead to a stronger ring current. This finding is in agreement with Lavraud et al. [2008]. However, our results do not exhibit any simple dependence of the local time distribution of the peak ring current (within the lower energy range) on the local time distribution of the source, as suggested by Lavraud et al. [2008]. In addition, we will show how different specifications of the magnetic field in the simulation domain affect ring current dynamics in reference to the 20 November 2007 storm, which include initial results on coupling the CRCM with a three-dimensional (3-D) plasma force balance code to achieve self-consistency in the magnetic field.

  15. The stochastic spectator

    NASA Astrophysics Data System (ADS)

    Hardwick, Robert J.; Vennin, Vincent; Byrnes, Christian T.; Torrado, Jesús; Wands, David

    2017-10-01

    We study the stochastic distribution of spectator fields predicted in different slow-roll inflation backgrounds. Spectator fields have a negligible energy density during inflation but may play an important dynamical role later, even giving rise to primordial density perturbations within our observational horizon today. During de-Sitter expansion there is an equilibrium solution for the spectator field which is often used to estimate the stochastic distribution during slow-roll inflation. However slow roll only requires that the Hubble rate varies slowly compared to the Hubble time, while the time taken for the stochastic distribution to evolve to the de-Sitter equilibrium solution can be much longer than a Hubble time. We study both chaotic (monomial) and plateau inflaton potentials, with quadratic, quartic and axionic spectator fields. We give an adiabaticity condition for the spectator field distribution to relax to the de-Sitter equilibrium, and find that the de-Sitter approximation is never a reliable estimate for the typical distribution at the end of inflation for a quadratic spectator during monomial inflation. The existence of an adiabatic regime at early times can erase the dependence on initial conditions of the final distribution of field values. In these cases, spectator fields acquire sub-Planckian expectation values. Otherwise spectator fields may acquire much larger field displacements than suggested by the de-Sitter equilibrium solution. We quantify the information about initial conditions that can be obtained from the final field distribution. Our results may have important consequences for the viability of spectator models for the origin of structure, such as the simplest curvaton models.

  16. An Army-Centric System of Systems Analysis (SoSA) Definition

    DTIC Science & Technology

    2011-02-01

Excerpted fragments from the report: a reference to Suzuki, K.; Ikegami, T., "Homeodynamics in the Game of Life," in Artificial Life XI: Proceedings of the Eleventh International Conference; a figure caption describing (a) insect displacement as a function of sampling time and (b) the same dataset displaying displacement at time t versus displacement at time t + Δt; and a passage noting that the probability distribution of x(t_i), x(t_(i+1)), x(t_(i+2)), …, x(t_(i+m-1)) is dependent upon the value of t_i, and similarly for a discrete time series.

  17. Broadband spectral analysis of non-Debye dielectric relaxation in percolating heterostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuncer, Enis; Bellatar, J; Achour, M E

    2011-01-01

In this study, the main features of dielectric relaxation in carbon black epoxy composites are discussed using several types of complementary modelling (i.e., the Cole-Cole phenomenological equation, Jonscher's universal dielectric response, and an approach that relies on a continuous distribution of relaxation times). These characterizations of the relaxation were conducted below Tg. Through the numerical model we can obtain the characteristic effective relaxation time and exponents straightforwardly. However, the true relaxation spectrum can be obtained from the distribution of relaxation times calculated from the complex dielectric permittivity. Over the compositional range explored, relaxation occurs by a Vogel-Tammann-Fulcher-like temperature dependence within the limits of experimental accuracy.
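
    For reference, a short sketch of the Cole-Cole phenomenological equation mentioned above; parameter values are illustrative only:

    ```python
    import numpy as np

    # Cole-Cole equation for broadened (non-Debye) dielectric relaxation.
    def cole_cole(omega, eps_inf, delta_eps, tau, alpha):
        """Complex permittivity eps*(w) = eps_inf + delta_eps / (1 + (i w tau)^(1-alpha))."""
        return eps_inf + delta_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

    omega = np.logspace(1, 8, 8) * 2 * np.pi        # angular frequencies (rad/s)
    eps = cole_cole(omega, eps_inf=3.0, delta_eps=12.0, tau=1e-5, alpha=0.3)
    for w, e in zip(omega, eps):
        # with the eps* = eps' - i*eps'' convention, the loss is -Im(eps*)
        print(f"f = {w/(2*np.pi):9.2e} Hz   eps' = {e.real:6.2f}   eps'' = {-e.imag:6.2f}")
    ```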

  18. Online compensation for target motion with scanned particle beams: simulation environment.

    PubMed

    Li, Qiang; Groezinger, Sven Oliver; Haberer, Thomas; Rietzel, Eike; Kraft, Gerhard

    2004-07-21

Target motion is one of the major limitations of any high-precision radiation therapy. Using advanced active beam delivery techniques, such as the magnetic raster scanning system for particle irradiation, the interplay between time-dependent beam and target position heavily distorts the applied dose distribution. This paper presents a simulation environment in which the time-dependent effect of target motion on heavy-ion irradiation can be calculated with dynamically scanned ion beams. In an extension of the existing treatment planning software for ion irradiation of static targets (TRiP) at GSI, the expected dose distribution is calculated as the sum of several sub-distributions for single target motion states. To investigate active compensation for target motion by adapting the position of the therapeutic beam during irradiation, the planned beam positions can be altered during the calculation. Applying realistic parameters to the planned motion-compensation methods at GSI, the effect of target motion on the expected dose uniformity can be simulated for different target configurations and motion conditions. For the dynamic dose calculation, experimentally measured profiles of the beam extraction in time were used. Initial simulations show the feasibility and consistency of an active motion compensation with the magnetic scanning system and reveal some strategies to improve the dose homogeneity inside the moving target. The simulation environment presented here provides an effective means for evaluating the dose distribution for a moving target volume with and without motion compensation. It contributes a substantial basis for the experimental research on the irradiation of moving target volumes with scanned ion beams at GSI which will be presented in upcoming papers.

  19. Competitive or weak cooperative stochastic Lotka-Volterra systems conditioned on non-extinction.

    PubMed

    Cattiaux, Patrick; Méléard, Sylvie

    2010-06-01

We are interested in the long time behavior of a two-type density-dependent biological population conditioned on non-extinction, in both the cases of competition and weak cooperation between the two species. This population is described by a stochastic Lotka-Volterra system, obtained as the limit of renormalized interacting birth and death processes. The weak cooperation assumption allows the system not to blow up. We study the existence and uniqueness of a quasi-stationary distribution, that is, convergence to equilibrium conditioned on non-extinction. To this aim we generalize in two dimensions spectral tools developed for one-dimensional generalized Feller diffusion processes. The existence proof of a quasi-stationary distribution is reduced to the one for a d-dimensional Kolmogorov diffusion process under a symmetry assumption. The symmetry we need is satisfied under a local balance condition relying on the ecological rates. A novelty is the outlined relation between the uniqueness of the quasi-stationary distribution and the ultracontractivity of the killed semi-group. By a comparison between the killing rates for the populations of each type and that of the global population, we show that the quasi-stationary distribution can be either supported by individuals of one (the strongest) type or supported by individuals of the two types. We thus highlight two different long time behaviors depending on the parameters of the model: either the model exhibits an intermediary time scale for which only one type (the dominant trait) is surviving, or there is a positive probability of coexistence of the two species.
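
    A rough Euler-Maruyama sketch of a two-type stochastic Lotka-Volterra system with demographic noise; the drift and square-root noise form are generic choices assumed for illustration, not necessarily the exact renormalized limit studied in the paper:

    ```python
    import numpy as np

    # Euler-Maruyama simulation of a two-type stochastic Lotka-Volterra system with
    # demographic (sqrt(N)) noise.  All parameter values are illustrative.
    rng = np.random.default_rng(4)
    r = np.array([1.0, 0.8])                      # intrinsic growth rates
    C = np.array([[0.02, 0.01],                   # interaction matrix c_ij
                  [0.01, 0.03]])                  # (all positive: competitive case)
    sigma, dt, n_steps = 0.3, 1e-3, 100_000
    N = np.array([30.0, 20.0])                    # initial population densities

    for _ in range(n_steps):
        drift = N * (r - C @ N)
        noise = sigma * np.sqrt(np.maximum(N, 0.0)) * rng.standard_normal(2)
        N = np.maximum(N + drift * dt + noise * np.sqrt(dt), 0.0)
        if N.sum() == 0.0:                        # extinction: the event one conditions on
            break

    print("final densities:", np.round(N, 2))
    ```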

  20. A near-infrared, optical, and ultraviolet polarimetric and timing investigation of complex equatorial dusty structures

    NASA Astrophysics Data System (ADS)

    Marin, F.; Rojas Lobos, P. A.; Hameury, J. M.; Goosmann, R. W.

    2018-05-01

Context. From stars to active galactic nuclei, many astrophysical systems are surrounded by an equatorial distribution of dusty material that is, in a number of cases, spatially unresolved even with cutting-edge facilities. Aims: In this paper, we investigate if and how one can determine the unresolved and heterogeneous morphology of the dust distribution around a central bright source using time-resolved polarimetric observations. Methods: We used polarized radiative transfer simulations to study a sample of circumnuclear dusty morphologies. We explored a grid of geometrically variable models that are uniform, fragmented, and density stratified in the near-infrared, optical, and ultraviolet bands, and we present their distinctive time-dependent polarimetric signatures. Results: As expected, varying the structure of the obscuring equatorial disk has a deep impact on the inclination-dependent flux, polarization degree and angle, and time lags we observe. We find that stratified media are distinguishable by time-resolved polarimetric observations, and that the expected polarization is much higher in the infrared band than in the ultraviolet. However, because of the physical scales imposed by dust sublimation, the average time lags of months to years between the total and polarized fluxes are important; these time lags lengthen the observational campaigns necessary to break more sophisticated, and therefore also more degenerate, models. In the ultraviolet band, time lags are slightly shorter than in the infrared or optical bands, and, coupled to lower diluting starlight fluxes, time-resolved polarimetry in the UV appears more promising for future campaigns. Conclusions: Equatorial dusty disks differ in terms of inclination-dependent photometric, polarimetric, and timing observables, but only the coupling of these different markers can lead to inclination-independent constraints on the unresolved structures. Even though it is complex and time consuming, polarized reverberation mapping in the ultraviolet-blue band is probably the best technique to rely on in this field.

  1. The solar ionisation rate deduced from Ulysses measurements and its implications to interplanetary Lyman alpha-intensity

    NASA Technical Reports Server (NTRS)

    Summanen, T.; Kyroelae, E.

    1995-01-01

We have developed a computer code which can be used to study 3-dimensional and time-dependent effects of the solar cycle on the interplanetary (IP) hydrogen distribution. The code is based on the inverted Monte Carlo simulation. In this work we have modelled the temporal behaviour of the solar ionisation rate. We have assumed that during most of the solar cycle there is an anisotropic latitudinal structure, but right at the solar maximum the anisotropy disappears. The effects of this behaviour will be discussed both in regard to the IP hydrogen distribution and the IP Lyman alpha intensity.

  2. Skin dosimetry of patients during interventional cardiology procedures in the Czech Republic

    NASA Astrophysics Data System (ADS)

    Sukupova, Lucie; Novak, Leos

    2008-01-01

The aim of the study is to determine the distributions of air kerma-area product, fluoro time and number of frames values for the two most frequent procedures in interventional cardiology, and to reconstruct skin dose distributions for some patients undergoing coronarography and percutaneous transluminal coronary angioplasty procedures. Patient dose data were obtained from the X-ray unit dose monitoring software report from one hospital and the reconstructions were performed in MATLAB. The dependence of maximum skin dose on air kerma-area product, fluoro time and number of frames was determined to assess trigger levels of these quantities, which can indicate possible exceeding of the 2 Gy skin dose threshold.

  3. Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications

    NASA Astrophysics Data System (ADS)

    Wang, K.; Lettenmaier, D. P.

    2017-12-01

Intensity-duration-frequency (IDF) curves are widely used for engineering design of storm-affected structures. Current practice is that IDF curves are based on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of the stations tested have significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hours). We fit the stations to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves based on the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationary assumption compare to those that incorporate nonstationarity for both short and long term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
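
    A plain maximum-likelihood sketch of fitting a GEV with a linearly time-varying location parameter, mu(t) = mu0 + mu1*t, to synthetic annual maxima; the station data and the Bayesian machinery of the study are not reproduced, and note that scipy's shape parameter c is the negative of the usual GEV shape xi:

    ```python
    import numpy as np
    from scipy import stats, optimize

    # MLE fit of a nonstationary GEV (linear trend in the location parameter).
    rng = np.random.default_rng(5)
    years = np.arange(60)
    true_mu = 30.0 + 0.15 * years                        # synthetic upward trend
    annual_max = stats.genextreme.rvs(c=-0.1, loc=true_mu, scale=5.0,
                                      size=len(years), random_state=rng)

    def neg_log_lik(params, x, t):
        mu0, mu1, log_sigma, c = params                  # scipy's c = -xi
        mu = mu0 + mu1 * t
        return -np.sum(stats.genextreme.logpdf(x, c, loc=mu, scale=np.exp(log_sigma)))

    x0 = [np.mean(annual_max), 0.0, np.log(np.std(annual_max)), -0.1]
    res = optimize.minimize(neg_log_lik, x0, args=(annual_max, years), method="Nelder-Mead")
    mu0, mu1, log_sigma, c = res.x
    print(f"fitted trend in location: {mu1:.3f} per year (true 0.15), shape c = {c:.3f}")
    ```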

  4. Evidence of common and separate eye and hand accumulators underlying flexible eye-hand coordination

    PubMed Central

    Jana, Sumitash; Gopal, Atul

    2016-01-01

    Eye and hand movements are initiated by anatomically separate regions in the brain, and yet these movements can be flexibly coupled and decoupled, depending on the need. The computational architecture that enables this flexible coupling of independent effectors is not understood. Here, we studied the computational architecture that enables flexible eye-hand coordination using a drift diffusion framework, which predicts that the variability of the reaction time (RT) distribution scales with its mean. We show that a common stochastic accumulator to threshold, followed by a noisy effector-dependent delay, explains eye-hand RT distributions and their correlation in a visual search task that required decision-making, while an interactive eye and hand accumulator model did not. In contrast, in an eye-hand dual task, an interactive model better predicted the observed correlations and RT distributions than a common accumulator model. Notably, these two models could only be distinguished on the basis of the variability and not the means of the predicted RT distributions. Additionally, signatures of separate initiation signals were also observed in a small fraction of trials in the visual search task, implying that these distinct computational architectures were not a manifestation of the task design per se. Taken together, our results suggest two unique computational architectures for eye-hand coordination, with task context biasing the brain toward instantiating one of the two architectures. NEW & NOTEWORTHY Previous studies on eye-hand coordination have considered mainly the means of eye and hand reaction time (RT) distributions. Here, we leverage the approximately linear relationship between the mean and standard deviation of RT distributions, as predicted by the drift-diffusion model, to propose the existence of two distinct computational architectures underlying coordinated eye-hand movements. These architectures, for the first time, provide a computational basis for the flexible coupling between eye and hand movements. PMID:27784809
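
    A toy simulation of the common-accumulator architecture described above: a single noisy accumulator crosses threshold and then each effector adds an independent noisy delay, which induces correlated eye and hand RTs; all parameter values are arbitrary illustrations, not fits to the reported data:

    ```python
    import numpy as np

    # Toy "common accumulator" architecture: one drift-diffusion process to threshold,
    # followed by independent effector-specific noisy delays.
    rng = np.random.default_rng(6)
    n_trials, dt = 5000, 0.001
    drift, noise_sd, threshold = 8.0, 1.0, 1.0

    t = np.zeros(n_trials)
    x = np.zeros(n_trials)
    active = np.ones(n_trials, dtype=bool)
    while active.any():                                   # Euler simulation of first passage
        x[active] += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal(active.sum())
        t[active] += dt
        active &= x < threshold

    rt_eye = t + rng.normal(0.05, 0.01, n_trials)         # effector-specific delays (s)
    rt_hand = t + rng.normal(0.12, 0.02, n_trials)
    for name, rt in (("eye", rt_eye), ("hand", rt_hand)):
        print(f"{name:4s}: mean RT = {rt.mean():.3f} s, SD = {rt.std():.3f} s")
    print("eye-hand RT correlation:", round(float(np.corrcoef(rt_eye, rt_hand)[0, 1]), 3))
    ```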

  5. United States Air Force Research Initiation Program for 1987. Volume 1

    DTIC Science & Technology

    1989-04-01

Excerpted fragments from the report: the complexity of analyzing such models depends upon the repair or replacement time distributions, the repair policy for damaged components, and a number of other factors; problems of interest for such models include the determination of (a) ... Thus, some additional assumption is needed as to the order in which repairs are to be made when more than one component is damaged. We will adopt a policy ...

  6. Late time behaviors of an inhomogeneous rolling tachyon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, O-Kab; Lee, Chong Oh; Basic Science Research Institute, Chonbuk National University, Chonju 561-756

    2006-06-15

We study an inhomogeneous decay of an unstable D-brane in the context of a Dirac-Born-Infeld (DBI)-type effective action. We consider tachyon and electromagnetic fields with dependence on time and one spatial coordinate, and an exact solution is found under an exponentially decreasing tachyon potential, e^(-|T|/√2), which is valid for the description of the late time behavior of an unstable D-brane. Though the obtained solution contains both time and spatial dependence, the corresponding momentum density vanishes over the entire spacetime region. The solution is governed by two parameters. One adjusts the distribution of energy density in the inhomogeneous direction, and the other interpolates between the homogeneous rolling tachyon and the static configuration. As time evolves, the energy of the unstable D-brane is converted into the electric flux and tachyon matter.

  7. The role of local heterogeneity in transport through steep hillslopes.

    NASA Astrophysics Data System (ADS)

    Fiori, A.; Russo, D.

    2009-04-01

A stochastic model is developed for the analysis of the travel time distribution in a hillslope. The latter is represented as a system made up of a highly permeable soil underlain by a less permeable subsoil or bedrock. The heterogeneous hydraulic conductivity K is described as a stationary random space function. The travel time distribution is obtained through a stochastic Lagrangian model of transport, after adopting a first-order approximation in the logconductivity variance. The results show that the travel time pdf pertaining to the soil is a power law, with an exponent varying between -1 and -0.5; the behavior is mainly determined by unsaturated transport. The subsoil is mainly responsible for the tail of the travel time distribution. Analysis of the first and second moments of travel time shows that the spreading of solute is controlled by the variations in the flow paths (geomorphological dispersion), which depend on the hillslope geometry. Conversely, the contribution of the K heterogeneity to spreading appears less relevant. The model is tested against a detailed three-dimensional numerical simulation with reasonably good agreement.

  8. A quality-of-life-oriented endpoint for comparing therapies.

    PubMed

    Gelber, R D; Gelman, R S; Goldhirsch, A

    1989-09-01

    An endpoint, time without symptoms of disease and toxicity of treatment (TWiST), is defined to provide a single measure of length and quality of survival. Time with subjective side effects of treatment and time with unpleasant symptoms of disease are subtracted from overall survival time to calculate TWiST for each patient. The purpose of this paper is to describe the construction of this endpoint, and to elaborate on its interpretation for patient care decision-making. Estimating the distribution of TWiST using actuarial methods is shown by simulation studies to be biased as a result of induced dependency between TWiST and its censoring distribution. Considering the distribution of TWiST accumulated within a specified time from start of therapy, L, allows one to reduce this bias by substituting estimated TWiST for censored values and provides a method to evaluate the "payback" period for early toxic effects. Quantile distance plots provide graphical representations for treatment comparisons. The analysis of Ludwig Trial III evaluating toxic adjuvant therapies versus a no-treatment control group for postmenopausal women with node-positive breast cancer illustrates the methodology.

  9. Acid Hydrolysis and Molecular Density of Phytoglycogen and Liver Glycogen Helps Understand the Bonding in Glycogen α (Composite) Particles

    PubMed Central

    Powell, Prudence O.; Sullivan, Mitchell A.; Sheehy, Joshua J.; Schulz, Benjamin L.; Warren, Frederick J.; Gilbert, Robert G.

    2015-01-01

Phytoglycogen (from certain mutant plants) and animal glycogen are highly branched glucose polymers with similarities in structural features and molecular size range. Both appear to form composite α particles from smaller β particles. The molecular size distribution of liver glycogen is bimodal, with distinct α and β components, while that of phytoglycogen is monomodal. This study aims to enhance our understanding of the nature of the link between liver-glycogen β particles resulting in the formation of large α particles. It examines the time evolution of the size distribution of these molecules during acid hydrolysis, and the size dependence of the molecular density of both glucans. The monomodal distribution of phytoglycogen decreases uniformly in time with hydrolysis, while with glycogen, the large particles degrade significantly more quickly. The size dependence of the molecular density shows qualitatively different shapes for these two types of molecules. The data, combined with a quantitative model for the evolution of the distribution during degradation, suggest that the bonding of β into α particles is different between phytoglycogen and liver glycogen, with the most likely explanation being a glycosidic linkage for phytoglycogen and a covalent or strong non-covalent linkage, most probably involving a protein, for glycogen. This finding is of importance for diabetes, where α-particle structure is impaired. PMID:25799321

  10. Random walk in nonhomogeneous environments: A possible approach to human and animal mobility

    NASA Astrophysics Data System (ADS)

    Srokowski, Tomasz

    2017-03-01

The random walk process in a nonhomogeneous medium, characterized by a Lévy stable distribution of jump length, is discussed. The width depends on position: either the position before the jump or the position after it. In the latter case, the density slope is affected by the variable width and the variance may be finite; then all kinds of anomalous diffusion are predicted. In the former case, only the time characteristics are sensitive to the variable width. The corresponding Langevin equation with different interpretations of the multiplicative noise is discussed. The dependence of the distribution width on the position after the jump is interpreted in terms of cognitive abilities and related to such problems as migration in a human population and foraging habits of animals.

  11. Timelike naked singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goswami, Rituparno; Joshi, Pankaj S.; Vaz, Cenalo

We construct a class of spherically symmetric collapse models in which a naked singularity may develop as the end state of collapse. The matter distribution considered has negative radial and tangential pressures, but the weak energy condition is obeyed throughout. The singularity forms at the center of the collapsing cloud and continues to be visible for a finite time. The duration of visibility depends on the nature of energy distribution. Hence the causal structure of the resulting singularity depends on the nature of the mass function chosen for the cloud. We present a general model in which the naked singularity formed is timelike, neither pointlike nor null. Our work represents a step toward clarifying the necessary conditions for the validity of the Cosmic Censorship Conjecture.

  12. Efficiency and cross-correlation in equity market during global financial crisis: Evidence from China

    NASA Astrophysics Data System (ADS)

    Ma, Pengcheng; Li, Daye; Li, Shuo

    2016-02-01

Using one minute high-frequency data of the Shanghai Composite Index (SHCI) and the Shenzhen Composite Index (SZCI) (2007-2008), we employ the detrended fluctuation analysis (DFA) and the detrended cross-correlation analysis (DCCA) with a rolling window approach to observe the evolution of market efficiency and cross-correlation in the pre-crisis and crisis periods. Considering the fat-tail distribution of the return time series, a statistical test based on a shuffling method is conducted to verify the null hypothesis of no long-term dependence. Our empirical research displays three main findings. First, Shanghai equity market efficiency deteriorated while Shenzhen equity market efficiency improved with the advent of the financial crisis. Second, the highly positive dependence between SHCI and SZCI varies with time scale. Third, the financial crisis saw a significant increase of dependence between SHCI and SZCI at shorter time scales but a lack of significant change at longer time scales, providing evidence of contagion and absence of interdependence during the crisis.
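
    A minimal DFA sketch of the kind used in the study (first-order detrending); the synthetic uncorrelated input should yield a scaling exponent near 0.5, and the window sizes are illustrative choices:

    ```python
    import numpy as np

    # Minimal DFA(1): estimate the scaling exponent of a return series.
    def dfa(x, scales):
        y = np.cumsum(x - np.mean(x))                       # integrated profile
        F = []
        for s in scales:
            n_seg = len(y) // s
            segs = y[: n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            rms = []
            for seg in segs:
                coef = np.polyfit(t, seg, 1)                # local linear detrending
                rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F.append(np.sqrt(np.mean(rms)))
        return np.array(F)

    rng = np.random.default_rng(7)
    returns = rng.standard_normal(20000)                    # uncorrelated noise -> exponent ~ 0.5
    scales = np.unique(np.logspace(1.2, 3.0, 15).astype(int))
    F = dfa(returns, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]     # slope of log F vs log s
    print(f"DFA scaling exponent: {alpha:.2f} (0.5 = no long-range dependence)")
    ```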

  13. Smooth time-dependent receiver operating characteristic curve estimators.

    PubMed

    Martínez-Camblor, Pablo; Pardo-Fernández, Juan Carlos

    2018-03-01

The receiver operating characteristic curve is a popular graphical method often used to study the diagnostic capacity of continuous (bio)markers. When the considered outcome is a time-dependent variable, two main extensions have been proposed: the cumulative/dynamic receiver operating characteristic curve and the incident/dynamic receiver operating characteristic curve. In both cases, the main problem for developing appropriate estimators is the estimation of the joint distribution of the variables time-to-event and marker. As usual, different approximations lead to different estimators. In this article, the authors explore the use of a bivariate kernel density estimator which accounts for censored observations in the sample and produces smooth estimators of the time-dependent receiver operating characteristic curves. The performance of the resulting cumulative/dynamic and incident/dynamic receiver operating characteristic curves is studied by means of Monte Carlo simulations. Additionally, the influence of the choice of the required smoothing parameters is explored. Finally, two real applications are considered. An R package is also provided as a complement to this article.

  14. Effects of Initial Particle Distribution on an Energetic Dispersal of Particles

    NASA Astrophysics Data System (ADS)

    Rollin, Bertrand; Ouellet, Frederick; Koneru, Rahul; Garno, Joshua; Durant, Bradford

    2017-11-01

Accurate prediction of the late time solid particle cloud distribution resulting from an explosive dispersal of particles is an extremely challenging problem for compressible multiphase flow simulations. The source of this difficulty is twofold: (i) The complex sequence of events taking place. Indeed, as the blast wave crosses the surrounding layer of particles, compaction occurs shortly before particles disperse radially at high speed. Then, during the dispersion phase, complex multiphase interactions occur between particles and detonation products. (ii) Precise characterization of the explosive and particle distribution is virtually impossible. In this numerical experiment, we focus on the sensitivity of late time particle cloud distributions to carefully designed initial distributions, assuming the explosive is well described. Using point particle simulations, we study the case of a bed of glass particles surrounding an explosive. Constraining our simulations to relatively low initial volume fractions to prevent reaching the close-packing limit, we seek to describe qualitatively and quantitatively the late time dependency of a solid particle cloud on its distribution before the energy release of an explosive. This work was supported by the U.S. DoE, NNSA, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.

  15. Squeezing in a 2-D generalized oscillator

    NASA Technical Reports Server (NTRS)

    Castanos, Octavio; Lopez-Pena, Ramon; Manko, Vladimir I.

    1994-01-01

A two-dimensional generalized oscillator with time-dependent parameters is considered to study the two-mode squeezing phenomena. Specific choices of the parameters are used to determine the dispersion matrix and analytic expressions, in terms of standard Hermite polynomials, of the wavefunctions and photon distributions.

  16. State updating of a distributed hydrological model with Ensemble Kalman Filtering: Effects of updating frequency and observation network density on forecast accuracy

    NASA Astrophysics Data System (ADS)

    Rakovec, O.; Weerts, A.; Hazenberg, P.; Torfs, P.; Uijlenhoet, R.

    2012-12-01

    This paper presents a study on the optimal setup for discharge assimilation within a spatially distributed hydrological model (Rakovec et al., 2012a). The Ensemble Kalman filter (EnKF) is employed to update the grid-based distributed states of such an hourly spatially distributed version of the HBV-96 model. By using a physically based model for the routing, the time delay and attenuation are modelled more realistically. The discharge and states at a given time step are assumed to be dependent on the previous time step only (Markov property). Synthetic and real world experiments are carried out for the Upper Ourthe (1600 km2), a relatively quickly responding catchment in the Belgian Ardennes. The uncertain precipitation model forcings were obtained using a time-dependent multivariate spatial conditional simulation method (Rakovec et al., 2012b), which is further made conditional on preceding simulations. We assess the impact on the forecasted discharge of (1) various sets of the spatially distributed discharge gauges and (2) the filtering frequency. The results show that the hydrological forecast at the catchment outlet is improved by assimilating interior gauges. This augmentation of the observation vector improves the forecast more than increasing the updating frequency. In terms of the model states, the EnKF procedure is found to mainly change the pdfs of the two routing model storages, even when the uncertainty in the discharge simulations is smaller than the defined observation uncertainty. Rakovec, O., Weerts, A. H., Hazenberg, P., Torfs, P. J. J. F., and Uijlenhoet, R.: State updating of a distributed hydrological model with Ensemble Kalman Filtering: effects of updating frequency and observation network density on forecast accuracy, Hydrol. Earth Syst. Sci. Discuss., 9, 3961-3999, doi:10.5194/hessd-9-3961-2012, 2012a. Rakovec, O., Hazenberg, P., Torfs, P. J. J. F., Weerts, A. H., and Uijlenhoet, R.: Generating spatial precipitation ensembles: impact of temporal correlation structure, Hydrol. Earth Syst. Sci. Discuss., 9, 3087-3127, doi:10.5194/hessd-9-3087-2012, 2012b.
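
    A bare-bones sketch of the stochastic EnKF analysis step that underlies this kind of discharge assimilation; the state vector, observation operator H, error levels, and observed values are all hypothetical, and the HBV-96 model itself is not included:

    ```python
    import numpy as np

    # Stochastic EnKF analysis step with perturbed observations.
    rng = np.random.default_rng(8)
    n_state, n_ens, n_obs = 10, 50, 2

    X = rng.normal(5.0, 1.0, size=(n_state, n_ens))          # forecast ensemble (e.g. storages)
    H = np.zeros((n_obs, n_state)); H[0, 3] = H[1, 7] = 1.0   # observe two "gauged" states
    R = 0.2**2 * np.eye(n_obs)                                # observation error covariance
    y = np.array([6.0, 4.5])                                  # observed discharges (hypothetical)

    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                                # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                                 # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)              # Kalman gain

    Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T  # perturbed obs
    X_a = X + K @ (Y_pert - H @ X)                            # analysis ensemble
    print("forecast mean of observed states:", np.round((H @ Xm).ravel(), 2))
    print("analysis mean of observed states:", np.round((H @ X_a).mean(axis=1), 2))
    ```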

  17. Supplement to LA-UR-17-21218: Application to SSVD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tregillis, Ian Lee

We apply the formalism derived in LA-UR-17-21218 to the prescription for an RMI-based self-similar velocity distribution (SSVD) derived by Hammerberg et al. We compute analytically the true [mt(t)] and inferred [mi(t)] ejecta mass arriving at the piezoelectric sensor for several shots described in the literature and compare the results to the data. We find that while the "RMI + SSVD" prescription gives rise to decent estimates for the final accumulated mass at the pin, the time-dependent accumulation rises too sharply and linearly to agree with data. We also compute the time-dependent pressure and voltage at the sensor, and compare the latter to data. The pressure does not rise smoothly from zero, instead exhibiting a strong surge as the leading edge of the ejecta cloud arrives, which produces an initial sharp spike in the voltage trace, which is not observed. These inconsistencies result from a discontinuity in the prescribed self-similar velocity distribution at maximum relative velocity.

  18. Investigating Long-Range Dependence in American Treasury Bills Variations and Volatilities during Stable and Unstable Periods

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-05-01

Detrended fluctuation analysis (DFA) is used to examine long-range dependence in variations and volatilities of American treasury bills (TB) during periods of low and high movements in TB rates. Volatility series are estimated by the generalized autoregressive conditional heteroskedasticity (GARCH) model under Gaussian, Student, and generalized error distribution (GED) assumptions. The DFA-based Hurst exponents from 3-month, 6-month, and 1-year TB data indicate that in general the dynamics of the TB variations process is characterized by persistence during the stable time period (before the 2008 international financial crisis) and anti-persistence during the unstable time period (after the 2008 international financial crisis). For the volatility series, it is found that, for the stable period, the 3-month volatility process is more likely random, the 6-month volatility process is anti-persistent, and the 1-year volatility process is persistent. For the unstable period, estimation results show that the generating process is persistent for all maturities and for all distributional assumptions.
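
    A compact sketch of fitting a GARCH(1,1) volatility model by maximizing the Gaussian likelihood (the Student and GED innovations considered in the study are omitted); the input series is synthetic and all parameter values are illustrative:

    ```python
    import numpy as np
    from scipy import optimize

    # Simulate a GARCH(1,1) series and re-fit it by Gaussian maximum likelihood.
    rng = np.random.default_rng(9)
    omega_true, alpha_true, beta_true, n = 2e-6, 0.08, 0.90, 3000
    r = np.empty(n)
    var = omega_true / (1 - alpha_true - beta_true)          # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(var) * rng.standard_normal()
        var = omega_true + alpha_true * r[t] ** 2 + beta_true * var

    def garch_nll(params, r):
        omega, alpha, beta = params
        if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
            return np.inf                                    # reject non-stationary parameters
        var = np.empty_like(r)
        var[0] = r.var()
        for t in range(1, len(r)):
            var[t] = omega + alpha * r[t - 1] ** 2 + beta * var[t - 1]
        return 0.5 * np.sum(np.log(2 * np.pi * var) + r**2 / var)

    res = optimize.minimize(garch_nll, x0=[1e-6, 0.05, 0.85], args=(r,), method="Nelder-Mead")
    omega, alpha, beta = res.x
    print(f"fitted: omega={omega:.2e}, alpha={alpha:.3f}, beta={beta:.3f} (true 2e-06, 0.08, 0.90)")
    ```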

  19. Spike Train Auto-Structure Impacts Post-Synaptic Firing and Timing-Based Plasticity

    PubMed Central

    Scheller, Bertram; Castellano, Marta; Vicente, Raul; Pipa, Gordon

    2011-01-01

    Cortical neurons are typically driven by several thousand synapses. The precise spatiotemporal pattern formed by these inputs can modulate the response of a post-synaptic cell. In this work, we explore how the temporal structure of pre-synaptic inhibitory and excitatory inputs impact the post-synaptic firing of a conductance-based integrate and fire neuron. Both the excitatory and inhibitory input was modeled by renewal gamma processes with varying shape factors for modeling regular and temporally random Poisson activity. We demonstrate that the temporal structure of mutually independent inputs affects the post-synaptic firing, while the strength of the effect depends on the firing rates of both the excitatory and inhibitory inputs. In a second step, we explore the effect of temporal structure of mutually independent inputs on a simple version of Hebbian learning, i.e., hard bound spike-timing-dependent plasticity. We explore both the equilibrium weight distribution and the speed of the transient weight dynamics for different mutually independent gamma processes. We find that both the equilibrium distribution of the synaptic weights and the speed of synaptic changes are modulated by the temporal structure of the input. Finally, we highlight that the sensitivity of both the post-synaptic firing as well as the spike-timing-dependent plasticity on the auto-structure of the input of a neuron could be used to modulate the learning rate of synaptic modification. PMID:22203800
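
    A small sketch of the input model used above: spike trains generated as gamma renewal processes with a fixed mean rate and varying shape factor (shape 1 reproduces Poisson firing, larger shapes give more regular trains); rates and shapes are illustrative:

    ```python
    import numpy as np

    # Gamma renewal spike trains with a fixed mean rate and varying shape factor.
    rng = np.random.default_rng(10)
    rate, duration = 20.0, 50.0          # spikes/s, seconds

    def gamma_spike_train(rate, shape, duration):
        # ISI ~ Gamma(shape, scale) with mean 1/rate  =>  scale = 1/(shape*rate)
        n_guess = int(2 * rate * duration)
        isi = rng.gamma(shape, 1.0 / (shape * rate), size=n_guess)
        spikes = np.cumsum(isi)
        return spikes[spikes < duration]

    for shape in (1.0, 4.0, 16.0):
        s = gamma_spike_train(rate, shape, duration)
        cv = np.std(np.diff(s)) / np.mean(np.diff(s))        # ISI coefficient of variation
        print(f"shape {shape:5.1f}: {len(s)} spikes, ISI CV = {cv:.2f}")
    ```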

  20. Putting atomic diffusion theory of magnetic ApBp stars to the test: evaluation of the predictions of time-dependent diffusion models

    NASA Astrophysics Data System (ADS)

    Kochukhov, O.; Ryabchikova, T. A.

    2018-02-01

    A series of recent theoretical atomic diffusion studies has addressed the challenging problem of predicting inhomogeneous vertical and horizontal chemical element distributions in the atmospheres of magnetic ApBp stars. Here we critically assess the most sophisticated of such diffusion models - based on a time-dependent treatment of the atomic diffusion in a magnetized stellar atmosphere - by direct comparison with observations as well as by testing the widely used surface mapping tools with the spectral line profiles predicted by this theory. We show that the mean abundances of Fe and Cr are grossly underestimated by the time-dependent theoretical diffusion model, with discrepancies reaching a factor of 1000 for Cr. We also demonstrate that Doppler imaging inversion codes, based either on modelling of individual metal lines or line-averaged profiles simulated according to the theoretical three-dimensional abundance distribution, are able to reconstruct correct horizontal chemical spot maps despite ignoring the vertical abundance variation. These numerical experiments justify a direct comparison of the empirical two-dimensional Doppler maps with theoretical diffusion calculations. This comparison is generally unfavourable for the current diffusion theory, as very few chemical elements are observed to form overabundance rings in the horizontal field regions as predicted by the theory and there are numerous examples of element accumulations in the vicinity of radial field zones, which cannot be explained by diffusion calculations.

  1. History-dependent friction and slow slip from time-dependent microscopic junction laws studied in a statistical framework

    NASA Astrophysics Data System (ADS)

    Thøgersen, Kjetil; Trømborg, Jørgen Kjoshagen; Sveinsson, Henrik Andersen; Malthe-Sørenssen, Anders; Scheibert, Julien

    2014-05-01

    To study how macroscopic friction phenomena originate from microscopic junction laws, we introduce a general statistical framework describing the collective behavior of a large number of individual microjunctions forming a macroscopic frictional interface. Each microjunction can switch in time between two states: a pinned state characterized by a displacement-dependent force and a slipping state characterized by a time-dependent force. Instead of tracking each microjunction individually, the state of the interface is described by two coupled distributions for (i) the stretching of pinned junctions and (ii) the time spent in the slipping state. This framework allows for a whole family of microjunction behavior laws, and we show how it represents an overarching structure for many existing models found in the friction literature. We then use this framework to pinpoint the effects of the time scale that controls the duration of the slipping state. First, we show that the model reproduces a series of friction phenomena already observed experimentally. The macroscopic steady-state friction force is velocity dependent, either monotonic (strengthening or weakening) or nonmonotonic (weakening-strengthening), depending on the microscopic behavior of individual junctions. In addition, slow slip, which has been reported in a wide variety of systems, spontaneously occurs in the model if the friction contribution from junctions in the slipping state is time weakening. Next, we show that the model predicts a nontrivial history dependence of the macroscopic static friction force. In particular, the static friction coefficient at the onset of sliding is shown to increase with increasing deceleration during the final phases of the preceding sliding event. We suggest that this form of history dependence of static friction should be investigated in experiments, and we provide the acceleration range in which this effect is expected to be experimentally observable.

  2. History-dependent friction and slow slip from time-dependent microscopic junction laws studied in a statistical framework.

    PubMed

    Thøgersen, Kjetil; Trømborg, Jørgen Kjoshagen; Sveinsson, Henrik Andersen; Malthe-Sørenssen, Anders; Scheibert, Julien

    2014-05-01

    To study how macroscopic friction phenomena originate from microscopic junction laws, we introduce a general statistical framework describing the collective behavior of a large number of individual microjunctions forming a macroscopic frictional interface. Each microjunction can switch in time between two states: a pinned state characterized by a displacement-dependent force and a slipping state characterized by a time-dependent force. Instead of tracking each microjunction individually, the state of the interface is described by two coupled distributions for (i) the stretching of pinned junctions and (ii) the time spent in the slipping state. This framework allows for a whole family of microjunction behavior laws, and we show how it represents an overarching structure for many existing models found in the friction literature. We then use this framework to pinpoint the effects of the time scale that controls the duration of the slipping state. First, we show that the model reproduces a series of friction phenomena already observed experimentally. The macroscopic steady-state friction force is velocity dependent, either monotonic (strengthening or weakening) or nonmonotonic (weakening-strengthening), depending on the microscopic behavior of individual junctions. In addition, slow slip, which has been reported in a wide variety of systems, spontaneously occurs in the model if the friction contribution from junctions in the slipping state is time weakening. Next, we show that the model predicts a nontrivial history dependence of the macroscopic static friction force. In particular, the static friction coefficient at the onset of sliding is shown to increase with increasing deceleration during the final phases of the preceding sliding event. We suggest that this form of history dependence of static friction should be investigated in experiments, and we provide the acceleration range in which this effect is expected to be experimentally observable.
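    A toy numerical sketch of the two-state picture described in the two records above (hypothetical parameters and junction laws, not the authors' implementation): each junction is either pinned, carrying a spring force proportional to its stretch, or slipping, carrying a time-weakening force for a fixed duration before re-pinning; summing over many junctions gives a macroscopic friction force trace.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, v, dt = 10_000, 1.0, 1.0, 1e-3           # junctions, stiffness, drive speed, time step
s_max = 1.0 + 0.1 * rng.standard_normal(N)     # disorder in pinning strength (illustrative)
t_slip = 0.5                                   # duration of the slipping state

def f_slip(ts):
    """Time-weakening force carried by a junction that has been slipping for time ts."""
    return 0.5 * np.exp(-ts / 0.2)

stretch = np.zeros(N)           # state (i): stretching of pinned junctions
slip_clock = np.full(N, -1.0)   # state (ii): time spent slipping (-1 means pinned)

force_trace = []
for step in range(20_000):
    pinned = slip_clock < 0
    stretch[pinned] += v * dt                  # pinned junctions are stretched by the drive
    breaking = pinned & (stretch > s_max)
    slip_clock[breaking] = 0.0                 # switch to the slipping state
    slipping = slip_clock >= 0
    slip_clock[slipping] += dt
    repin = slipping & (slip_clock > t_slip)
    slip_clock[repin] = -1.0                   # re-pin with zero stretch
    stretch[repin] = 0.0
    macro_force = (k * stretch[slip_clock < 0].sum()
                   + f_slip(slip_clock[slip_clock >= 0]).sum())
    force_trace.append(macro_force / N)

print("steady-state friction force per junction:", np.mean(force_trace[-5000:]))
```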

  3. Novel algorithm and MATLAB-based program for automated power law analysis of single particle, time-dependent mean-square displacement

    NASA Astrophysics Data System (ADS)

    Umansky, Moti; Weihs, Daphne

    2012-08-01

    In many physical and biophysical studies, single-particle tracking is utilized to reveal interactions, diffusion coefficients, active modes of driving motion, dynamic local structure, micromechanics, and microrheology. The basic analysis applied to those data is to determine the time-dependent mean-square displacement (MSD) of particle trajectories and perform time- and ensemble-averaging of similar motions. The motion of particles typically exhibits time-dependent power-law scaling, and only trajectories with qualitatively and quantitatively comparable MSD should be ensemble-averaged. Ensemble averaging trajectories that arise from different mechanisms, e.g., actively driven and diffusive, is incorrect and can result in inaccurate correlations between structure, mechanics, and activity. We have developed an algorithm to automatically and accurately determine power-law scaling of experimentally measured single-particle MSD. Trajectories can then be categorized and grouped according to user-defined cutoffs of time, amplitudes, scaling exponent values, or combinations. Power-law fits are then provided for each trajectory alongside categorized groups of trajectories, histograms of power laws, and the ensemble-averaged MSD of each group. The codes are designed to be easily incorporated into existing user codes. We expect that this algorithm and program will be invaluable to anyone performing single-particle tracking, be it in physical or biophysical systems. Catalogue identifier: AEMD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 25 892 No. of bytes in distributed program, including test data, etc.: 5 572 780 Distribution format: tar.gz Programming language: MATLAB (MathWorks Inc.) version 7.11 (2010b) or higher, program should also be backwards compatible. The Symbolic Math Toolbox (5.5) is required. The Curve Fitting Toolbox (3.0) is recommended. Computer: Tested on Windows only, yet should work on any computer running MATLAB. In Windows 7, the program should be run as administrator; if the user is not the administrator, the program may not be able to save outputs and temporary outputs to all locations. Operating system: Any supporting MATLAB (MathWorks Inc.) v7.11 / 2010b or higher. Supplementary material: Sample output files (approx. 30 MBytes) are available. Classification: 12 External routines: Several MATLAB subfunctions (m-files), freely available on the web, were used as part of, and included in, this code: count, NaN suite, parseArgs, roundsd, subaxis, wcov, wmean, and the executable pdfTK.exe. Nature of problem: In many physical and biophysical areas employing single-particle tracking, having the time-dependent power-laws governing the time-averaged mean-square displacement (MSD) of a single particle is crucial. Those power laws determine the mode of motion and hint at the underlying mechanisms driving motion. Accurate determination of the power laws that describe each trajectory will allow categorization into groups for further analysis of single trajectories or ensemble analysis, e.g. ensemble- and time-averaged MSD. Solution method: The algorithm in the provided program automatically analyzes and fits time-dependent power laws to single-particle trajectories, then groups particles according to user-defined cutoffs.
It accepts time-dependent trajectories of several particles; each trajectory is run through the program, its time-averaged MSD is calculated, and power laws are determined in regions where the MSD is linear on a log-log scale. Our algorithm searches for high-curvature points in experimental data, here the time-dependent MSD. Those serve as anchor points for determining the ranges of the power-law fits. Power-law scaling is then accurately determined and error estimations of the parameters and quality of fit are provided. After all single-trajectory time-averaged MSDs are fit, we obtain cutoffs from the user to categorize and segment the power laws into groups; cutoffs are either in the exponents of the power laws, the time of appearance of the fits, or both together. The trajectories are sorted according to the cutoffs and the time- and ensemble-averaged MSD of each group is provided, with histograms of the distributions of the exponents in each group. The program then allows the user to generate new trajectory files with trajectories segmented according to the determined groups, for any further required analysis. Additional comments: A README file giving the names and a brief description of all the files that make up the package, together with clear instructions on the installation and execution of the program, is included in the distribution package. Running time: On an i5 Windows 7 machine with 4 GB RAM the automated parts of the run (excluding data loading and user input) take less than 45 minutes to analyze and save all stages for an 844-trajectory file, including optional PDF save. Trajectory length did not affect run time (tested up to 3600 frames/trajectory), which was on average 3.2±0.4 seconds per trajectory.
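    A minimal sketch of the basic quantity the program is built around, the time-averaged MSD of a single trajectory and its log-log power-law slope (a synthetic 2D random walk is used as input; the curvature-based segmentation of the published MATLAB code is not reproduced here):

```python
import numpy as np

def time_averaged_msd(traj, max_lag):
    """Time-averaged MSD of a single trajectory, traj shaped (n_frames, n_dims)."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(2)
traj = np.cumsum(rng.standard_normal((2000, 2)), axis=0)   # 2D random walk
lags = np.arange(1, 201)
msd = time_averaged_msd(traj, 200)
alpha, log_prefactor = np.polyfit(np.log(lags), np.log(msd), 1)
print(f"fitted exponent alpha ~ {alpha:.2f} (diffusive motion gives alpha ~ 1)")
```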

  4. Microbenthic distribution of Proterozoic tidal flats: environmental and taphonomic considerations

    NASA Technical Reports Server (NTRS)

    Kah, L. C.; Knoll, A. H.

    1996-01-01

    Silicified carbonates of the late Mesoproterozoic to early Neoproterozoic Society Cliffs Formation, Baffin Island, contain distinctive microfabrics and microbenthic assemblages whose paleo-environmental distribution within the formation parallels the distribution of these elements through Proterozoic time. In the Society Cliffs Formation, restricted carbonates--including microdigitate stromatolites, laminated tufa, and tufted microbial mats--consist predominantly of synsedimentary cements; these facies and the cyanobacterial fossils they contain are common in Paleoproterozoic successions but rare in Neoproterozoic and younger rocks. Less restricted tidal-flat facies in the formation are composed of laminated microbialites dominated by micritic carbonate lithified early, yet demonstrably after compaction; these strata contain cyanobacteria that are characteristic in Neoproterozoic rocks. Within the formation, the facies-dependent distribution of microbial populations reflects both the style and timing of carbonate deposition because of the strong substrate specificity of benthic cyanobacteria. A reasonable conclusion is that secular changes in microbenthic assemblages through Proterozoic time reflect a decrease in the overall representation of rapidly lithified carbonate substrates in younger peritidal environments, as well as concomitant changes in the taphonomic window of silicification through which early life is observed.

  5. Distribution in energies and acceleration times in DSA, and their effect on the cut-off

    NASA Astrophysics Data System (ADS)

    Brooks, A.; Protheroe, R. J.

    2001-08-01

    We have conducted Monte Carlo simulations of diffusive shock acceleration (DSA) to determine the distribution of times since injection taken to reach energy E > E0. This distribution of acceleration times for the case of momentum-dependent diffusion is compared with that given by Drury and Forman (1983) based on extrapolation of the exact result (Toptygin 1980) for the case of the diffusion coefficient being independent of momentum. As a result of this distribution we find, as suggested by Drury et al. (1999), that Monte Carlo simulations result in smoother cut-offs and pile-ups in spectra of accelerated particles than expected from simple "box model" treatments of shock acceleration (e.g., Protheroe and Stanev 1999, Drury et al. 1999). This is particularly so for the case of synchrotron pile-ups, which we find are replaced by a small bump at an energy about a factor of 2 below the expected cut-off, followed by a smooth cut-off with particles extending to energies well beyond the expected cut-off energy.
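    A toy Monte Carlo in the same spirit (illustrative numbers, not the simulation of the abstract): each shock-crossing cycle multiplies a particle's energy by a fixed factor, takes a random duration, and ends with a fixed escape probability, which yields both a power-law spectrum and a broad distribution of acceleration times to reach a threshold energy E0.

```python
import numpy as np

rng = np.random.default_rng(3)
dE, p_esc, E0, n_particles = 0.1, 0.1, 10.0, 50_000   # illustrative parameters

times_to_E0, final_energies = [], []
for _ in range(n_particles):
    E, t, t_at_E0 = 1.0, 0.0, None
    while rng.random() > p_esc:          # particle stays for another acceleration cycle
        E *= 1.0 + dE                    # fractional energy gain per cycle
        t += rng.exponential(1.0)        # random cycle duration (arbitrary units)
        if t_at_E0 is None and E >= E0:
            t_at_E0 = t                  # "acceleration time" to reach E0
    final_energies.append(E)
    if t_at_E0 is not None:
        times_to_E0.append(t_at_E0)

final_energies = np.array(final_energies)
E_grid = np.logspace(0.1, 1.8, 15)
n_above = np.array([(final_energies > E).sum() for E in E_grid])
mask = n_above > 50
gamma = -np.polyfit(np.log(E_grid[mask]), np.log(n_above[mask]), 1)[0]
print(f"integral spectral index ~ {gamma:.2f}; time to reach E0: "
      f"mean {np.mean(times_to_E0):.1f}, spread {np.std(times_to_E0):.1f}")
```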

  6. Modeling and Studying the Effect of Texture and Elastic Anisotropy of Copper Microstructure in Nanoscale Interconnects on Reliability in Integrated Circuits

    NASA Astrophysics Data System (ADS)

    Basavalingappa, Adarsh

    Copper interconnects are typically polycrystalline and follow a lognormal grain size distribution. Polycrystalline copper interconnect microstructures with a lognormal grain size distribution were obtained with a Voronoi tessellation approach. The interconnect structures thus obtained were used to study grain growth mechanisms, grain boundary scattering, scattering-dependent resistance of interconnects, stress evolution, vacancy migration, reliability lifetimes, the impact of orientation-dependent anisotropy on various mechanisms, etc. In this work, the microstructures were used to study the impact of microstructure and elastic anisotropy of copper on thermal and electromigration induced failure. A test structure with bulk copper moduli values was modeled for a comparative study with the test structures with textured microstructure and elastic anisotropy. When the modeled test structure was subjected to thermal stress by ramping the temperature down from 400 °C to 100 °C, a significant variation in normal stresses and pressure was observed at the grain boundaries. This variation in normal stresses and hydrostatic stresses at the grain boundaries was found to be dependent on the orientation, dimensions, surroundings, and location of the grains. This may introduce new weak points within the metal line where normal stresses can be very high depending on the orientation of the grains, leading to delamination and to accumulation sites for vacancies. Further, the hydrostatic stress gradients act as a driving force for vacancy migration. The normal stresses can exceed certain grain-orientation-dependent critical threshold values and induce delamination at the copper and cap material interface, thereby leading to void nucleation and growth. Modeled test structures were subjected to a series of copper depositions at 250 °C followed by copper etch at 25 °C to obtain initial stress conditions. Then the modeled test structures were subjected to 100,000 hours (about 11.4 years) of simulated thermal stress at an elevated temperature of 150 °C. Vacancy migration due to concentration gradients, thermal gradients, and mechanical stress gradients was considered under the applied thermal stress. As a result, relatively high concentrations of vacancies were observed in the test structure due to a driving force caused by the pressure gradients resulting from the elastic anisotropy of copper. The grain growth mechanism was not considered in these simulations. Studies with a two-grain analysis demonstrated that the stress gradients developed will be severe when (100) grains are adjacent to (111) grains, therefore making them the weak points for potential reliability failures. Ilan Blech discovered that electromigration occurs above a critical product of the current density and metal length, commonly referred to as the Blech condition. Electromigration stress simulations in this work were carried out by subjecting test structures to scaled current densities to overcome the Blech condition of (jL)crit for the small dimensions of the test structure and the low-temperature stress condition used. Vacancy migration under the electromigration stress conditions was considered along with the vacancy migration induced stress evolution. A simple void growth model was used which assumes voids start to form when vacancies reach a critical level. An increase of vacancies in a localized region increases the resistance of the metal line.
Considering a 10% increase in resistance as a failure criterion, the distributions of failure times were obtained for given electromigration stress conditions. Bimodal/multimodal failure distributions were obtained as a result. The sigma values were slightly lower than the ones commonly observed from experiments. The anisotropy of the elastic moduli of copper leads to the development of significantly different stress values which are dependent on the orientation of the grains. This results in some grains having higher normal stress than others. This grain-orientation-dependent normal stress can reach the critical stress necessary to induce delamination at the copper and cap interface. The time taken to reach the critical stress was considered as the time to fail, and distributions of failure times were obtained for structures with different grain orientations in the microstructure for different critical stress values. The sigma values of the failure distributions thus obtained for different constant critical stress values had a strong dependence on the critical stress. It is therefore critical to use the appropriate critical stress value for the delamination of the copper and cap interface. The critical stress necessary to overcome the local adhesion of the copper and the cap material interface is dependent on the grain orientation of the copper. Simulations were carried out by considering grain-orientation-dependent critical normal stress values as failure criteria. The sigma values thus obtained with the selected critical stress values were comparable to sigma values commonly observed from experiments.

  7. Fraction number of trapped atoms and velocity distribution function in sub-recoil laser cooling scheme

    NASA Astrophysics Data System (ADS)

    Alekseev, V. A.; Krylova, D. D.

    1996-02-01

    The analytical investigation of Bloch equations is used to describe the main features of the 1D velocity selective coherent population trapping cooling scheme. For the initial stage of cooling the fraction of cooled atoms is derived in the case of a Gaussian initial velocity distribution. At very long times of interaction the fraction of cooled atoms and the velocity distribution function are described by simple analytical formulae and do not depend on the initial distribution. These results are in good agreement with those of Bardou, Bouchaud, Emile, Aspect and Cohen-Tannoudji based on statistical analysis in terms of Levy flights and with Monte-Carlo simulations of the process.

  8. Evidence for criticality in financial data

    NASA Astrophysics Data System (ADS)

    Ruiz, G.; de Marcos, A. F.

    2018-01-01

    We provide evidence that cumulative distributions of absolute normalized returns for the 100 American companies with the highest market capitalization uncover a critical behavior for different time scales Δt. Such cumulative distributions, in accordance with a variety of complex - and financial - systems, can be modeled by the cumulative distribution functions of q-Gaussians, the distribution function that, in the context of nonextensive statistical mechanics, maximizes a non-Boltzmannian entropy. These q-Gaussians are characterized by two parameters, namely (q, β), that are uniquely defined by Δt. From these dependencies, we find a monotonic relationship between q and β, which can be seen as evidence of criticality. We numerically determine the various exponents which characterize this criticality.
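    A minimal fitting sketch with synthetic heavy-tailed data (Student-t returns, which are a special case of q-Gaussians); the functional form below is the standard q-Gaussian density, and all numbers are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def q_gaussian(x, q, beta, A):
    """q-Gaussian density (q > 1 branch): A * [1 + (q-1) beta x^2]^(-1/(q-1))."""
    return A * (1.0 + (q - 1.0) * beta * x ** 2) ** (-1.0 / (q - 1.0))

rng = np.random.default_rng(4)
returns = rng.standard_t(df=4, size=100_000)   # synthetic heavy-tailed "returns"
returns /= returns.std()                        # normalize

density, edges = np.histogram(returns, bins=200, range=(-10, 10), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
popt, _ = curve_fit(q_gaussian, centers, density, p0=(1.4, 1.0, 0.4),
                    bounds=((1.01, 1e-3, 1e-3), (3.0, 10.0, 10.0)))
print("fitted (q, beta):", popt[0], popt[1])    # t with 4 dof corresponds to q = 1.4
```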

  9. TIME-DEPENDENT TURBULENT HEATING OF OPEN FLUX TUBES IN THE CHROMOSPHERE, CORONA, AND SOLAR WIND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woolsey, L. N.; Cranmer, S. R., E-mail: lwoolsey@cfa.harvard.edu

    We investigate several key questions of plasma heating in open-field regions of the corona that connect to the solar wind. We present results for a model of Alfvén-wave-driven turbulence for three typical open magnetic field structures: a polar coronal hole, an open flux tube neighboring an equatorial streamer, and an open flux tube near a strong-field active region. We compare time-steady, one-dimensional turbulent heating models against fully time-dependent three-dimensional reduced-magnetohydrodynamic modeling with the BRAID code. We find that the time-steady results agree well with time-averaged results from BRAID. The time dependence allows us to investigate the variability of the magnetic fluctuations and of the heating in the corona. The high-frequency tail of the power spectrum of fluctuations forms a power law whose exponent varies with height, and we discuss the possible physical explanation for this behavior. The variability in the heating rate is bursty and nanoflare-like in nature, and we analyze the amount of energy lost via dissipative heating in transient events throughout the simulation. The average energy in these events is 10^21.91 erg, within the “picoflare” range, and many events reach classical “nanoflare” energies. We also estimated the multithermal distribution of temperatures that would result from the heating-rate variability, and found good agreement with observed widths of coronal differential emission measure distributions. The results of the modeling presented in this paper provide compelling evidence that turbulent heating in the solar atmosphere by Alfvén waves accelerates the solar wind in open flux tubes.

  10. Probabilistic seismic hazard in the San Francisco Bay area based on a simplified viscoelastic cycle model of fault interactions

    USGS Publications Warehouse

    Pollitz, F.F.; Schwartz, D.P.

    2008-01-01

    We construct a viscoelastic cycle model of plate boundary deformation that includes the effect of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, time of last earthquake (for prehistoric ruptures), and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are the highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.

  11. Modeling the glass transition of amorphous networks for shape-memory behavior

    NASA Astrophysics Data System (ADS)

    Xiao, Rui; Choi, Jinwoo; Lakhera, Nishant; Yakacki, Christopher M.; Frick, Carl P.; Nguyen, Thao D.

    2013-07-01

    In this paper, a thermomechanical constitutive model was developed for the time-dependent behaviors of the glass transition of amorphous networks. The model used multiple discrete relaxation processes to describe the distribution of relaxation times for stress relaxation, structural relaxation, and stress-activated viscous flow. A non-equilibrium thermodynamic framework based on the fictive temperature was introduced to demonstrate the thermodynamic consistency of the constitutive theory. Experimental and theoretical methods were developed to determine the parameters describing the distribution of stress and structural relaxation times and the dependence of the relaxation times on temperature, structure, and driving stress. The model was applied to study the effects of deformation temperatures and physical aging on the shape-memory behavior of amorphous networks. The model was able to reproduce important features of the partially constrained recovery response observed in experiments. Specifically, the model demonstrated a strain-recovery overshoot for cases programmed below Tg and subjected to a constant mechanical load. This phenomenon was not observed for materials programmed above Tg. Physical aging, in which the material was annealed for an extended period of time below Tg, shifted the activation of strain recovery to higher temperatures and increased significantly the initial recovery rate. For fixed-strain recovery, the model showed a larger overshoot in the stress response for cases programmed below Tg, which was consistent with previous experimental observations. Altogether, this work demonstrates how an understanding of the time-dependent behaviors of the glass transition can be used to tailor the temperature and deformation history of the shape-memory programming process to achieve more complex shape recovery pathways, faster recovery responses, and larger activation stresses.
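    A minimal sketch of the "multiple discrete relaxation processes" idea above (a Prony series with a WLF-type temperature shift; constants are hypothetical and the fictive-temperature structural relaxation of the full model is omitted):

```python
import numpy as np

moduli = np.array([100.0, 60.0, 30.0, 10.0])      # MPa, discrete relaxation spectrum
taus_ref = np.array([1.0, 10.0, 100.0, 1000.0])   # s, at the reference temperature

def shift_factor(T, T_ref=70.0, C1=17.4, C2=51.6):
    """WLF-type shift factor (hypothetical constants); a_T < 1 above T_ref."""
    return 10.0 ** (-C1 * (T - T_ref) / (C2 + T - T_ref))

def relaxation_modulus(t, T, E_inf=5.0):
    """Stress relaxation modulus as a Prony series with temperature-shifted times."""
    taus = taus_ref * shift_factor(T)
    return E_inf + np.sum(moduli * np.exp(-t[:, None] / taus[None, :]), axis=1)

t = np.logspace(-1, 4, 6)              # s
for T in (60.0, 70.0, 80.0):           # deg C, around a nominal glass transition
    print(f"T={T:.0f} C  E(t)={np.round(relaxation_modulus(t, T), 1)} MPa")
```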

  12. Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data

    USGS Publications Warehouse

    Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.

    2015-01-01

    We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
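    The "simple analytical solution" referred to above is, in essence, the closed-form posterior of a linear-Gaussian model. A minimal sketch with a random placeholder Green's-function matrix (not a real fault/station geometry, and without the fault-geometry search of the actual method):

```python
import numpy as np

# Static offsets d = G m + noise, Gaussian prior on slip m and Gaussian data errors:
# the posterior over m is Gaussian with a closed-form mean and covariance.
rng = np.random.default_rng(5)
n_data, n_patches = 60, 20
G = rng.standard_normal((n_data, n_patches))          # placeholder Green's functions
m_true = np.maximum(rng.standard_normal(n_patches), 0.0)
d = G @ m_true + 0.05 * rng.standard_normal(n_data)   # noisy static offsets

C_d_inv = np.eye(n_data) / 0.05 ** 2                  # data precision
C_m_inv = np.eye(n_patches) / 1.0 ** 2                # prior precision on slip
post_cov = np.linalg.inv(G.T @ C_d_inv @ G + C_m_inv)
post_mean = post_cov @ G.T @ C_d_inv @ d
print("max posterior slip:", post_mean.max(), "+/-", np.sqrt(np.diag(post_cov)).max())
```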

  13. Fame and obsolescence: Disentangling growth and aging dynamics of patent citations.

    PubMed

    Higham, K W; Governale, M; Jaffe, A B; Zülicke, U

    2017-04-01

    We present an analysis of citations accrued over time by patents granted by the United States Patent and Trademark Office in 1998. In contrast to previous studies, a disaggregation by technology category is performed, and exogenously caused citation-number growth is controlled for. Our approach reveals an intrinsic citation rate that clearly separates into an aging function, which in the long run is exponentially time-dependent, and a completely time-independent preferential-attachment-type growth kernel. For the general case of such a separable citation rate, we obtain the time-dependent citation distribution analytically in a form that is valid for any functional form of its aging and growth parts. Good agreement between theory and long-time characteristics of patent-citation data establishes our work as a useful framework for addressing still open questions about knowledge-propagation dynamics, such as the observed excess of citations at short times.
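    A small simulation sketch of such a separable rate, lambda_i(t) = (k_i + k0) * f(t), combining a preferential-attachment growth kernel in the accrued citations k_i with a purely time-dependent aging function f(t) (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n_patents, n_steps, dt = 5_000, 400, 0.1
k0, tau = 1.0, 5.0                       # attachment offset and aging time scale

k = np.zeros(n_patents)                  # citations accrued by each patent
for step in range(n_steps):
    t = step * dt
    lam = (k + k0) * np.exp(-t / tau)    # separable rate: growth kernel x aging function
    k += rng.poisson(lam * dt)           # citations arriving in this time step

counts = np.bincount(k.astype(int))
print("citation distribution (first 15 bins):", counts[:15])
```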

  14. Evidence for history-dependence of influenza pandemic emergence

    NASA Astrophysics Data System (ADS)

    Hill, Edward M.; Tildesley, Michael J.; House, Thomas

    2017-03-01

    Influenza A viruses have caused a number of global pandemics, with considerable mortality in humans. Here, we analyse the time periods between influenza pandemics since 1700 under different assumptions to determine whether the emergence of new pandemic strains is a memoryless or history-dependent process. Bayesian model selection between exponential and gamma distributions for these time periods gives support to the hypothesis of history-dependence under eight out of nine sets of modelling assumptions. Using the fitted parameters to make predictions shows a high level of variability in the modelled number of pandemics from 2010 to 2110. The approach we take here relies on limited data, so is uncertain, but it provides cheap, safe and direct evidence relating to pandemic emergence, a field where indirect measurements are often made at great risk and cost.
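    As a rough stand-in for the full Bayesian model selection, the sketch below compares exponential (memoryless) and gamma (history-dependent) fits to a set of made-up inter-pandemic intervals via BIC; both the data and the criterion are simplifications of what the paper actually does.

```python
import numpy as np
from scipy import stats

intervals = np.array([29., 42., 11., 39., 10., 21., 41., 9.])   # illustrative years, not the record

# Exponential: one free parameter (scale); gamma: two (shape, scale), both fit by MLE
exp_scale = intervals.mean()
ll_exp = stats.expon.logpdf(intervals, scale=exp_scale).sum()
shape, loc, scale = stats.gamma.fit(intervals, floc=0)
ll_gam = stats.gamma.logpdf(intervals, shape, loc=0, scale=scale).sum()

n = len(intervals)
bic_exp = 1 * np.log(n) - 2 * ll_exp
bic_gam = 2 * np.log(n) - 2 * ll_gam
print(f"BIC exponential {bic_exp:.1f}  vs  gamma {bic_gam:.1f} (lower is preferred)")
```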

  15. Testing the mean for dependent business data.

    PubMed

    Liang, Jiajuan; Martin, Linda

    2008-01-01

    In business data analysis, it is well known that the comparison of several means is usually carried out by the F-test in analysis of variance under the assumption of independently collected data from all populations. This assumption, however, is likely to be violated in survey data collected from various questionnaires or in time-series data. As a result, it is problematic, and not strictly justifiable, to apply the traditional F-test directly to the comparison of dependent means. In this article, we develop a generalized F-test for comparing population means with dependent data. Simulation studies show that the proposed test has a simple approximate null distribution and feasible finite-sample properties. Applications of the proposed test in the analysis of survey data and time-series data are illustrated by two real datasets.

  16. Software for rapid time dependent ChIP-sequencing analysis (TDCA).

    PubMed

    Myschyshyn, Mike; Farren-Dai, Marco; Chuang, Tien-Jui; Vocadlo, David

    2017-11-25

    Chromatin immunoprecipitation followed by DNA sequencing (ChIP-seq) and associated methods are widely used to define the genome-wide distribution of chromatin-associated proteins, post-translational epigenetic marks, and modifications found on DNA bases. An area of emerging interest is to study time-dependent changes in the distribution of such proteins and marks by using serial ChIP-seq experiments performed in a time-resolved manner. Despite such time-resolved studies becoming increasingly common, software to facilitate analysis of such data in a robust automated manner is limited. We have designed software called Time-Dependent ChIP-Sequencing Analyser (TDCA), which is the first program to automate analysis of time-dependent ChIP-seq data by fitting to sigmoidal curves. We provide users with guidance for experimental design of TDCA for modeling of time course (TC) ChIP-seq data using two simulated data sets. Furthermore, we demonstrate that this fitting strategy is widely applicable by showing that automated analysis of three previously published TC data sets accurately recapitulates key findings reported in these studies. Using each of these data sets, we highlight how biologically relevant findings can be readily obtained by exploiting TDCA to yield intuitive parameters that describe behavior at either a single locus or sets of loci. TDCA enables customizable analysis of user-input aligned DNA sequencing data, coupled with graphical outputs in the form of publication-ready figures that describe behavior at either individual loci or sets of loci sharing common traits defined by the user. TDCA accepts sequencing data as standard binary alignment map (BAM) files and loci of interest in browser extensible data (BED) file format. TDCA accurately models the number of sequencing reads, or coverage, at loci from TC ChIP-seq studies or conceptually related TC sequencing experiments. TC experiments are reduced to intuitive parametric values that facilitate biologically relevant data analysis and the uncovering of variations in the time-dependent behavior of chromatin. TDCA automates the analysis of TC ChIP-seq experiments, permitting researchers to easily obtain raw and modeled data for specific loci or groups of loci with similar behavior while also enhancing consistency of data analysis of TC data within the genomics field.
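    A minimal sketch of the sigmoidal fitting idea at a single locus (synthetic coverage values and a plain logistic curve; this is in the spirit of TDCA, not its actual code):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, base, amp, t_half, k):
    """Logistic time-course: baseline + amplitude / (1 + exp(-k (t - t_half)))."""
    return base + amp / (1.0 + np.exp(-k * (t - t_half)))

rng = np.random.default_rng(7)
t = np.linspace(0, 12, 13)                                     # e.g. hourly time points
coverage = logistic(t, 20, 80, 5.0, 1.2) + 3 * rng.standard_normal(t.size)
popt, _ = curve_fit(logistic, t, coverage, p0=(10, 50, 6, 1))
print("fitted (baseline, amplitude, half-time, steepness):", np.round(popt, 2))
```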

  17. Size distribution and growth rate of crystal nuclei near critical undercooling in small volumes

    NASA Astrophysics Data System (ADS)

    Kožíšek, Z.; Demo, P.

    2017-11-01

    Kinetic equations are numerically solved within the standard nucleation model to determine the size distribution of nuclei in small volumes near critical undercooling. The critical undercooling, at which the first nuclei are detected within the system, depends on the droplet volume. The size distribution of nuclei reaches its stationary value after some time delay and decreases with nucleus size. Only a certain maximum nucleus size is reached in small volumes near critical undercooling. As a model system, we selected the recently studied nucleation in a Ni droplet [J. Bokeloh et al., Phys. Rev. Lett. 107 (2011) 145701] due to the available experimental and simulation data. However, using these data for sample masses from 23 μg up to 63 mg (corresponding to the experiments) leads to a size distribution of nuclei for which no critical nuclei are formed in the Ni droplet (the number of critical nuclei is < 1). If one takes into account the size dependence of the interfacial energy, the size distribution of nuclei increases to reasonable values. In lower volumes (V ≤ 10^-9 m^3) the nucleus size reaches some maximum extreme size, which quickly increases with undercooling. Supercritical clusters continue their growth only if the number of critical nuclei is sufficiently high.
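    The kinetic equations of the standard nucleation model are usually written in Becker-Döring form, dZ(n)/dt = j(n-1) - j(n) with flux j(n) = k+(n) Z(n) - k-(n+1) Z(n+1). A toy numerical integration with an illustrative free-energy barrier (not the Ni parameters of the paper):

```python
import numpy as np

n_max = 100
n = np.arange(1, n_max + 1)
dG = 1.5 * n ** (2.0 / 3.0) - 0.4 * n           # toy free energy: surface minus bulk terms
kp = np.ones(n_max)                              # attachment frequency
km = kp * np.exp(np.diff(dG, prepend=0.0))       # detachment rate from detailed balance

Z = np.zeros(n_max)
Z[0] = 1.0e6                                     # monomer pool, held fixed
dt, steps = 2e-3, 500_000
for _ in range(steps):
    j = kp * Z - np.append(km[1:] * Z[1:], 0.0)  # flux j(n); absorbing boundary at n_max
    Z[1:] += dt * (j[:-1] - j[1:])               # dZ(n)/dt = j(n-1) - j(n)
    Z[0] = 1.0e6                                 # replenish monomers each step
print("size distribution approaching stationarity (every 10th size):", Z[::10].round(2))
```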

  18. Signal optimization in urban transport: A totally asymmetric simple exclusion process with traffic lights.

    PubMed

    Arita, Chikashi; Foulaadvand, M Ebrahim; Santen, Ludger

    2017-03-01

    We consider the exclusion process on a ring with time-dependent defective bonds at which the hopping rate periodically switches between zero and one. This system models main roads in city traffic that intersect with perpendicular streets. We explore basic properties of the system, in particular the dependence of the vehicular flow on the signalization parameters as well as on the system size and the car density. We investigate various types of spatial distribution of the vehicular density and show the existence of a shock profile. We also measure the waiting time behind traffic lights and examine its relationship with the traffic flow.

  19. Signal optimization in urban transport: A totally asymmetric simple exclusion process with traffic lights

    NASA Astrophysics Data System (ADS)

    Arita, Chikashi; Foulaadvand, M. Ebrahim; Santen, Ludger

    2017-03-01

    We consider the exclusion process on a ring with time-dependent defective bonds at which the hopping rate periodically switches between zero and one. This system models main roads in city traffic that intersect with perpendicular streets. We explore basic properties of the system, in particular the dependence of the vehicular flow on the signalization parameters as well as on the system size and the car density. We investigate various types of spatial distribution of the vehicular density and show the existence of a shock profile. We also measure the waiting time behind traffic lights and examine its relationship with the traffic flow.
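    A minimal random-sequential-update sketch of the model described in the two records above (one defective bond switching between red and green; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
L, density, period, green_frac = 200, 0.3, 100.0, 0.5
site_occupied = np.zeros(L, dtype=bool)
site_occupied[rng.choice(L, int(density * L), replace=False)] = True
defective_bond = 0                        # bond from site 0 to site 1 carries the "traffic light"

hops, t, t_max = 0, 0.0, 2_000.0
while t < t_max:
    i = rng.integers(L)                   # random-sequential update
    j = (i + 1) % L
    green = (t % period) < green_frac * period
    rate_ok = green if i == defective_bond else True   # normal bonds always have rate one
    if site_occupied[i] and not site_occupied[j] and rate_ok:
        site_occupied[i], site_occupied[j] = False, True
        hops += 1
    t += 1.0 / L                          # L attempted moves = one unit of time
print("flow (hops per bond per unit time):", hops / (L * t_max))
```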

  20. Anomalous diffusion of poly(ethylene oxide) in agarose gels.

    PubMed

    Brenner, Tom; Matsukawa, Shingo

    2016-11-01

    We report on the effect of probe size and diffusion time on the diffusion of poly(ethylene oxide) in agarose gels. Time dependence of the diffusion coefficient, reflecting anomalous diffusion, was observed for poly(ethylene oxide) chains with hydrodynamic radii exceeding about 20 nm at an agarose concentration of 2%. The main conclusion is that the pore distribution includes pores that are only several nm across, in agreement with scattering reports in the literature. Interpretation of the dependence of the diffusion coefficient on probe size, based on a model of entangled rigid rods, yielded a rod length of 72 nm. Copyright © 2016. Published by Elsevier B.V.
