Sample records for time distribution function

  1. Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times

    NASA Astrophysics Data System (ADS)

    Fa, Kwok Sau

    2012-09-01

    In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power law and stretched exponential functions for the waiting time probability distribution function; this function can fit well the waiting time distribution for BUND futures traded at LIFFE in 1997.
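
As an illustration of the functional form only (the parameters mu, tau, beta and the integration grid below are arbitrary choices, not the paper's fitted values for the LIFFE data), a product of a power law and a stretched exponential can be checked numerically for normalizability as a waiting-time density:

```python
import math

def waiting_time_density(t, mu=0.5, tau=1.0, beta=0.7):
    # Product of a power law and a stretched exponential (unnormalized).
    return t ** (-mu) * math.exp(-((t / tau) ** beta))

def normalization(mu=0.5, tau=1.0, beta=0.7, t_max=200.0, n=100_000):
    # Crude midpoint rule on (0, t_max]; the t = 0 singularity is integrable
    # because mu < 1, so the sum converges as the grid is refined.
    dt = t_max / n
    return sum(waiting_time_density((i + 0.5) * dt, mu, tau, beta)
               for i in range(n)) * dt

# Normalizing constant for the illustrative parameter choice above.
C = 1.0 / normalization()
```

For beta = 1 and mu = 0 the density reduces to a pure exponential, which gives a closed-form check on the quadrature.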

  2. fixedTimeEvents: An R package for the distribution of distances between discrete events in fixed time

    NASA Astrophysics Data System (ADS)

    Liland, Kristian Hovde; Snipen, Lars

    When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
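
A minimal simulation sketch of the quantity studied here, the distances between consecutive successes in a fixed number of Bernoulli trials, compared against the geometric mean gap 1/p suggested by the large-sample approximation (illustrative code, not the package's API):

```python
import random

def success_distances(n_trials, p, rng):
    """Distances between consecutive successes in n_trials Bernoulli(p) trials."""
    positions = [i for i in range(n_trials) if rng.random() < p]
    return [b - a for a, b in zip(positions, positions[1:])]

rng = random.Random(42)
gaps = success_distances(100_000, 0.05, rng)
# For large n_trials the gaps are approximately geometric with mean 1/p.
mean_gap = sum(gaps) / len(gaps)
```

The exact distribution described in the article corrects this approximation when the number of trials is small.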

  3. Parton distribution functions from reduced Ioffe-time distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Jian-Hui; Chen, Jiunn-Wei; Monahan, Christopher

    2018-04-01

    We show that the correct way to extract parton distribution functions from the reduced Ioffe-time distribution, a ratio of the Ioffe-time distribution for a moving hadron and a hadron at rest, is through a factorization formula. This factorization exists because, at small distances, forming the ratio does not change the infrared behavior of the numerator, which is factorizable. We illustrate the effect of such a factorization by applying it to results in the literature.

  4. Real-time generation of the Wigner distribution of complex functions using phase conjugation in photorefractive materials.

    PubMed

    Sun, P C; Fainman, Y

    1990-09-01

An optical processor for real-time generation of the Wigner distribution of complex amplitude functions is introduced. The phase conjugation of the input signal is accomplished by a highly efficient self-pumped phase conjugator based on a 45-degree-cut barium titanate photorefractive crystal. Experimental results on the real-time generation of Wigner distribution slices for complex amplitude two-dimensional optical functions are presented and discussed.

  5. Function Allocation in a Robust Distributed Real-Time Environment

    DTIC Science & Technology

    1991-12-01

fundamental characteristic of a distributed system is its ability to map individual logical functions of an application program onto many physical nodes... how much of a node’s processor time is scheduled for function processing. IMC is the function-to-function communication required to facilitate...indicator of how much excess processor time a node has. The reconfiguration algorithms use these variables to determine the most appropriate node(s) to

  6. Differential memory in the earth's magnetotail

    NASA Technical Reports Server (NTRS)

    Burkhart, G. R.; Chen, J.

    1991-01-01

The process of 'differential memory' in the earth's magnetotail is studied in the framework of the modified Harris magnetotail geometry. It is verified that differential memory can generate non-Maxwellian features in the modified Harris field model. The time scales and the potentially observable distribution functions associated with the process of differential memory are investigated, and it is shown that non-Maxwellian distributions can evolve as a test particle response to distribution function boundary conditions in a Harris field magnetotail model. The non-Maxwellian features which arise from distribution function mapping have definite time scales associated with them, which are generally shorter than the earthward convection time scale but longer than the typical Alfven crossing time.

  7. Time-Frequency Distribution Analyses of Ku-Band Radar Doppler Echo Signals

    NASA Astrophysics Data System (ADS)

    Bujaković, Dimitrije; Andrić, Milenko; Bondžulić, Boban; Mitrović, Srđan; Simić, Slobodan

    2015-03-01

Real radar echo signals of a pedestrian, a vehicle and a group of helicopters are analyzed in order to maximize signal energy around the central Doppler frequency in the time-frequency plane. An optimization preserving this concentration is proposed, based on three well-known concentration measures, with various window functions and time-frequency distributions as optimization inputs. Experiments conducted on one analytic and three real signals show that, for all three criteria, the energy concentration depends significantly on the time-frequency distribution and window function used.

  8. Derivation of a Multiparameter Gamma Model for Analyzing the Residence-Time Distribution Function for Nonideal Flow Systems as an Alternative to the Advection-Dispersion Equation

    DOE PAGES

    Embry, Irucka; Roland, Victor; Agbaje, Oluropo; ...

    2013-01-01

A new residence-time distribution (RTD) function has been developed and applied to quantitative dye studies as an alternative to the traditional advection-dispersion equation (AdDE). The new method is based on a jointly combined four-parameter gamma probability density function (PDF). The gamma RTD function and its first and second moments are derived from the individual two-parameter gamma distributions of the randomly distributed variables, tracer travel distance and linear velocity, which are based on their relationship with time. The gamma RTD function was used on a steady-state, nonideal system modeled as a plug-flow reactor (PFR) in the laboratory to validate the effectiveness of the model. The normalized forms of the gamma RTD and the advection-dispersion equation RTD were compared with the normalized tracer RTD. The normalized gamma RTD had a lower mean-absolute deviation (MAD) (0.16) than the normalized form of the advection-dispersion equation (0.26) when compared to the normalized tracer RTD. The gamma RTD function is tied back to the actual physical site through its randomly distributed variables. The results validate the gamma RTD as a suitable alternative to the advection-dispersion equation for quantitative tracer studies of nonideal flow systems.
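
The model comparison in this record comes down to a mean absolute deviation (MAD) between normalized curves. A generic sketch, with a two-parameter gamma density standing in for the paper's four-parameter RTD and a synthetic "tracer" curve (all parameter values below are arbitrary):

```python
import math

def gamma_pdf(t, k, theta):
    # Two-parameter gamma density; a stand-in for the paper's 4-parameter RTD.
    if t <= 0:
        return 0.0
    return t ** (k - 1) * math.exp(-t / theta) / (math.gamma(k) * theta ** k)

def mean_absolute_deviation(model, data):
    return sum(abs(m - d) for m, d in zip(model, data)) / len(data)

times = [0.1 * i for i in range(1, 200)]
tracer = [gamma_pdf(t, 3.0, 1.0) for t in times]   # synthetic "observed" RTD
good = [gamma_pdf(t, 3.0, 1.0) for t in times]     # well-matched model
poor = [gamma_pdf(t, 1.5, 2.5) for t in times]     # mismatched model
mad_good = mean_absolute_deviation(good, tracer)
mad_poor = mean_absolute_deviation(poor, tracer)
```

A lower MAD against the tracer curve, as reported for the gamma RTD versus the AdDE RTD, indicates the better-matching model.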

  9. Lattice QCD exploration of parton pseudo-distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orginos, Kostas; Radyushkin, Anatoly; Karpie, Joseph

Here, we demonstrate a new method of extracting parton distributions from lattice calculations. The starting idea is to treat the generic equal-time matrix element $${\cal M}(Pz_3, z_3^2)$$ as a function of the Ioffe time $$\nu = Pz_3$$.

  10. Lattice QCD exploration of parton pseudo-distribution functions

    DOE PAGES

    Orginos, Kostas; Radyushkin, Anatoly; Karpie, Joseph; ...

    2017-11-08

Here, we demonstrate a new method of extracting parton distributions from lattice calculations. The starting idea is to treat the generic equal-time matrix element $${\cal M}(Pz_3, z_3^2)$$ as a function of the Ioffe time $$\nu = Pz_3$$.

  11. Application of a truncated normal failure distribution in reliability testing

    NASA Technical Reports Server (NTRS)

    Groves, C., Jr.

    1968-01-01

    Statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
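
A sketch of the idea, assuming a normal distribution truncated to non-negative times (the mean and standard deviation used below are arbitrary illustrations), where the reliability is R(t) = 1 - F(t):

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def truncated_normal_cdf(t, mu, sigma):
    """CDF of a normal(mu, sigma) truncated to t >= 0, used as time-to-failure."""
    if t < 0:
        return 0.0
    z0 = normal_cdf(0.0, mu, sigma)   # probability mass removed by truncation
    return (normal_cdf(t, mu, sigma) - z0) / (1.0 - z0)

def reliability(t, mu, sigma):
    # Probability that the unit survives beyond time t.
    return 1.0 - truncated_normal_cdf(t, mu, sigma)
```

Truncation removes the physically meaningless negative failure times, slightly shifting the reliability curve relative to the untruncated normal.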

  12. Unified halo-independent formalism from convex hulls for direct dark matter searches

    NASA Astrophysics Data System (ADS)

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2017-12-01

Using the Fenchel-Eggleston theorem for convex hulls (an extension of the Caratheodory theorem), we prove that any likelihood can be maximized by either (1) a dark matter speed distribution $F(v)$ in Earth's frame or (2) a Galactic velocity distribution $f_{gal}(\vec{u})$, consisting of a sum of delta functions. The former case applies only to time-averaged rate measurements, and the maximum number of delta functions is $(N-1)$, where $N$ is the total number of data entries. The second case applies to any harmonic expansion coefficient of the time-dependent rate, and the maximum number of terms is $N$. Using time-averaged rates, the aforementioned form of $F(v)$ results in a piecewise constant unmodulated halo function $\tilde\eta^0_{BF}(v_{min})$ (an integral of the speed distribution) with at most $(N-1)$ downward steps. The authors had previously proven this result for likelihoods comprised of at least one extended likelihood, and found the best-fit halo function to be unique. This uniqueness, however, cannot be guaranteed in the more general analysis applied to arbitrary likelihoods. Thus we introduce a method for determining whether there exists a unique best-fit halo function, and provide a procedure for constructing either a pointwise confidence band, if the best-fit halo function is unique, or a degeneracy band, if it is not. Using measurements of modulation amplitudes, the aforementioned form of $f_{gal}(\vec{u})$, which is a sum of Galactic streams, yields a periodic time-dependent halo function $\tilde\eta_{BF}(v_{min}, t)$ which at any fixed time is a piecewise constant function of $v_{min}$ with at most $N$ downward steps. In this case, we explain how to construct pointwise confidence and degeneracy bands from the time-averaged halo function. Finally, we show that requiring an isotropic Galactic velocity distribution leads to a Galactic speed distribution $F(u)$ that is once again a sum of delta functions, and produces a time-dependent $\tilde\eta_{BF}(v_{min}, t)$ (and a time-averaged $\tilde\eta^0_{BF}(v_{min})$) that is piecewise linear, differing significantly from best-fit halo functions obtained without the assumption of isotropy.

Self spectrum window method in Wigner-Ville distribution.

    PubMed

    Liu, Zhongguo; Liu, Changchun; Liu, Boqiang; Lv, Yangsheng; Lei, Yinsheng; Yu, Mengsun

    2005-01-01

Wigner-Ville distribution (WVD) is an important type of time-frequency analysis in biomedical signal processing. The cross-term interference in WVD has a disadvantageous influence on its application. In this research, the Self Spectrum Window (SSW) method was put forward to suppress the cross-term interference, based on the fact that the cross-terms and auto-WVD terms in the integral kernel function are orthogonal. With the SSW algorithm, a real auto-WVD function is used as a template to cross-correlate with the integral kernel function, and the Short Time Fourier Transform (STFT) spectrum of the signal is used as a window function to process the WVD in the time-frequency plane. The SSW method was confirmed by computer simulation with good analysis results: a satisfactory time-frequency distribution was obtained.

  14. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy.

    PubMed

    Shizgal, Bernie D

    2018-05-01

This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as empirical fitting functions that describe well the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].

  15. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy

    NASA Astrophysics Data System (ADS)

    Shizgal, Bernie D.

    2018-05-01

This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as empirical fitting functions that describe well the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].
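
The defining feature of the Kappa distribution, a power-law tail that is heavy relative to a Maxwellian, can be illustrated with a small numerical comparison (a 1D speed variable with arbitrary kappa and thermal speed, numerically normalized; this is a sketch, not the paper's Fokker-Planck solver):

```python
import math

def kappa_unnorm(v, kappa=3.0, w=1.0):
    # Unnormalized 1D Kappa form; approaches a Maxwellian as kappa -> infinity.
    return (1.0 + v * v / (kappa * w * w)) ** (-(kappa + 1.0))

def maxwellian_unnorm(v, w=1.0):
    return math.exp(-v * v / (w * w))

def normalize(f, v_max=50.0, n=100_000):
    # Numerical normalization on [0, v_max] by the midpoint rule.
    dv = v_max / n
    z = sum(f((i + 0.5) * dv) for i in range(n)) * dv
    return lambda v: f(v) / z

f_kappa = normalize(kappa_unnorm)
f_maxwell = normalize(maxwellian_unnorm)
```

At a few thermal speeds the Kappa density exceeds the Maxwellian by orders of magnitude, which is the suprathermal tail seen in satellite data.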

  16. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    NASA Astrophysics Data System (ADS)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for this distribution form and discussed how the constraints affect the distribution function. We conjecture that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, the distribution cannot be a pure power law but must carry an exponential cutoff, a feature that may have been ignored in previous studies.
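
The deduced form, a power law with an exponential cutoff, can be sketched numerically; the exponent, cutoff scale, and lower limit below are arbitrary illustrations. The check shows that the cutoff makes the mean entry interval converge even for a fitted exponent less than 1.0, which a pure power law would not allow:

```python
import math

def cutoff_density(t, gamma=0.8, t0=10.0):
    # Power law with exponential cutoff (unnormalized), defined for t > 0.
    return t ** (-gamma) * math.exp(-t / t0)

def moment(power, t_max, t_min=1.0, n=100_000):
    # Midpoint-rule integral of t^power * density over [t_min, t_max].
    dt = (t_max - t_min) / n
    total = 0.0
    for i in range(n):
        t = t_min + (i + 0.5) * dt
        total += t ** power * cutoff_density(t)
    return total * dt

def mean_interval(t_max):
    # Ratio of first to zeroth moment; converges as t_max grows
    # because the exponential cutoff suppresses the tail.
    return moment(1, t_max) / moment(0, t_max)
```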

  17. Probability Weighting Functions Derived from Hyperbolic Time Discounting: Psychophysical Models and Their Individual Level Testing.

    PubMed

    Takemura, Kazuhisa; Murakami, Hajime

    2016-01-01

A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to view probability weighting functions from the point of view of the waiting time for a decision maker. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)^(-1). Moreover, a probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To assess the fit of each model, a psychological experiment was conducted to measure the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of the individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
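
The expected-value weighting function quoted above, w(p) = (1 - k log p)^(-1), is straightforward to state in code; k = 1 below is an arbitrary illustration, not an estimate from the experiment:

```python
import math

def probability_weight(p, k=1.0):
    """w(p) = 1 / (1 - k * log p), the expected-value weighting function."""
    if not 0.0 < p <= 1.0:
        raise ValueError("p must be in (0, 1]")
    return 1.0 / (1.0 - k * math.log(p))
```

The function is increasing in p, equals 1 at p = 1, and tends to 0 as p approaches 0, as a weighting function must.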

  18. The distribution of genome shared identical by descent for a pair of full sibs by means of the continuous time Markov chain

    NASA Astrophysics Data System (ADS)

    Julie, Hongki; Pasaribu, Udjianna S.; Pancoro, Adi

    2015-12-01

This paper applies a Markov chain to the genome shared identical by descent (IBD) by two individuals in a full-sibs model. The full-sibs model is a continuous-time Markov chain with three states. In the full-sibs model, we seek the cumulative distribution function of the number of subsegments which have 2 IBD haplotypes within a chromosome segment of length t Morgans, and the cumulative distribution function of the number of subsegments which have at least 1 IBD haplotype within a chromosome segment of length t Morgans. These cumulative distribution functions are developed by means of the moment generating function.
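
A generic three-state continuous-time Markov chain can be simulated with the Gillespie algorithm; the generator matrix below is a hypothetical illustration, not the genetic model's actual rates:

```python
import random

# Hypothetical generator matrix Q for a 3-state CTMC (each row sums to zero;
# off-diagonal entries are transition rates, diagonals are total exit rates).
Q = [[-2.0, 1.0, 1.0],
     [0.5, -1.0, 0.5],
     [1.0, 1.0, -2.0]]

def simulate_ctmc(q, state, t_end, rng):
    """Gillespie simulation; returns the fraction of time spent in each state."""
    time_in = [0.0] * len(q)
    t = 0.0
    while t < t_end:
        rate = -q[state][state]
        dwell = rng.expovariate(rate)          # exponential holding time
        time_in[state] += min(dwell, t_end - t)
        t += dwell
        if t < t_end:
            # Jump to state j with probability q[state][j] / rate.
            r = rng.random() * rate
            for j in range(len(q)):
                if j != state:
                    r -= q[state][j]
                    if r <= 0:
                        state = j
                        break
    return [x / t_end for x in time_in]

occ = simulate_ctmc(Q, 0, 50_000.0, random.Random(7))
```

Over a long run the occupation fractions approach the chain's stationary distribution (0.25, 0.5, 0.25 for this Q).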

  19. Studies of the Intrinsic Complexities of Magnetotail Ion Distributions: Theory and Observations

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, Maha

    1998-01-01

    This year we have studied the relationship between the structure seen in measured distribution functions and the detailed magnetospheric configuration. Results from our recent studies using time-dependent large-scale kinetic (LSK) calculations are used to infer the sources of the ions in the velocity distribution functions measured by a single spacecraft (Geotail). Our results strongly indicate that the different ion sources and acceleration mechanisms producing a measured distribution function can explain this structure. Moreover, individual structures within distribution functions were traced back to single sources. We also confirmed the fractal nature of ion distributions.

  20. EMD-WVD time-frequency distribution for analysis of multi-component signals

    NASA Astrophysics Data System (ADS)

    Chai, Yunzi; Zhang, Xudong

    2016-10-01

Time-frequency distribution (TFD) is a two-dimensional function that indicates the time-varying frequency content of one-dimensional signals, and the Wigner-Ville distribution (WVD) is an important and effective time-frequency analysis method. The WVD can efficiently show the characteristics of a mono-component signal. A major drawback, however, is the extra cross-terms that appear when multi-component signals are analyzed by the WVD. To eliminate the cross-terms, we first decompose signals into single-frequency components, Intrinsic Mode Functions (IMFs), using Empirical Mode Decomposition (EMD), and then use the WVD to analyze each IMF. In this paper, we define this new time-frequency distribution as EMD-WVD. Experiment results show that the proposed time-frequency method solves the cross-term problem effectively and improves the accuracy of WVD time-frequency analysis.
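
A minimal discrete Wigner-Ville sketch (illustrative, not the paper's implementation) shows the auto-term of a single complex tone concentrating at one frequency bin; with the kernel x[n+m]·x*[n−m], the DFT peak appears at twice the tone's normalized frequency under the usual convention:

```python
import cmath

def wvd_slice(x, n0, m_half):
    """One time slice of the discrete Wigner-Ville distribution at sample n0.

    The kernel r[m] = x[n0+m] * conj(x[n0-m]) is transformed over m; for a
    tone of normalized frequency f0 its DFT peaks at bin 2 * f0 * M (mod M).
    """
    M = 2 * m_half
    kernel = [x[n0 + m] * x[n0 - m].conjugate() for m in range(-m_half, m_half)]
    return [abs(sum(kernel[i] * cmath.exp(-2j * cmath.pi * k * (i - m_half) / M)
                    for i in range(M)))
            for k in range(M)]

f0 = 0.125                       # normalized frequency of the test tone
x = [cmath.exp(2j * cmath.pi * f0 * n) for n in range(256)]
spectrum = wvd_slice(x, 128, 32)
peak_bin = max(range(len(spectrum)), key=spectrum.__getitem__)
```

With two tones, cross-terms would appear midway between the auto-terms, which is the artifact the EMD pre-decomposition is designed to remove.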

  1. Relaxation of ferroelectric states in 2D distributions of quantum dots: EELS simulation

    NASA Astrophysics Data System (ADS)

    Cortés, C. M.; Meza-Montes, L.; Moctezuma, R. E.; Carrillo, J. L.

    2016-06-01

The relaxation time of collective electronic states in a 2D distribution of quantum dots is investigated theoretically by simulating EELS experiments. From the numerical calculation of the probability of energy loss of an electron beam traveling parallel to the distribution, it is possible to estimate the damping time of ferroelectric-like states. We generate this collective response of the distribution by introducing a mean field interaction among the quantum dots, and then the model is extended to incorporate effects of long-range correlations through a Bragg-Williams approximation. The behavior of the dielectric function, the energy loss function, and the relaxation time of ferroelectric-like states is then investigated as a function of the temperature of the distribution and the damping constant of the electronic states in the single quantum dots. The robustness of the trends in our results indicates that this scheme of analysis can guide experimentalists in developing tailored quantum dot distributions for specific applications.

  2. Vacuum quantum stress tensor fluctuations: A diagonalization approach

    NASA Astrophysics Data System (ADS)

    Schiappacasse, Enrico D.; Fewster, Christopher J.; Ford, L. H.

    2018-01-01

    Large vacuum fluctuations of a quantum stress tensor can be described by the asymptotic behavior of its probability distribution. Here we focus on stress tensor operators which have been averaged with a sampling function in time. The Minkowski vacuum state is not an eigenstate of the time-averaged operator, but can be expanded in terms of its eigenstates. We calculate the probability distribution and the cumulative probability distribution for obtaining a given value in a measurement of the time-averaged operator taken in the vacuum state. In these calculations, we study a specific operator that contributes to the stress-energy tensor of a massless scalar field in Minkowski spacetime, namely, the normal ordered square of the time derivative of the field. We analyze the rate of decrease of the tail of the probability distribution for different temporal sampling functions, such as compactly supported functions and the Lorentzian function. We find that the tails decrease relatively slowly, as exponentials of fractional powers, in agreement with previous work using the moments of the distribution. Our results lend additional support to the conclusion that large vacuum stress tensor fluctuations are more probable than large thermal fluctuations, and may have observable effects.

  3. Using New Theory and Experimental Methods to Understand the Relative Controls of Storage, Antecedent Conditions and Precipitation Intensity on Transit Time Distributions through a Sloping Soil Lysimeter

    NASA Astrophysics Data System (ADS)

    Kim, M.; Pangle, L. A.; Cardoso, C.; Lora, M.; Wang, Y.; Harman, C. J.; Troch, P. A. A.

    2014-12-01

Transit time distributions (TTD) are an efficient way of characterizing transport through the complex flow dynamics of a hydrologic system, and can serve as a basis for spatially-integrated solute transport modeling. Recently there has been progress in the development of a theory of time-variable TTDs that captures the effect of temporal variability in the timing of fluxes as well as changes in flow pathways. Furthermore, a new formulation of this theory allows the essential transport properties of a system to be parameterized by a physically meaningful time-variable probability distribution, the Ω function. This distribution determines how the age distribution of water in storage is sampled by the outflow. The form of the Ω function varies if the flow pathways change, but is not determined by the timing of fluxes (unlike the TTD). In this study, we use this theory to characterize transport by transient flows through a homogeneously packed 1 m³ sloping soil lysimeter. The transit time distributions associated with each of four irrigation periods (repeated daily for 24 days) are compared to examine the significance of changes in the Ω function due to variations in total storage, antecedent conditions, and precipitation intensity. We observe both the time-variable TTD and the Ω function experimentally by applying the PERTH method (Harman and Kim, 2014, GRL, 41, 1567-1575). The method allows us to observe multiple overlapping time-variable TTDs in controlled experiments using only two conservative tracers. We hypothesize that both the TTD and the Ω function will vary in time, even at this small scale, because water will take different flow pathways depending on the initial state of the lysimeter and irrigation intensity. However, based primarily on modeling, we conjecture that major variability in the Ω function will be limited to a period during and immediately after each irrigation. We anticipate that the Ω function is almost time-invariant (or scales simply with total storage) during the recession period because flow pathways are stable during this period. This is one of the first experimental studies of this type, and the results offer insights into solute transport in transient, variably-saturated systems.

  4. Random walk to a nonergodic equilibrium concept

    NASA Astrophysics Data System (ADS)

    Bel, G.; Barkai, E.

    2006-01-01

    Random walk models, such as the trap model, continuous time random walks, and comb models, exhibit weak ergodicity breaking, when the average waiting time is infinite. The open question is, what statistical mechanical theory replaces the canonical Boltzmann-Gibbs theory for such systems? In this paper a nonergodic equilibrium concept is investigated, for a continuous time random walk model in a potential field. In particular we show that in the nonergodic phase the distribution of the occupation time of the particle in a finite region of space approaches U- or W-shaped distributions related to the arcsine law. We show that when conditions of detailed balance are applied, these distributions depend on the partition function of the problem, thus establishing a relation between the nonergodic dynamics and canonical statistical mechanics. In the ergodic phase the distribution function of the occupation times approaches a δ function centered on the value predicted based on standard Boltzmann-Gibbs statistics. The relation of our work to single-molecule experiments is briefly discussed.
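
The arcsine-law behavior of occupation times mentioned above can be seen in a short simulation of a symmetric random walk: the fraction of time spent on the positive side piles up near 0 and 1 rather than near 1/2 (a sketch of the generic effect, not the paper's trap-model dynamics):

```python
import random

def positive_fraction(steps, rng):
    """Fraction of time a symmetric random walk spends at or above the origin."""
    walk, pos = 0, 0
    for _ in range(steps):
        walk += 1 if rng.random() < 0.5 else -1
        if walk >= 0:
            pos += 1
    return pos / steps

rng = random.Random(1)
fractions = [positive_fraction(1000, rng) for _ in range(2000)]
edge = sum(1 for f in fractions if f < 0.1 or f > 0.9)    # near 0 or 1
middle = sum(1 for f in fractions if 0.45 <= f <= 0.55)   # near 1/2
```

The U-shaped histogram of these fractions is the hallmark of the arcsine law: an equal split of time between the two sides is the least likely outcome.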

  5. The inclusion of capillary distribution in the adiabatic tissue homogeneity model of blood flow

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zeman, V.; Darko, J.; Lee, T.-Y.; Milosevic, M. F.; Haider, M.; Warde, P.; Yeung, I. W. T.

    2001-05-01

We have developed a non-invasive imaging tracer kinetic model for blood flow which takes into account the distribution of capillaries in tissue. Each individual capillary is assumed to follow the adiabatic tissue homogeneity model. The main strength of our new model lies in its ability to quantify the functional distribution of capillaries by the standard deviation in the time taken by blood to pass through the tissue. We have applied our model to the human prostate and have tested two different types of distribution functions. Both distribution functions yielded very similar predictions for the various model parameters, and in particular for the standard deviation in transit time. Our motivation for developing this model is the fact that the capillary distribution in cancerous tissue is drastically different from that in normal tissue. We believe that there is great potential for our model to be used as a prognostic tool in cancer treatment. For example, an accurate knowledge of the distribution in transit times might yield an accurate estimate of the degree of tumour hypoxia, which is crucial to the success of radiation therapy.

  6. On the mass function of stars growing in a flocculent medium

    NASA Astrophysics Data System (ADS)

    Maschberger, Th.

    2013-12-01

Stars form in regions of very inhomogeneous densities and may have chaotic orbital motions. This leads to a time variation of the accretion rate, which will spread the masses over some mass range. We investigate the mass distribution functions that arise from fluctuating accretion rates in non-linear accretion, ṁ ∝ m^α. The distribution functions evolve in time and develop a power-law tail attached to a lognormal body, as in numerical simulations of star formation. Small fluctuations may be modelled by a Gaussian and develop a power-law tail ∝ m^(-α) at the high-mass side for α > 1 and at the low-mass side for α < 1. Large fluctuations require that their distribution be strictly positive, for example, lognormal. For positive fluctuations the mass distribution function develops the power-law tail always at the high-mass side, independent of whether α is larger or smaller than unity. Furthermore, we discuss Bondi-Hoyle accretion in a supersonically turbulent medium, the range of parameters for which non-linear stochastic growth could shape the stellar initial mass function, and the effects of a distribution of initial masses and growth times.
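
For the linear case α = 1, fluctuating growth rates act multiplicatively, producing the lognormal body described above; a seeded sketch with an arbitrary fluctuation strength:

```python
import math
import random
import statistics

def grow_masses(n_stars, n_steps, sigma, rng):
    """Multiplicative growth m -> m * exp(eta), eta ~ N(0, sigma), for alpha = 1."""
    masses = []
    for _ in range(n_stars):
        log_m = 0.0
        for _ in range(n_steps):
            log_m += rng.gauss(0.0, sigma)   # fluctuating accretion increments
        masses.append(math.exp(log_m))
    return masses

rng = random.Random(3)
masses = grow_masses(5_000, 100, 0.05, rng)
log_masses = [math.log(m) for m in masses]
# By the central limit theorem, log m ~ N(0, sigma * sqrt(n_steps)) = N(0, 0.5).
```

Non-linear accretion (α ≠ 1) additionally skews this lognormal into the power-law tails the paper derives.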

  7. Particle detection and non-detection in a quantum time of arrival measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sombillo, Denny Lane B., E-mail: dsombillo@nip.upd.edu.ph; Galapon, Eric A.

    2016-01-15

The standard time-of-arrival distribution cannot reproduce both the temporal and the spatial profile of the modulus squared of the time-evolved wave function for an arbitrary initial state. In particular, the time-of-arrival distribution gives a non-vanishing probability even if the wave function is zero at a given point for all values of time. This poses a problem in the standard formulation of quantum mechanics where one quantizes a classical observable and uses its spectral resolution to calculate the corresponding distribution. In this work, we show that the modulus squared of the time-evolved wave function is in fact contained in one of the degenerate eigenfunctions of the quantized time-of-arrival operator. This generalizes our understanding of the quantum arrival phenomenon, where particle detection is not a necessary requirement, thereby providing a direct link between time-of-arrival quantization and the outcomes of the two-slit experiment. -- Highlights: •The time-evolved position density is contained in the standard TOA distribution. •Particle may quantum mechanically arrive at a given point without being detected. •The eigenstates of the standard TOA operator are linked to the two-slit experiment.

  8. Investigating the age distribution of fracture discharge using multiple environmental tracers, Bedrichov Tunnel, Czech Republic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, W. Payton; Hokr, Milan; Shao, Hua

    We investigated the transit time distribution (TTD) of discharge collected from fractures in the Bedrichov Tunnel, Czech Republic, using lumped parameter models and multiple environmental tracers. We utilize time series of δ18O, δ2H and 3H along with CFC measurements from individual fractures to investigate the TTD, and the uncertainty in estimated mean travel time, in several fracture networks of varying length and discharge. We also compare several TTDs, including the dispersion distribution, the exponential distribution, and a developed TTD which includes the effects of matrix diffusion. The effect of seasonal recharge is explored by comparing several seasonal weighting functions used to derive the historical recharge concentration. We identify best-fit mean ages for each TTD by minimizing the error-weighted, multi-tracer χ2 residual for each seasonal weighting function. We use this methodology to test the ability of each TTD and seasonal input function to fit the observed tracer concentrations, and the effect of choosing different TTD and seasonal recharge functions on the mean age estimate. We find that the estimated mean transit time is a function of both the assumed TTD and the seasonal weighting function. Best fits as measured by the χ2 value were achieved for the dispersion model using the seasonal input function developed here for two of the three modeled sites, while at the third site, equally good fits were achieved with the exponential and dispersion models and our seasonal input function. The average mean transit time for all TTDs and seasonal input functions converged to similar values at each location. The sensitivity of the estimated mean transit time to the seasonal weighting function was equal to that of the TTD. These results indicate that understanding the seasonality of recharge is at least as important as the uncertainty in the flow path distribution in fracture networks, and that unique identification of the TTD and mean transit time is difficult given the uncertainty in the recharge function. However, the mean transit time appears to be relatively robust to structural model uncertainty. The results presented here should be applicable to other studies using environmental tracers to constrain flow and transport properties in fractured rock systems.

  9. Investigating the age distribution of fracture discharge using multiple environmental tracers, Bedrichov Tunnel, Czech Republic

    DOE PAGES

    Gardner, W. Payton; Hokr, Milan; Shao, Hua; ...

    2016-10-19

    We investigated the transit time distribution (TTD) of discharge collected from fractures in the Bedrichov Tunnel, Czech Republic, using lumped parameter models and multiple environmental tracers. We utilize time series of δ18O, δ2H and 3H along with CFC measurements from individual fractures to investigate the TTD, and the uncertainty in estimated mean travel time, in several fracture networks of varying length and discharge. We also compare several TTDs, including the dispersion distribution, the exponential distribution, and a developed TTD which includes the effects of matrix diffusion. The effect of seasonal recharge is explored by comparing several seasonal weighting functions used to derive the historical recharge concentration. We identify best-fit mean ages for each TTD by minimizing the error-weighted, multi-tracer χ2 residual for each seasonal weighting function. We use this methodology to test the ability of each TTD and seasonal input function to fit the observed tracer concentrations, and the effect of choosing different TTD and seasonal recharge functions on the mean age estimate. We find that the estimated mean transit time is a function of both the assumed TTD and the seasonal weighting function. Best fits as measured by the χ2 value were achieved for the dispersion model using the seasonal input function developed here for two of the three modeled sites, while at the third site, equally good fits were achieved with the exponential and dispersion models and our seasonal input function. The average mean transit time for all TTDs and seasonal input functions converged to similar values at each location. The sensitivity of the estimated mean transit time to the seasonal weighting function was equal to that of the TTD. These results indicate that understanding the seasonality of recharge is at least as important as the uncertainty in the flow path distribution in fracture networks, and that unique identification of the TTD and mean transit time is difficult given the uncertainty in the recharge function. However, the mean transit time appears to be relatively robust to structural model uncertainty. The results presented here should be applicable to other studies using environmental tracers to constrain flow and transport properties in fractured rock systems.
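    The χ²-minimization workflow described in this abstract can be sketched for a single synthetic tracer: convolve an assumed input history with an exponential TTD and grid-search the mean transit time. The tracer record, noise level, and exponential TTD below are assumptions for illustration, not the paper's data or models.

```python
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(0.0, 50.0, 0.1)        # time axis (years)
c_in = years.copy()                       # CFC-like, steadily rising input

def exponential_ttd(tau, T):
    """Exponential transit time distribution with mean transit time T."""
    return np.exp(-tau / T) / T

def output_today(c_in, years, T):
    """Present-day output = convolution of the input history with the TTD."""
    dt = years[1] - years[0]
    g = exponential_ttd(years, T)
    return np.convolve(c_in, g)[: len(years)][-1] * dt

# Synthetic 'observation' generated with a 12-year mean transit time
sigma_obs = 0.05
obs = output_today(c_in, years, T=12.0) + rng.normal(0.0, sigma_obs)

# Grid search: minimize the error-weighted chi-squared residual
candidates = np.arange(1.0, 30.0, 0.5)
chi2 = [((output_today(c_in, years, T) - obs) / sigma_obs) ** 2
        for T in candidates]
best_T = candidates[int(np.argmin(chi2))]
print(best_T)
```

    With a single observation and a monotone forward model the grid search recovers the true mean age; real applications fit time series of several tracers simultaneously, as the abstract describes.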

  10. Bridging Numerical and Analytical Models of Transient Travel Time Distributions: Challenges and Opportunities

    NASA Astrophysics Data System (ADS)

    Danesh Yazdi, M.; Klaus, J.; Condon, L. E.; Maxwell, R. M.

    2017-12-01

    Recent advancements in analytical solutions to quantify time-variant travel time distributions (TTDs) of water and solutes and the related StorAge Selection (SAS) functions synthesize catchment complexity into a simplified, lumped representation. While these analytical approaches are easy and efficient to apply, they require high-frequency hydrochemical data for parameter estimation. Alternatively, integrated hydrologic models coupled to Lagrangian particle-tracking approaches can directly simulate age under different catchment geometries and complexities, at greater computational expense. Here, we compare and contrast the two approaches by exploring the influence of the spatial distribution of subsurface heterogeneity, interactions between distinct flow domains, diversity of flow pathways, and recharge rate on the shape of TTDs and the related SAS functions. To this end, we use a parallel three-dimensional variably saturated groundwater model, ParFlow, to solve for the velocity fields in the subsurface. A particle-tracking model, SLIM, is then implemented to determine the age distributions at every time and location in the domain, facilitating a direct characterization of the SAS functions, as opposed to analytical approaches that require calibration of such functions. Steady-state results reveal that the assumption of a random age-sampling scheme might only hold in the saturated region of homogeneous catchments, resulting in an exponential TTD. This assumption is, however, violated when the vadose zone is included, as the underlying SAS function gives a higher preference to older ages. The dynamical variability of the true SAS functions is also shown to be largely masked by the smooth analytical SAS functions. As the variability of subsurface spatial heterogeneity increases, the shape of the TTD approaches a power-law distribution function, with a broader range of both shorter and longer travel times. We further found that a larger (smaller) magnitude of effective precipitation shifts the scale of the TTD towards younger (older) travel times, while the shape of the TTD remains unchanged. This work constitutes a first step in linking a numerical transport model and analytical solutions of TTDs to study their assumptions and limitations, providing physical inferences for empirical parameters.

  11. Log-normal distribution of the trace element data results from a mixture of stochastic input and deterministic internal dynamics.

    PubMed

    Usuda, Kan; Kono, Koichi; Dote, Tomotaro; Shimizu, Hiroyasu; Tominaga, Mika; Koizumi, Chisato; Nakase, Emiko; Toshina, Yumi; Iwai, Junko; Kawasaki, Takashi; Akashi, Mitsuya

    2002-04-01

    In a previous article, we showed a log-normal distribution of boron and lithium in human urine. This type of distribution is common in both biological and nonbiological applications. It can be observed when the effects of many independent variables are combined, each of which may have any underlying distribution. Although elemental excretion depends on many variables, the one-compartment open model following a first-order process can be used to explain the elimination of elements. The rate of excretion is proportional to the amount of any given element present; that is, the same percentage of an existing element is eliminated per unit time, and the element concentration is represented by a deterministic negative power function of time in the elimination time-course. Sampling is of a stochastic nature, so the dataset of time variables in the elimination phase when the sample was obtained is expected to show a normal distribution. The time variable appears as an exponent of the power function, so a concentration histogram is that of an exponential transformation of normally distributed time. This is the reason why the element concentration shows a log-normal distribution. The distribution is determined not by the element concentration itself, but by the time variable that defines the pharmacokinetic equation.
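    The argument in this abstract — first-order elimination plus normally distributed sampling times implies log-normal concentrations — is easy to verify numerically. The rate constant, mean sampling time, and spread below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(2)

C0, k = 100.0, 0.3          # first-order elimination: C(t) = C0 * exp(-k*t)

# Sampling times in the elimination phase, assumed normally distributed
t = rng.normal(loc=8.0, scale=2.0, size=100_000)
t = t[t > 0]                # keep physically meaningful times

conc = C0 * np.exp(-k * t)

# ln C = ln C0 - k*t is a linear map of the normal variable t,
# so the concentrations are log-normally distributed:
log_c = np.log(conc)
print(log_c.mean(), log_c.std())   # ~ ln(100) - 0.3*8  and  ~ 0.3*2
```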

  12. Exact infinite-time statistics of the Loschmidt echo for a quantum quench.

    PubMed

    Campos Venuti, Lorenzo; Jacobson, N Tobias; Santra, Siddhartha; Zanardi, Paolo

    2011-07-01

    The equilibration dynamics of a closed quantum system is encoded in the long-time distribution function of generic observables. In this Letter we consider the Loschmidt echo generalized to finite temperature, and show that we can obtain an exact expression for its long-time distribution for a closed system described by a quantum XY chain following a sudden quench. In the thermodynamic limit the logarithm of the Loschmidt echo becomes normally distributed, whereas for small quenches in the opposite, quasicritical regime, the distribution function acquires a universal double-peaked form indicating poor equilibration. These findings, obtained by a central limit theorem-type result, extend to completely general models in the small-quench regime.

  13. Serial and Parallel Attentive Visual Searches: Evidence from Cumulative Distribution Functions of Response Times

    ERIC Educational Resources Information Center

    Sung, Kyongje

    2008-01-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the…

  14. 3D ion velocity distribution function measurement in an electric thruster using laser induced fluorescence tomography

    NASA Astrophysics Data System (ADS)

    Elias, P. Q.; Jarrige, J.; Cucchetti, E.; Cannat, F.; Packan, D.

    2017-09-01

    Measuring the full ion velocity distribution function (IVDF) by non-intrusive techniques can improve our understanding of the ionization processes and beam dynamics at work in electric thrusters. In this paper, a Laser-Induced Fluorescence (LIF) tomographic reconstruction technique is applied to the measurement of the IVDF in the plume of a miniature Hall effect thruster. A setup is developed to move the laser axis along two rotation axes around the measurement volume. The fluorescence spectra taken from different viewing angles are combined using a tomographic reconstruction algorithm to build the complete 3D (in phase space) time-averaged distribution function. For the first time, this technique is used in the plume of a miniature Hall effect thruster to measure the full distribution function of the xenon ions. Two examples of reconstructions are provided, in front of the thruster nose-cone and in front of the anode channel. The reconstruction reveals the features of the ion beam, in particular on the thruster axis where a toroidal distribution function is observed. These findings are consistent with the thruster shape and operation. This technique, which can be used with other LIF schemes, could be helpful in revealing the details of the ion production regions and the beam dynamics. Using a more powerful laser source, the current implementation of the technique could be improved to reduce the measurement time and also to reconstruct the temporal evolution of the distribution function.

  15. Fixed and Data Adaptive Kernels in Cohen’s Class of Time-Frequency Distributions

    DTIC Science & Technology

    1992-09-01

    translated into its associated analytic signal by using the techniques discussed in Chapter Four. 1. Wigner-Ville Distribution function: PS = wvd(data,winlen,step,begin,theend) % PS = wvd(data,winlen,step,begin,theend) % 'wvd.m' returns the Wigner-Ville time-frequency distribution % for the input data … IV. FIXED KERNEL DISTRIBUTIONS … A. WIGNER-VILLE DISTRIBUTION
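    The `wvd` routine quoted in the snippet is MATLAB; for comparison, here is a minimal discrete Wigner-Ville distribution in Python. This is a sketch of the textbook definition, not the thesis code.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x.
    Returns an (N, N) array: rows = time index, columns = frequency bin."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        L = min(n, N - 1 - n)                  # largest symmetric lag available
        m = np.arange(-L, L + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(kernel).real         # the WVD is real-valued
    return W

# Analytic single-tone test signal at frequency bin 8
N = 64
n = np.arange(N)
x = np.exp(2j * np.pi * 8 * n / N)
W = wigner_ville(x)
# The kernel x[n+m]*conj(x[n-m]) oscillates at twice the signal frequency,
# so the energy concentrates at FFT bin 16 (the frequency axis is halved):
print(np.argmax(W[N // 2]))   # -> 16
```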

  16. Time-dependent vibrational spectral analysis of first principles trajectory of methylamine with wavelet transform.

    PubMed

    Biswas, Sohag; Mallik, Bhabani S

    2017-04-12

    The fluctuation dynamics of amine stretching frequencies, hydrogen bonds, dangling N-D bonds, and the orientation profile of the amine group of methylamine (MA) were investigated under ambient conditions by means of dispersion-corrected density functional theory-based first principles molecular dynamics (FPMD) simulations. Along with the dynamical properties, various equilibrium properties, such as the radial distribution function, spatial distribution function, combined radial and angular distribution functions, and hydrogen bonding, were also calculated. The instantaneous stretching frequencies of amine groups were obtained by wavelet transform of the trajectory obtained from FPMD simulations. The frequency-structure correlation reveals that the amine stretching frequency is weakly correlated with the nearest nitrogen-deuterium distance. The frequency-frequency correlation function has a short time scale of around 110 fs and a longer time scale of about 1.15 ps. It was found that the short time scale originates from the underdamped motion of intact hydrogen bonds of MA pairs. The long time scale of the vibrational spectral diffusion of N-D modes, however, is determined by the overall dynamics of hydrogen bonds as well as the dangling N-D groups and the inertial rotation of the amine group of the molecule.

  17. Ultra-Wideband Radar Transient Detection using Time-Frequency and Wavelet Transforms.

    DTIC Science & Technology

    1992-12-01

    if p==2, mesh(flipud(abs(spdatamatrix).^2)), end 2. Wigner-Ville Distribution function: P = wvd(data,winlen,step,begin,theend,p) % Filename: wvd.m % Title … short time Fourier transform (STFT), the Instantaneous Power Spectrum and the Wigner-Ville distribution, and time-scale methods, such as the à trous … such as the short time Fourier transform (STFT), the Instantaneous Power Spectrum and the Wigner-Ville distribution [1], and time-scale methods, such

  18. Implications of Atmospheric Test Fallout Data for Nuclear Winter.

    NASA Astrophysics Data System (ADS)

    Baker, George Harold, III

    1987-09-01

    Atmospheric test fallout data have been used to determine admissible dust particle size distributions for nuclear winter studies. The research was originally motivated by extreme differences noted in the magnitude and longevity of dust effects predicted by particle size distributions routinely used in fallout predictions versus those used for nuclear winter studies. Three different sets of historical data have been analyzed: (1) stratospheric burden of Strontium-90 and Tungsten-185, 1954-1967 (92 contributing events); (2) continental U.S. Strontium-90 fallout through 1958 (75 contributing events); (3) local fallout from selected Nevada tests (16 events). The contribution of dust to possible long-term climate effects following a nuclear exchange depends strongly on the particle size distribution. The distribution affects both the atmospheric residence time and the optical depth. One-dimensional models of stratospheric/tropospheric fallout removal were developed and used to identify optimum particle distributions. Results indicate that particle distributions which properly predict bulk stratospheric activity transfer tend to be somewhat smaller than the number size distributions used in initial nuclear winter studies. In addition, both 90Sr and 185W fallout behavior is better predicted by the lognormal distribution function than by the prevalent power-law hybrid function. It is shown that the power-law behavior of particle samples may well be an aberration of gravitational cloud stratification. Results support the possible existence of two independent particle size distributions in clouds generated by surface or near-surface bursts. One distribution governs late-time stratospheric fallout; the other governs early-time fallout. A bimodal lognormal distribution is proposed to describe the cloud particle population. The distribution predicts higher initial sunlight attenuation and lower late-time attenuation than the power-law hybrid function used in initial nuclear winter studies.

  19. Statistical characteristics of surrogate data based on geophysical measurements

    NASA Astrophysics Data System (ADS)

    Venema, V.; Bachner, S.; Rust, H. W.; Simmer, C.

    2006-09-01

    In this study, the statistical properties of a range of measurements are compared with those of their surrogate time series. Seven different records are studied, amongst others, historical time series of mean daily temperature, daily rain sums and runoff from two rivers, and cloud measurements. Seven different algorithms are used to generate the surrogate time series. The best-known method is the iterative amplitude adjusted Fourier transform (IAAFT) algorithm, which is able to reproduce the measured distribution as well as the power spectrum. Using this setup, the measurements and their surrogates are compared with respect to their power spectrum, increment distribution, structure functions, annual percentiles and return values. It is found that the surrogates that reproduce the power spectrum and the distribution of the measurements are able to closely match the increment distributions and the structure functions of the measurements, but this often does not hold for surrogates that only mimic the power spectrum of the measurement. However, even the best performing surrogates do not have asymmetric increment distributions, i.e., they cannot reproduce nonlinear dynamical processes that are asymmetric in time. Furthermore, we have found deviations of the structure functions on small scales.
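    The IAAFT algorithm mentioned in this abstract alternates between imposing the target power spectrum and the target amplitude distribution. A compact sketch follows; it is simplified (fixed iteration count rather than a convergence test), and the test signal is invented.

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Iterative amplitude adjusted Fourier transform surrogate of x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    sorted_x = np.sort(x)                    # target amplitude distribution
    target_amp = np.abs(np.fft.rfft(x))      # target power spectrum
    s = rng.permutation(x)                   # random shuffle to start
    for _ in range(n_iter):
        # impose the power spectrum, keeping the current phases
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(target_amp * np.exp(1j * phases), n=len(x))
        # impose the amplitude distribution by rank ordering
        s = sorted_x[np.argsort(np.argsort(s))]
    return s

x = np.sin(np.linspace(0.0, 20.0 * np.pi, 512)) \
    + np.random.default_rng(1).normal(0.0, 0.3, 512)
s = iaaft(x)
print(np.allclose(np.sort(s), np.sort(x)))   # distribution matched exactly -> True
```

    Because the rank-ordering step runs last, the surrogate reproduces the measured distribution exactly, while the power spectrum is matched only approximately, which is the trade-off the abstract's comparisons probe.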

  20. Time evolution of a Gaussian class of quasi-distribution functions under quadratic Hamiltonian.

    PubMed

    Ginzburg, D; Mann, A

    2014-03-10

    A Lie algebraic method for propagation of the Wigner quasi-distribution function (QDF) under quadratic Hamiltonian was presented by Zoubi and Ben-Aryeh. We show that the same method can be used in order to propagate a rather general class of QDFs, which we call the "Gaussian class." This class contains as special cases the well-known Wigner, Husimi, Glauber, and Kirkwood-Rihaczek QDFs. We present some examples of the calculation of the time evolution of those functions.

  1. Comparison of hypertabastic survival model with other unimodal hazard rate functions using a goodness-of-fit test.

    PubMed

    Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S

    2017-05-30

    We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of the simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of its flexible hazard function shapes, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.

  2. Equilibration in the time-dependent Hartree-Fock approach probed with the Wigner distribution function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loebl, N.; Maruhn, J. A.; Reinhard, P.-G.

    2011-09-15

    By calculating the Wigner distribution function in the reaction plane, we are able to probe the phase-space behavior in the time-dependent Hartree-Fock scheme during a heavy-ion collision in a consistent framework. Various expectation values of operators are calculated by evaluating the corresponding integrals over the Wigner function. In this approach, it is straightforward to define and analyze quantities even locally. We compare the Wigner distribution function with the smoothed Husimi distribution function. Different reaction scenarios are presented by analyzing central and noncentral 16O + 16O and 96Zr + 132Sn collisions. Although we observe strong dissipation in the time evolution of global observables, there is no evidence for complete equilibration in the local analysis of the Wigner function. Because the initial phase-space volumes of the fragments barely merge and mean values of the observables are conserved in fusion reactions over thousands of fm/c, we conclude that the time-dependent Hartree-Fock method provides a good description of the early stage of a heavy-ion collision but does not provide a mechanism to change the phase-space structure in the dramatic way necessary to obtain complete equilibration.

  3. Unstable density distribution associated with equatorial plasma bubble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kherani, E. A., E-mail: esfhan.kherani@inpe.br; Meneses, F. Carlos de; Bharuthram, R.

    2016-04-15

    In this work, we present a simulation study of an equatorial plasma bubble (EPB) in the evening-time ionosphere. The fluid simulation is performed with a high grid resolution, enabling us to probe the steepened updrafting density structures inside the EPB. Inside the density depletion that eventually evolves into the EPB, both the density and the updraft velocity are functions of space, from which the density as an implicit function of updraft velocity, i.e., the density distribution function, is constructed. In the present study, this distribution function and the corresponding probability distribution function are found to evolve from Maxwellian to non-Maxwellian as the initial small depletion grows into the EPB. This non-Maxwellian distribution is of a gentle-bump type, in agreement with the recently reported distribution within EPBs from space-borne measurements, which offers favorable conditions for small-scale kinetic instabilities.

  4. Ramsey Interference in One-Dimensional Systems: The Full Distribution Function of Fringe Contrast as a Probe of Many-Body Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitagawa, Takuya; Pielawa, Susanne; Demler, Eugene

    2010-06-25

    We theoretically analyze Ramsey interference experiments in one-dimensional quasicondensates and obtain explicit expressions for the time evolution of full distribution functions of fringe contrast. We show that the distribution functions contain unique signatures of the many-body mechanism of decoherence. We argue that Ramsey interference experiments provide a powerful tool for analyzing the strongly correlated nature of 1D interacting systems.

  5. Quasi-parton distribution functions, momentum distributions, and pseudo-parton distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radyushkin, Anatoly V.

    Here, we show that quasi-PDFs may be treated as hybrids of PDFs and primordial rest-frame momentum distributions of partons. This results in a complicated convolution nature of quasi-PDFs that necessitates using large momenta p_3 ≳ 3 GeV to get reasonably close to the PDF limit. Furthermore, as an alternative approach, we propose to use pseudo-PDFs P(x, z_3^2) that generalize the light-front PDFs onto spacelike intervals and are related to Ioffe-time distributions M(v, z_3^2), functions of the Ioffe time v = p_3 z_3 and the distance parameter z_3^2, with respect to which M displays perturbative evolution for small z_3. In this form, one may divide out the z_3^2 dependence coming from the primordial rest-frame distribution and from the problematic factor due to lattice renormalization of the gauge link. The v-dependence remains intact and determines the shape of the PDFs.

  6. Quasi-parton distribution functions, momentum distributions, and pseudo-parton distribution functions

    DOE PAGES

    Radyushkin, Anatoly V.

    2017-08-28

    Here, we show that quasi-PDFs may be treated as hybrids of PDFs and primordial rest-frame momentum distributions of partons. This results in a complicated convolution nature of quasi-PDFs that necessitates using large momenta p_3 ≳ 3 GeV to get reasonably close to the PDF limit. Furthermore, as an alternative approach, we propose to use pseudo-PDFs P(x, z_3^2) that generalize the light-front PDFs onto spacelike intervals and are related to Ioffe-time distributions M(v, z_3^2), functions of the Ioffe time v = p_3 z_3 and the distance parameter z_3^2, with respect to which M displays perturbative evolution for small z_3. In this form, one may divide out the z_3^2 dependence coming from the primordial rest-frame distribution and from the problematic factor due to lattice renormalization of the gauge link. The v-dependence remains intact and determines the shape of the PDFs.
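    For orientation, the reduced Ioffe-time distribution mentioned in the header of this sample set is obtained from the Ioffe-time distribution M(v, z_3^2) above by dividing out its value for a hadron at rest. Schematically, in the abstract's notation (a sketch of the definition, not a quoted formula):

```latex
\mathfrak{M}(v, z_3^2) \;\equiv\; \frac{M(v, z_3^2)}{M(0, z_3^2)}, \qquad v = p_3 z_3,
```

    so that z_3^2-dependent factors common to numerator and denominator, such as the gauge-link renormalization factor, cancel in the ratio.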

  7. The perturbed Sparre Andersen model with a threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Gao, Heli; Yin, Chuancun

    2008-10-01

    In this paper, we consider a Sparre Andersen model perturbed by diffusion with generalized Erlang(n)-distributed inter-claim times and a threshold dividend strategy. Integro-differential equations with certain boundary conditions for the moment-generating function and the mth moment of the present value of all dividends until ruin are derived. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case where the inter-claim times are Erlang(2) distributed and the claim size distribution is exponential is considered in some detail.
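    The quantity studied here (the present value of dividends until ruin) can also be estimated by brute-force simulation. Below is a crude Monte Carlo sketch with Erlang(2) inter-claim times, exponential claims, and a diffusion perturbation; it replaces the paper's integro-differential approach with Euler time stepping, and every parameter value is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def present_value_dividends(u=5.0, b=8.0, c=1.5, delta=0.03, sigma=0.5,
                            horizon=100.0, dt=0.01):
    """One path of a diffusion-perturbed Sparre Andersen surplus process
    with Erlang(2) inter-claim times, exponential claims, and a threshold
    dividend strategy: above level b, premium income is paid as dividends."""
    x, t, pv = u, 0.0, 0.0
    next_claim = rng.gamma(2, 0.5)              # Erlang(2) with mean 1
    while t < horizon:
        if x >= b:
            pv += np.exp(-delta * t) * c * dt   # premiums diverted to dividends
        else:
            x += c * dt                         # premiums grow the surplus
        x += sigma * np.sqrt(dt) * rng.standard_normal()  # diffusion term
        t += dt
        if t >= next_claim:
            x -= rng.exponential(1.0)           # exponential claim size
            next_claim += rng.gamma(2, 0.5)
        if x < 0:
            break                               # ruin: dividends stop
    return pv

pvs = [present_value_dividends() for _ in range(100)]
print(np.mean(pvs))
```

    The discounted dividend stream is bounded above by c/δ = 50 here, which gives a quick sanity check on the estimate.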

  8. Reconfiguration in Robust Distributed Real-Time Systems Based on Global Checkpoints

    DTIC Science & Technology

    1991-12-01

    achieved by utilizing distributed systems in which a single application program executes on multiple processors, connected to a network. The distributed … single application program executes on multiple processors, connected to a network. The distributed nature of such systems makes it possible to … resident at every node. However, the responsibility for execution of a particular function is assigned to only one node in this framework. This function

  9. Evaluating the assumption of power-law late time scaling of breakthrough curves in highly heterogeneous media

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele

    2017-04-01

    Power-law (PL) distributions are widely adopted to describe the late-time scaling of solute breakthrough curves (BTCs) during transport experiments in highly heterogeneous media. However, from a statistical perspective, distinguishing between a PL distribution and another tailed distribution is difficult, particularly when a qualitative assessment based on visual analysis of double-logarithmic plots is used. This presentation discusses the results of a recent analysis in which a suite of statistical tools was applied to rigorously evaluate the scaling of BTCs from experiments that generate tailed distributions typically described as PL at late time. To this end, a set of BTCs from numerical simulations in highly heterogeneous media was generated using a transition probability approach (T-PROGS) coupled to a finite-difference numerical solver of the flow equation (MODFLOW) and a random walk particle tracking approach for Lagrangian transport (RW3D). The T-PROGS fields assumed randomly distributed hydraulic heterogeneities with long correlation scales, creating solute channeling and anomalous transport. For simplicity, transport was simulated as purely advective. This combination of tools generates strongly non-symmetric BTCs visually resembling PL distributions at late time when plotted on double-log scales. Unlike other combinations of modeling parameters and boundary conditions (e.g. matrix diffusion in fractures), at late time no direct link exists between the mathematical functions describing the scaling of these curves and the physical parameters controlling transport. The results suggest that the statistical tests fail to describe the majority of curves as PL distributed. Moreover, they suggest that PL and lognormal distributions have the same likelihood of representing parametrically the shape of the tails. It is noticeable that forcing a model to reproduce the tail as a PL function results in a distribution of PL slopes between 1.2 and 4, which are the typical values observed during field experiments. We conclude that care must be taken when describing a BTC late-time distribution as a power-law function. Even though the estimated scaling factors fall in traditional ranges, the actual distribution controlling the scaling of concentration may differ from a power-law function, with direct consequences, for instance, for the selection of effective parameters in upscaled modeling solutions.
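    The point about spurious PL slopes can be demonstrated directly: fit a maximum-likelihood power-law exponent (the Hill estimator) to a tail that is in fact lognormal. The data are synthetic, and the tail cutoff at the sample median is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_powerlaw_tail(t, tmin):
    """Maximum-likelihood exponent of p(t) ~ t**(-a) for t >= tmin
    (the Hill estimator)."""
    t = t[t >= tmin]
    return 1.0 + t.size / np.sum(np.log(t / tmin))

# Late-time 'BTC tail' drawn from a lognormal, not a power law:
tail = rng.lognormal(mean=2.0, sigma=1.0, size=5000)
a = fit_powerlaw_tail(tail, tmin=np.median(tail))
# The fit still returns a 'plausible' PL slope, well inside the
# 1.2-4 range reported from field experiments:
print(a)
```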

  10. Interpretation of environmental tracers in groundwater systems with stagnant water zones.

    PubMed

    Maloszewski, Piotr; Stichler, Willibald; Zuber, Andrzej

    2004-03-01

    Lumped-parameter models are commonly applied for determining the age of water from time records of transient environmental tracers. The simplest models (e.g. piston flow or exponential) are also applicable for dating based on the decay or accumulation of tracers in groundwater systems. The models are based on the assumption that the transit time distribution function (exit age distribution function) of the tracer particles in the investigated system adequately represents the distribution of flow lines and is described by a simple function. A chosen or fitted function (called the response function) describes the transit time distribution of a tracer that would be observed at the output (discharge area, spring, stream, or pumping wells) in the case of an instantaneous injection at the entrance (recharge area). Due to large space and time scales, response functions are not measurable in groundwater systems; therefore, functions known from other fields of science, mainly from chemical engineering, are usually used. The type of response function and the values of its parameters define the lumped-parameter model of a system. The main parameter is the mean transit time of tracer through the system, which under favourable conditions may represent the mean age of mobile water. The parameters of the model are found by fitting calculated concentrations to the experimental records of concentrations measured at the outlet. The mean transit time of tracer (often called the tracer age), whether equal to the mean age of water or not, serves, in adequate combination with other data, for determining other useful parameters, e.g. the recharge rate or the content of water in the system. The transit time distribution and its mean value serve for confirmation or determination of the conceptual model of the system and/or estimation of its potential vulnerability to anthropogenic pollution.
In the interpretation of environmental tracer data with the aid of lumped-parameter models, the influence of diffusion exchange between mobile water and stagnant or quasi-stagnant water is seldom considered, though it leads to large differences between tracer and water ages. The article therefore focuses on the transit time distribution functions of the most common lumped-parameter models, particularly those applicable to the interpretation of environmental tracer data in double-porosity aquifers, or aquifers in which aquitard diffusion may play an important role. A case study is recalled for a confined aquifer in which diffusion exchange with the aquitard most probably strongly influenced the transport of environmental tracers. Another case study is related to the interpretation of environmental tracer data obtained from lysimeters installed in the unsaturated zone with a fraction of stagnant water.
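For the simplest of these models the response function has a closed form; e.g., the exponential model uses g(t) = exp(-t/T)/T with mean transit time T, and the outlet concentration is the convolution of the input record with g. The following Python sketch illustrates that convolution with purely illustrative numbers, not data from the article:

```python
import numpy as np

# Exponential-model response function g(t) = exp(-t/T)/T: the transit time
# distribution a conservative tracer would show at the outlet after an
# instantaneous injection at the recharge area.
def exponential_response(t, mean_transit_time):
    return np.exp(-t / mean_transit_time) / mean_transit_time

# Outlet concentration as the convolution of the input record with g(t).
def outlet_concentration(c_in, dt, mean_transit_time):
    t = np.arange(len(c_in)) * dt
    g = exponential_response(t, mean_transit_time)
    return np.convolve(c_in, g)[: len(c_in)] * dt

# Toy input: a single-year tracer pulse sampled annually (dt = 1 yr),
# routed through a system with a 10-year mean transit time.
c_in = np.zeros(50)
c_in[0] = 1.0
c_out = outlet_concentration(c_in, dt=1.0, mean_transit_time=10.0)
```

Fitting the mean transit time would then amount to adjusting `mean_transit_time` until `c_out` matches the measured outlet record.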

  11. Behavior of Triple Langmuir Probes in Non-Equilibrium Plasmas

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Ratcliffe, Alicia C.

    2018-01-01

The triple Langmuir probe is an electrostatic probe in which three probe tips collect current when inserted into a plasma. The triple probe differs from a simple single Langmuir probe in the nature of the voltage applied to the probe tips. In the single probe, a swept voltage is applied to the probe tip to acquire a waveform showing the collected current as a function of applied voltage (I-V curve). In a triple probe, three probe tips are electrically coupled to each other with constant voltages applied between each of the tips. The voltages are selected such that they would represent three points on the single Langmuir probe I-V curve. Elimination of the voltage sweep makes it possible to measure time-varying plasma properties in transient plasmas. Under the assumption of a Maxwellian plasma, one can determine the time-varying plasma temperature T(sub e)(t) and number density n(sub e)(t) from the applied voltage levels and the time histories of the collected currents. In the present paper we examine the theory of triple probe operation, specifically focusing on the assumption of a Maxwellian plasma. Triple probe measurements have been widely employed for a number of pulsed and time-varying plasmas, including pulsed plasma thrusters (PPTs), dense plasma focus devices, plasma flows, and fusion experiments. While the equilibrium assumption may be justified for some applications, it is unlikely that it is fully justifiable for all pulsed and time-varying plasmas or for all times during the pulse of a plasma device. To examine a simple non-equilibrium plasma case, we return to the basic governing equations of probe current collection and compute the current to the probes for a distribution function consisting of two Maxwellian distributions with different temperatures (the two-temperature Maxwellian).
A variation of this method is also employed, where one of the Maxwellians is offset from zero (in velocity space) to add a suprathermal beam of electrons to the tail of the main Maxwellian distribution (the bump-on-the-tail distribution function). For a range of parameters in these non-Maxwellian distributions, we compute the current collection to the probes. We compare the distribution function that was assumed a priori with the distribution function one would infer when applying standard triple probe theory to analyze the collected currents. For the assumed class of non-Maxwellian distribution functions this serves to illustrate the effect a non-Maxwellian plasma would have on results interpreted using the equilibrium triple probe current collection theory, allowing us to state the magnitudes of these deviations as a function of the assumed distribution function properties.
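The inversion step at the heart of standard (Maxwellian) triple-probe analysis can be sketched as follows. A commonly used idealized relation ties a measured current ratio to the electron temperature through the two bias voltages; the exact current combination depends on the probe configuration, so the sketch below illustrates the inversion, not the paper's specific formulation:

```python
import math

# Idealized triple-probe relation for a Maxwellian plasma: with bias
# voltages vd2 and vd3 between the tips, the relevant current ratio depends
# on the electron temperature Te (here in volts) as
#   R(Te) = (1 - exp(-vd2/Te)) / (1 - exp(-vd3/Te)).
def current_ratio(te, vd2, vd3):
    return (1.0 - math.exp(-vd2 / te)) / (1.0 - math.exp(-vd3 / te))

# Invert the relation for Te by bisection; R is monotonic in Te for vd2 < vd3.
def solve_te(r_measured, vd2, vd3, lo=0.01, hi=100.0, tol=1e-10):
    f = lambda te: current_ratio(te, vd2, vd3) - r_measured
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round trip: a current ratio synthesized from a 5 V Maxwellian is inverted back.
vd2, vd3 = 10.0, 30.0
r = current_ratio(5.0, vd2, vd3)
te_recovered = solve_te(r, vd2, vd3)
```

The paper's point is precisely that when the true distribution is non-Maxwellian, applying this Maxwellian inversion to the measured currents yields a biased "temperature".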

  12. Cole-Davidson dynamics of simple chain models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dotson, Taylor C.; McCoy, John Dwane; Adolf, Douglas Brian

    2008-10-01

Rotational relaxation functions of the end-to-end vector of short, freely jointed and freely rotating chains were determined from molecular dynamics simulations. The associated response functions were obtained from the one-sided Fourier transform of the relaxation functions. The Cole-Davidson function was used to fit the response functions with extensive use being made of Cole-Cole plots in the fitting procedure. For the systems studied, the Cole-Davidson function provided remarkably accurate fits [as compared to the transform of the Kohlrausch-Williams-Watts (KWW) function]. The only appreciable deviations from the simulation results were in the high frequency limit and were due to ballistic or free rotation effects. The accuracy of the Cole-Davidson function appears to be the result of the transition in the time domain from stretched exponential behavior at intermediate time to single exponential behavior at long time. Such a transition can be explained in terms of a distribution of relaxation times with a well-defined longest relaxation time. Since the Cole-Davidson distribution has a sharp cutoff in relaxation time (while the KWW function does not), it makes sense that the Cole-Davidson would provide a better frequency-domain description of the associated response function than the KWW function does.
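For reference, the Cole-Davidson response function used in such fits is chi(omega) = (1 + i*omega*tau)^(-beta), which reduces to the single-exponential (Debye) response at beta = 1. A short Python sketch with illustrative parameter values:

```python
# Cole-Davidson frequency-domain response with relaxation time tau and
# stretching exponent 0 < beta <= 1:
#   chi(omega) = 1 / (1 + i*omega*tau)**beta
# beta = 1 recovers the Debye (single-exponential) response.
def cole_davidson(omega, tau, beta):
    return (1.0 + 1j * omega * tau) ** (-beta)

# Storage (real) and loss (-imaginary) parts on a log-spaced frequency grid.
tau, beta = 1.0, 0.6
omegas = [10.0 ** (k / 4.0) for k in range(-20, 21)]
storage = [cole_davidson(w, tau, beta).real for w in omegas]
loss = [-cole_davidson(w, tau, beta).imag for w in omegas]
```

A Cole-Cole plot of the kind used in the article is simply `loss` plotted against `storage`.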

  13. Extracting the time scales of conformational dynamics from single-molecule single-photon fluorescence statistics.

    PubMed

    Shang, Jianyuan; Geva, Eitan

    2007-04-26

    The quenching rate of a fluorophore attached to a macromolecule can be rather sensitive to its conformational state. The decay of the corresponding fluorescence lifetime autocorrelation function can therefore provide unique information on the time scales of conformational dynamics. The conventional way of measuring the fluorescence lifetime autocorrelation function involves evaluating it from the distribution of delay times between photoexcitation and photon emission. However, the time resolution of this procedure is limited by the time window required for collecting enough photons in order to establish this distribution with sufficient signal-to-noise ratio. Yang and Xie have recently proposed an approach for improving the time resolution, which is based on the argument that the autocorrelation function of the delay time between photoexcitation and photon emission is proportional to the autocorrelation function of the square of the fluorescence lifetime [Yang, H.; Xie, X. S. J. Chem. Phys. 2002, 117, 10965]. In this paper, we show that the delay-time autocorrelation function is equal to the autocorrelation function of the square of the fluorescence lifetime divided by the autocorrelation function of the fluorescence lifetime. We examine the conditions under which the delay-time autocorrelation function is approximately proportional to the autocorrelation function of the square of the fluorescence lifetime. We also investigate the correlation between the decay of the delay-time autocorrelation function and the time scales of conformational dynamics. The results are demonstrated via applications to a two-state model and an off-lattice model of a polypeptide.

  14. Large-deviation properties of Brownian motion with dry friction.

    PubMed

    Chen, Yaming; Just, Wolfram

    2014-10-01

We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals we examine in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique from non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long-time limit.
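A minimal simulation of the dry-friction Langevin equation dv = -mu*sign(v) dt + sqrt(2D) dW, accumulating discretized versions of the three functionals discussed above; parameters and the band width used as a local-time proxy are illustrative:

```python
import random

# Euler-Maruyama simulation of Brownian motion with dry (Coulomb) friction:
#   dv = -mu * sign(v) dt + sqrt(2 D) dW,
# tracking the occupation time (time with v > 0), a discretized proxy for
# the local time at v = 0 (time in a narrow band |v| < eps), and the
# displacement x.
def simulate(mu=1.0, d=1.0, dt=1e-3, n_steps=200_000, seed=1):
    rng = random.Random(seed)
    v, x = 0.0, 0.0
    occupation = 0.0
    local = 0.0
    eps = 0.05
    for _ in range(n_steps):
        sign = (v > 0) - (v < 0)
        v += -mu * sign * dt + (2.0 * d * dt) ** 0.5 * rng.gauss(0.0, 1.0)
        x += v * dt
        if v > 0:
            occupation += dt
        if abs(v) < eps:
            local += dt
    return occupation / (n_steps * dt), local, x

occ_fraction, local_time, displacement = simulate()
```

Histogramming `occ_fraction` over many realizations would expose the non-Gaussian statistics the paper quantifies analytically.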

  15. The Application Research of Modern Intelligent Cold Chain Distribution System Based on Internet of Things Technology

    NASA Astrophysics Data System (ADS)

    Fan, Dehui; Gao, Shan

This paper implements an intelligent cold chain distribution system based on Internet of Things technology, taking the protoplasmic beer logistics transport system as an example. The system provides remote real-time monitoring of material status, records distribution information, dynamically adjusts distribution tasks, and performs other functions. It also combines Internet of Things technology with a weighted filtering algorithm to realize real-time queries of condition curves, emergency alarming, distribution data retrieval, intelligent distribution task arrangement, etc. In actual tests, the system optimized the inventory structure and improved the efficiency of cold chain distribution.

  16. Spatiotemporal reconstruction of list-mode PET data.

    PubMed

    Nichols, Thomas E; Qi, Jinyi; Asma, Evren; Leahy, Richard M

    2002-04-01

    We describe a method for computing a continuous time estimate of tracer density using list-mode positron emission tomography data. The rate function in each voxel is modeled as an inhomogeneous Poisson process whose rate function can be represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data and the method is also demonstrated in a human study using 11C-raclopride.
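A sketch of the temporal model for a single voxel: an inhomogeneous Poisson process whose rate is a cubic B-spline, together with the list-mode log-likelihood log L = sum_i log(rate(t_i)) - integral of the rate over the scan. Knot positions, control values, and arrival times below are illustrative, and the spatial penalties of the actual method are omitted:

```python
import math

def bspline(i, k, t, knots):
    # Cox-de Boor recursion for the i-th B-spline basis function of degree k.
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) * bspline(i, k - 1, t, knots)
    if knots[i + k + 1] > knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline(i + 1, k - 1, t, knots))
    return left + right

knots = list(range(14))      # uniform knots; the cubic spline lives on [3, 10]
n_basis = len(knots) - 4

def rate(t, coeffs):
    return sum(c * bspline(j, 3, t, knots) for j, c in enumerate(coeffs))

def list_mode_loglik(arrivals, coeffs, t0=3.0, t1=10.0, du=0.01):
    n = int(round((t1 - t0) / du))
    integral = sum(rate(t0 + (m + 0.5) * du, coeffs) for m in range(n)) * du
    return sum(math.log(rate(t, coeffs)) for t in arrivals) - integral

coeffs = [2.0] * n_basis     # equal control values -> constant rate (B-splines sum to 1)
arrivals = [3.7, 4.2, 5.5, 6.1, 8.9]
ll = list_mode_loglik(arrivals, coeffs)
```

In the actual method this likelihood is maximized over the control vertices with smoothness and nonnegativity penalties added.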

  17. Spectral properties of four-time fermionic Green's functions

    DOE PAGES

    Shvaika, A. M.

    2016-09-01

    The spectral relations for the four-time fermionic Green's functions are derived in the most general case. The terms which correspond to the zero-frequency anomalies, known before only for the bosonic Green's functions, are separated and their connection with the second cumulants of the Boltzmann distribution function is elucidated. Furthermore, the high-frequency expansions of the four-time fermionic Green's functions are provided for different directions in the frequency space.

  18. Spectral properties of four-time fermionic Green's functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shvaika, A. M.

    The spectral relations for the four-time fermionic Green's functions are derived in the most general case. The terms which correspond to the zero-frequency anomalies, known before only for the bosonic Green's functions, are separated and their connection with the second cumulants of the Boltzmann distribution function is elucidated. Furthermore, the high-frequency expansions of the four-time fermionic Green's functions are provided for different directions in the frequency space.

  19. A Model Based on Environmental Factors for Diameter Distribution in Black Wattle in Brazil

    PubMed Central

    Sanquetta, Carlos Roberto; Behling, Alexandre; Dalla Corte, Ana Paula; Péllico Netto, Sylvio; Rodrigues, Aurelio Lourenço; Simon, Augusto Arlindo

    2014-01-01

This article discusses the dynamics of a diameter distribution in stands of black wattle throughout its growth cycle using the Weibull probability density function. Moreover, the parameters of this distribution were related to environmental variables from meteorological data and the surface soil horizon, with the aim of finding a model for the diameter distribution whose coefficients are related to the environmental variables. We found that the diameter distribution of the stand changes only slightly over time and that the estimators of the Weibull function are correlated with various environmental variables, with accumulated rainfall foremost among them. Thus, a model was obtained in which the estimators of the Weibull function are dependent on rainfall. Such a function can have important applications, such as in simulating growth potential in regions where historical growth data is lacking, as well as the behavior of the stand under different environmental conditions. The model can also be used to project growth in diameter, based on the rainfall affecting the forest over a certain time period. PMID:24932909
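The idea can be sketched with the two-parameter Weibull density and a scale parameter linked to accumulated rainfall; the linear link and its coefficients below are purely illustrative, not the fitted values from the article:

```python
import math

# Two-parameter Weibull density used for diameter distributions:
#   f(d) = (c/b) * (d/b)**(c-1) * exp(-(d/b)**c),  d > 0,
# with shape c and scale b.
def weibull_pdf(d, shape, scale):
    return (shape / scale) * (d / scale) ** (shape - 1.0) * math.exp(-(d / scale) ** shape)

# Hypothetical linear link between the Weibull scale and accumulated rainfall.
def scale_from_rainfall(rain_mm, b0=4.0, b1=0.002):
    return b0 + b1 * rain_mm

# Diameter distribution implied by 1500 mm of accumulated rainfall.
scale = scale_from_rainfall(1500.0)   # 4.0 + 0.002 * 1500 = 7.0
shape = 2.3
grid = [0.01 * k for k in range(1, 4001)]            # diameters 0.01 .. 40
mass = sum(weibull_pdf(d, shape, scale) for d in grid) * 0.01
```

With a fitted link in hand, projecting the stand under a different rainfall scenario is just a matter of changing `rain_mm`.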

  20. Occupation times and ergodicity breaking in biased continuous time random walks

    NASA Astrophysics Data System (ADS)

    Bel, Golan; Barkai, Eli

    2005-12-01

Continuous time random walk (CTRW) models are widely used to model diffusion in condensed matter. There are two classes of such models, distinguished by the convergence or divergence of the mean waiting time. Systems with finite average sojourn time are ergodic, and thus Boltzmann-Gibbs statistics can be applied. We investigate the statistical properties of CTRW models with infinite average sojourn time; in particular, the occupation time probability density function is obtained. It is shown that in the non-ergodic phase the distribution of the occupation time of the particle on a given lattice point exhibits a bimodal U or trimodal W shape, related to the arcsine law. The key points are as follows. (a) In a CTRW with finite or infinite mean waiting time, the distribution of the number of visits on a lattice point is determined by the probability that a member of an ensemble of particles in equilibrium occupies the lattice point. (b) The asymmetry parameter of the probability distribution function of occupation times is related to the Boltzmann probability and to the partition function. (c) The ensemble average is given by Boltzmann-Gibbs statistics for either finite or infinite mean sojourn time, when detailed balance conditions hold. (d) A non-ergodic generalization of Boltzmann-Gibbs statistical mechanics for systems with infinite mean sojourn time is found.
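The ergodicity breaking is easy to see in a toy simulation: a two-state process with Pareto sojourn times of infinite mean keeps its occupation-time fraction random forever, and the fraction's distribution is U-shaped (mass near 0 and 1) rather than peaked at 1/2. A hedged sketch, with an illustrative tail index:

```python
import random

# Two-state CTRW with heavy-tailed (Pareto, alpha = 0.5) sojourn times:
# the mean waiting time diverges, so the fraction of time spent in state 0
# does not converge to a constant (weak ergodicity breaking).
def occupation_fraction(total_time, alpha, rng):
    t, state, time_in_0 = 0.0, 0, 0.0
    while t < total_time:
        u = 1.0 - rng.random()                     # uniform on (0, 1]
        stay = u ** (-1.0 / alpha)                 # Pareto(1, alpha) sojourn
        stay = min(stay, total_time - t)           # truncate the last sojourn
        if state == 0:
            time_in_0 += stay
        t += stay
        state = 1 - state
    return time_in_0 / total_time

rng = random.Random(7)
fracs = [occupation_fraction(1e4, 0.5, rng) for _ in range(2000)]
edge = sum(1 for f in fracs if f < 0.1 or f > 0.9)      # mass near 0 or 1
middle = sum(1 for f in fracs if 0.45 < f < 0.55)       # mass near 1/2
```

For this symmetric two-state case the histogram of `fracs` approximates the U-shaped (arcsine-type) law discussed in the paper.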

  1. Generalized time evolution of the homogeneous cooling state of a granular gas with positive and negative coefficient of normal restitution

    NASA Astrophysics Data System (ADS)

    Khalil, Nagi

    2018-04-01

The homogeneous cooling state (HCS) of a granular gas described by the inelastic Boltzmann equation is reconsidered. As usual, particles are taken as inelastic hard disks or spheres, but now the coefficient of normal restitution α is allowed to take negative values, which is a simple way of modeling more complicated inelastic interactions. The distribution function of the HCS is studied in the long-time limit, as well as at intermediate times. In the long-time limit, the relevant information of the HCS is given by a scaling distribution function, where the time dependence occurs through a dimensionless velocity c. For positive α, the scaling function remains close to the Gaussian distribution in the thermal region, its cumulants and exponential tails being well described by the first Sonine approximation. In contrast, for negative α, the distribution function becomes multimodal, with maxima away from the origin, and its observable tails algebraic. The latter is a consequence of an unbalanced relaxation-dissipation competition, and is demonstrated analytically in a limiting case, thanks to a reduction of the Boltzmann equation to a Fokker-Planck-like equation. Finally, a generalized scaling solution to the Boltzmann equation is also found. Apart from the time dependence occurring through the dimensionless velocity, it depends on time through a new parameter β measuring the departure of the HCS from its long-time limit. It is shown that this generalized solution describes the time evolution of the HCS for almost all times. The relevance of the new scaling is also discussed.

  2. Electron-beam-charged dielectrics: Internal charge distribution

    NASA Technical Reports Server (NTRS)

    Beers, B. L.; Pine, V. W.

    1981-01-01

Theoretical calculations of an electron transport model of the charging of dielectrics due to electron bombardment are compared to measurements of internal charge distributions. The emphasis is on the charge distribution in Teflon. The position of the charge centroid as a function of time is not monotonic: it first moves deeper into the material and then moves back near to the surface. In most time regimes of interest, the charge distribution is not unimodal, but instead has two peaks. The location of the centroid near saturation is a function of the incident current density. While the qualitative comparison of theory and experiment is reasonable, quantitative comparison shows discrepancies of as much as a factor of two.

  3. SimBOX: a scalable architecture for aggregate distributed command and control of spaceport and service constellation

    NASA Astrophysics Data System (ADS)

    Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj

    2004-08-01

In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for distributed control of a cluster of on-board health monitoring and software-enabled control systems, called SimBOX, that will use some of the real-time infrastructure (RTI) functionality from current military real-time simulation architectures. The uniqueness of the approach lies in providing a "plug and play" environment for various system components that run at various data rates (Hz) and in the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is made possible by a communication bus called the "Distributed Shared Data Bus" and a distributed computing environment used to scale the control needs by providing a self-contained computing, data logging, and control function module that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.

  4. SimBox: a simulation-based scalable architecture for distributed command and control of spaceport and service constellations

    NASA Astrophysics Data System (ADS)

    Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj

    2004-09-01

In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for distributed control of a cluster of on-board health monitoring and software-enabled control systems, called SimBOX, that will use some of the real-time infrastructure (RTI) functionality from current military real-time simulation architectures. The uniqueness of the approach lies in providing a "plug and play" environment for various system components that run at various data rates (Hz) and in the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is made possible by a communication bus called the "Distributed Shared Data Bus" and a distributed computing environment used to scale the control needs by providing a self-contained computing, data logging, and control function module that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.

  5. Relative controls of external and internal variability on time-variable transit time distributions, and the importance of StorAge Selection function approaches

    NASA Astrophysics Data System (ADS)

    Kim, M.; Pangle, L. A.; Cardoso, C.; Lora, M.; Meira, A.; Volkmann, T. H. M.; Wang, Y.; Harman, C. J.; Troch, P. A. A.

    2015-12-01

Transit time distributions (TTDs) are an efficient way of characterizing the complex transport dynamics of a hydrologic system. Time-invariant TTDs have been studied extensively, but TTDs are time-varying in unsteady hydrologic systems due to both external variability (e.g., time-variability in fluxes) and internal variability (e.g., time-varying flow pathways). The use of "flow-weighted time" has been suggested to account for the effect of external variability on TTDs, but it neglects the role of internal variability. Recently, to account for both types of variability, StorAge Selection (SAS) function approaches were developed. One of these approaches enables the transport characteristics of a system - how water of different ages in storage is sampled by the outflow - to be parameterized by a time-variable probability distribution called the rank SAS (rSAS) function, and uses it directly to determine the time-variable TTDs resulting from a given time series of fluxes into and out of a system. Unlike TTDs, the form of the rSAS function varies only due to changes in flow pathways and is not affected by the timing of fluxes alone. However, the relation between physical mechanisms and the time-varying rSAS functions is not well understood. In this study, the relative effects of internal and external variability on TTDs are examined using observations from a homogeneously packed 1 m3 sloping soil lysimeter. The observations suggest the importance of internal variability on TTDs and reinforce the need to account for this variability using time-variable rSAS functions. Furthermore, the relative usefulness of two other formulations of SAS functions and the mortality rate (which plays a similar role to SAS functions in the McKendrick-von Foerster model of age-structured population dynamics) is also discussed.
Finally, numerical modeling is used to explore the role of internal and external variability for hydrologic systems with diverse geomorphic and climate characteristics. This work gives insight into which approach (or SAS function) is preferable under different conditions.
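The simplest SAS function makes the connection to TTDs concrete: with a uniform rSAS function (the outflow samples all ages in storage in proportion to their stored mass, i.e. a well-mixed store) and steady fluxes, the TTD should relax to an exponential with mean S/Q. An age-tracking sketch with illustrative numbers, not values from the lysimeter experiment:

```python
# Age-tracking sketch of a StorAge Selection model with the uniform rSAS
# function: every parcel in storage loses the same fraction Q*dt/S per step,
# so the storage age distribution (which here equals the outflow TTD)
# relaxes to a discrete exponential with mean ~ S/Q.
def outflow_age_distribution(storage, discharge, dt, n_steps):
    frac_out = discharge * dt / storage     # fraction of every parcel removed
    parcels = []                            # parcels[i] = mass with age i*dt
    for _ in range(n_steps):
        parcels = [m * (1.0 - frac_out) for m in parcels]
        parcels.insert(0, discharge * dt)   # steady inflow enters at age zero
    total = sum(parcels)
    return [m / total for m in parcels]     # normalized TTD of the outflow

# S = 100, Q = 5 -> mean transit time ~ S/Q = 20 time units.
ttd = outflow_age_distribution(storage=100.0, discharge=5.0, dt=0.1, n_steps=2000)
mean_transit_time = sum(i * 0.1 * p for i, p in enumerate(ttd))
```

Non-uniform rSAS functions (preferential release of young or old water) would replace the single `frac_out` by an age-dependent sampling weight, which is exactly the internal variability the study examines.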

  6. Queues with Dropping Functions and General Arrival Processes

    PubMed Central

    Chydzinski, Andrzej; Mrozowski, Pawel

    2016-01-01

In a queueing system with a dropping function, an arriving customer can be denied service (dropped) with a probability that is a function of the queue length at the time of arrival of this customer. The potential applicability of such a mechanism is very wide, due to the fact that by choosing the shape of this function one can easily manipulate several performance characteristics of the queueing system. In this paper we carry out an analysis of the queueing system with the dropping function and a very general model of the arrival process: one that includes batch arrivals and interarrival time autocorrelation, and allows for fitting the actual shape of the interarrival time distribution and its moments. For such a system we obtain formulas for the distribution of the queue length and the overall customer loss ratio. The analytical results are accompanied by numerical examples computed for several dropping functions. PMID:26943171
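The mechanism is easy to simulate. The sketch below uses Poisson arrivals and exponential service for simplicity (the paper itself treats far more general arrivals) and a linear dropping function chosen for illustration:

```python
import random

# Event-driven single-server queue with a dropping function: an arriving
# customer is rejected with probability dropping(n), where n is the queue
# length (including the customer in service) seen on arrival.
def simulate_loss_ratio(lam, mu, dropping, n_arrivals, seed=3):
    rng = random.Random(seed)
    t_next_arrival, t_next_departure = rng.expovariate(lam), float("inf")
    queue_len, arrived, dropped = 0, 0, 0
    while arrived < n_arrivals:
        if t_next_arrival <= t_next_departure:       # arrival event
            arrived += 1
            if rng.random() < dropping(queue_len):
                dropped += 1
            else:
                queue_len += 1
                if queue_len == 1:                   # server was idle
                    t_next_departure = t_next_arrival + rng.expovariate(mu)
            t_next_arrival += rng.expovariate(lam)
        else:                                        # departure event
            queue_len -= 1
            t_next_departure = (t_next_departure + rng.expovariate(mu)
                                if queue_len > 0 else float("inf"))
    return dropped / arrived

# Linear dropping function: start refusing customers once 2 are in the queue.
loss = simulate_loss_ratio(lam=0.9, mu=1.0,
                           dropping=lambda n: min(1.0, max(0.0, (n - 2) / 8.0)),
                           n_arrivals=20_000)
```

Changing the shape of `dropping` trades loss ratio against queue length, which is the design lever the paper analyzes.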

  7. A Study of Transport in the Near-Earth Plasma Sheet During A Substorm Using Time-Dependent Large Scale Kinetics

    NASA Technical Reports Server (NTRS)

    El-Alaoui, M.; Ashour-Abdalla, M.; Raeder, J.; Frank, L. A.; Paterson, W. R.

    1998-01-01

In this study we investigate the transport of H+ ions that made up the complex ion distribution function observed by the Geotail spacecraft at 0740 UT on November 24, 1996. This ion distribution function, observed by Geotail at approximately 20 R(sub E) downtail, was used to initialize a time-dependent large-scale kinetic (LSK) calculation of the trajectories of 75,000 ions forward in time. Time-dependent magnetic and electric fields were obtained from a global magnetohydrodynamic (MHD) simulation of the magnetosphere and its interaction with the solar wind and the interplanetary magnetic field (IMF) as observed during the interval of the observation of the distribution function. Our calculations indicate that the particles observed by Geotail were scattered across the equatorial plane by multiple interactions with the current sheet and then convected sunward. They were energized by the dawn-dusk electric field during their transport from the Geotail location and were ultimately lost at the ionospheric boundary or at the magnetopause.
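The core of any such trajectory calculation is a charged-particle pusher in prescribed E and B fields. The Boris scheme is the standard choice; the sketch below uses static, uniform fields (the actual study used time-dependent MHD fields) and checks the E x B drift:

```python
# Boris particle pusher: half electric kick, magnetic rotation, half kick.
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(v, e_field, b_field, qm, dt):
    half_e = [0.5 * qm * dt * ei for ei in e_field]
    v_minus = [v[i] + half_e[i] for i in range(3)]
    t = [0.5 * qm * dt * bi for bi in b_field]
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    vmxt = cross(v_minus, t)
    v_prime = [v_minus[i] + vmxt[i] for i in range(3)]
    vpxs = cross(v_prime, s)
    v_plus = [v_minus[i] + vpxs[i] for i in range(3)]
    return [v_plus[i] + half_e[i] for i in range(3)]

# Crossed fields: E = 100 V/m in y, B = 0.01 T in z, proton charge-to-mass.
e_field, b_field = [0.0, 100.0, 0.0], [0.0, 0.0, 0.01]
qm, dt = 9.58e7, 1e-8
v = [0.0, 0.0, 0.0]
vx_samples = []
for _ in range(20_000):                       # roughly 30 gyro-periods
    v = boris_push(v, e_field, b_field, qm, dt)
    vx_samples.append(v[0])
drift_vx = sum(vx_samples) / len(vx_samples)  # expect E/B = 1e4 m/s in +x
```

The mean x-velocity recovers the E x B drift E/B, a standard sanity check before feeding in realistic field models.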

  8. A reexamination of plasma measurements from the Mariner 5 Venus encounter

    NASA Technical Reports Server (NTRS)

    Shefer, R. E.; Lazarus, A. J.; Bridge, H. S.

    1979-01-01

Mariner 5 plasma data from the Venus encounter have been analyzed with twice the time resolution of the original analysis of Bridge et al. (1967). The velocity distribution function for each spectrum is used to determine more precisely the locations of boundaries and characteristic flow parameters in the interaction region around the planet. A new region is identified in the flow, located between magnetosheathlike plasma inside the shock front and an interior low-flux region near the geometrical shadow of the planet. The region is characterized by a wide velocity distribution function and a decrease in ion flux. On the basis of the highest-time-resolution magnetic field data, it is proposed that rapid magnetic field fluctuations in this region may result in an artificial broadening of the distribution function. It is concluded that very high time resolution is required in future experiments in order to determine the true nature of the plasma in this region.

  9. Method and device for landing aircraft dependent on runway occupancy time

    NASA Technical Reports Server (NTRS)

    Ghalebsaz Jeddi, Babak (Inventor)

    2012-01-01

    A technique for landing aircraft using an aircraft landing accident avoidance device is disclosed. The technique includes determining at least two probability distribution functions; determining a safe lower limit on a separation between a lead aircraft and a trail aircraft on a glide slope to the runway; determining a maximum sustainable safe attempt-to-land rate on the runway based on the safe lower limit and the probability distribution functions; directing the trail aircraft to enter the glide slope with a target separation from the lead aircraft corresponding to the maximum sustainable safe attempt-to-land rate; while the trail aircraft is in the glide slope, determining an actual separation between the lead aircraft and the trail aircraft; and directing the trail aircraft to execute a go-around maneuver if the actual separation approaches the safe lower limit. Probability distribution functions include runway occupancy time, and landing time interval and/or inter-arrival distance.
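The logic can be illustrated with a small Monte Carlo sketch: model runway occupancy time (ROT) and the achieved landing time interval (LTI) as probability distributions, call for a go-around whenever the trailer would reach the threshold before the leader clears, and find the smallest target spacing whose go-around probability stays below a cap. All distribution parameters and the 5% cap below are illustrative, not values from the patent:

```python
import random

# Monte Carlo estimate of the go-around probability for a given target
# landing time interval, with ROT and LTI modeled as (truncated-by-use)
# normal distributions. Units are seconds.
def go_around_probability(target_lti_s, n=50_000, seed=11):
    rng = random.Random(seed)
    go_arounds = 0
    for _ in range(n):
        rot = rng.gauss(50.0, 5.0)            # leader's runway occupancy time
        lti = rng.gauss(target_lti_s, 8.0)    # achieved spacing at threshold
        if lti < rot:                         # leader still on runway
            go_arounds += 1
    return go_arounds / n

# Smallest target spacing with go-around probability <= 5%, and the
# corresponding sustainable attempt-to-land rate (attempts per hour).
target = next(t for t in range(50, 120) if go_around_probability(float(t)) <= 0.05)
attempts_per_hour = 3600.0 / target
```

The trade-off the patent exploits is visible here: shrinking the target spacing raises throughput but pushes the go-around probability toward the safety cap.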

  10. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression.

    PubMed

    Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel

    2012-06-01

We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model, in its simplest form with exponential dwell times, has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function are required in general to characterize gene switching.
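A direct (if inefficient) way to explore such a model is a small time-stepped simulation with non-exponential dwell times; gamma dwell distributions, rates, and the time step below are illustrative. One useful check: the stationary mean depends only on the dwell-time means, k_tx * E[T1] / (E[T0] + E[T1]) / k_deg:

```python
import random

# Discretized simulation of the generalized switching model: inactive/active
# dwell times T0, T1 are gamma-distributed (so switching is not memoryless),
# transcription fires at rate k_tx while active, and each mRNA molecule
# decays independently at rate k_deg.
def simulate_mean_mrna(k_tx=10.0, k_deg=1.0, dt=0.01, t_end=5000.0, seed=5):
    rng = random.Random(seed)
    draw_t0 = lambda: rng.gammavariate(4.0, 0.5)   # mean inactive dwell 2.0
    draw_t1 = lambda: rng.gammavariate(2.0, 0.5)   # mean active dwell 1.0
    active, dwell_left = False, draw_t0()
    m, total, samples, t = 0, 0.0, 0, 0.0
    while t < t_end:
        dwell_left -= dt
        if dwell_left <= 0.0:                      # gene switches state
            active = not active
            dwell_left = draw_t1() if active else draw_t0()
        if active and rng.random() < k_tx * dt:    # transcription event
            m += 1
        for _ in range(m):                         # independent decay trials
            if rng.random() < k_deg * dt:
                m -= 1
        total += m
        samples += 1
        t += dt
    return total / samples

# Expected stationary mean: 10 * (1.0 / (2.0 + 1.0)) / 1.0 = 10/3.
mean_mrna = simulate_mean_mrna()
```

The mean alone cannot distinguish dwell-time shapes, which is exactly why the paper turns to the full distribution and the autocovariance function.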

  11. Voltage stress effects on microcircuit accelerated life test failure rates

    NASA Technical Reports Server (NTRS)

    Johnson, G. M.

    1976-01-01

The applicability of the Arrhenius and Eyring reaction rate models for describing microcircuit aging characteristics as a function of junction temperature and applied voltage was evaluated. The results of a matrix of accelerated life tests with a single metal oxide semiconductor microcircuit operated at six different combinations of temperature and voltage were used to evaluate the models. A total of 450 devices from two different lots were tested at ambient temperatures between 200 C and 250 C and applied voltages between 5 Vdc and 15 Vdc. A statistical analysis of the surface-related failure data resulted in bimodal failure distributions comprising two lognormal distributions: a 'freak' distribution observed early in time and a 'main' distribution observed later in time. The Arrhenius model was shown to provide a good description of device aging as a function of temperature at a fixed voltage. The Eyring model also appeared to provide a reasonable description of main-distribution device aging as a function of temperature and voltage. Circuit diagrams are shown.
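For reference, the acceleration factors implied by these models take a simple form; the activation energy and voltage coefficient below are illustrative, not the values fitted in the report:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

# Arrhenius acceleration factor between a use temperature and a stress
# temperature (both in kelvin), for activation energy ea_ev in eV.
def arrhenius_af(ea_ev, t_use_k, t_stress_k):
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# One common Eyring-type form multiplies the thermal term by a voltage term.
def eyring_af(ea_ev, t_use_k, t_stress_k, v_use, v_stress, b):
    return arrhenius_af(ea_ev, t_use_k, t_stress_k) * math.exp(b * (v_stress - v_use))

# Acceleration of a 250 C / 15 V stress test relative to 55 C / 5 V use,
# assuming Ea = 1.0 eV and voltage coefficient b = 0.1 per volt.
af = eyring_af(1.0, 328.15, 523.15, 5.0, 15.0, 0.1)
```

With equal temperatures and voltages the factor reduces to 1, and it grows exponentially in both the thermal and voltage terms.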

  12. flexsurv: A Platform for Parametric Survival Modeling in R

    PubMed Central

    Jackson, Christopher H.

    2018-01-01

flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three- and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both the baseline survival and the covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring and left-truncation are specified in 'Surv' objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
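The underlying fitting principle is plain maximum likelihood with censoring. A minimal Python sketch of the same idea (flexsurv itself is R and uses a proper optimizer; here a Weibull model is fitted by a crude grid search over made-up data):

```python
import math

# Censored Weibull log-likelihood: events contribute log h(t) + log S(t),
# right-censored observations contribute only log S(t).
#   h(t) = (c/b)(t/b)^(c-1),  S(t) = exp(-(t/b)^c)
def weibull_loglik(times, events, shape, scale):
    ll = 0.0
    for t, d in zip(times, events):
        log_h = math.log(shape / scale) + (shape - 1.0) * math.log(t / scale)
        log_s = -((t / scale) ** shape)
        ll += d * log_h + log_s
    return ll

# Illustrative data; events[i] = 0 marks a right-censored observation.
times = [2.0, 3.5, 4.1, 5.0, 6.2, 7.7, 8.0, 9.5, 10.0, 12.0]
events = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1]
best = max(((weibull_loglik(times, events, a / 10.0, b / 2.0), a / 10.0, b / 2.0)
            for a in range(5, 40) for b in range(4, 60)),
           key=lambda x: x[0])
loglik_hat, shape_hat, scale_hat = best
```

flexsurvreg does the equivalent maximization with gradient-based optimization and returns standard errors via the Hessian.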

  13. An estimation of distribution method for infrared target detection based on Copulas

    NASA Astrophysics Data System (ADS)

    Wang, Shuo; Zhang, Yiqun

    2015-10-01

Track-before-detect (TBD) based target detection involves a hypothesis test on merit functions, which measure each track as a possible target track. Its accuracy depends on the precision of the distribution of the merit functions, which determines the threshold for the test. Generally, merit functions are regarded as Gaussian, and on this basis the distribution is estimated; this holds for most methods, such as multiple hypothesis tracking (MHT). However, merit functions for some other methods, such as the dynamic programming algorithm (DPA), are non-Gaussian and cross-correlated. Since existing methods cannot reasonably measure this correlation, the exact distribution can hardly be estimated. If merit functions are assumed Gaussian and independent, the error between the actual distribution and its approximation may occasionally exceed 30 percent, and it diverges under propagation. Hence, in this paper, we propose a novel estimation-of-distribution method based on copulas, by which the distribution can be estimated precisely; the error is less than 1 percent and does not propagate. Moreover, the estimation depends only on the form of the merit functions and the structure of the tracking algorithm, and is invariant to measurements. Thus, the distribution can be estimated in advance, greatly reducing the demand for real-time calculation of distribution functions.
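The basic copula idea is to separate the dependence structure from the marginals. A hedged sketch of the Gaussian copula (only the copula itself is sampled here, i.e. a pair of correlated uniform variables; the correlation value is illustrative and any non-Gaussian merit-function marginals could then be attached by inverse-CDF transforms):

```python
import math
import random

# Sample pairs from a bivariate Gaussian copula with parameter rho:
# correlated standard normals are pushed through the N(0,1) CDF, giving
# correlated U(0,1) variables that carry only the dependence structure.
def sample_gaussian_copula(rho, n, seed=2):
    rng = random.Random(seed)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # N(0,1) CDF
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((phi(z1), phi(z2)))
    return pairs

pairs = sample_gaussian_copula(0.8, 20_000)
mean_u = sum(u for u, _ in pairs) / len(pairs)
mean_v = sum(v for _, v in pairs) / len(pairs)
cov = sum((u - mean_u) * (v - mean_v) for u, v in pairs) / len(pairs)
var_u = sum((u - mean_u) ** 2 for u, _ in pairs) / len(pairs)
var_v = sum((v - mean_v) ** 2 for _, v in pairs) / len(pairs)
corr_uv = cov / math.sqrt(var_u * var_v)   # positive, below rho
```

Each marginal stays uniform while the pair remains strongly correlated, which is the property that lets correlated, non-Gaussian merit functions be modeled jointly.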

  14. Nonparametric Bayesian inference for mean residual life functions in survival analysis.

    PubMed

    Poynor, Valerie; Kottas, Athanasios

    2018-01-19

    Modeling and inference for survival analysis problems typically revolves around different functions related to the survival distribution. Here, we focus on the mean residual life (MRL) function, which provides the expected remaining lifetime given that a subject has survived (i.e. is event-free) up to a particular time. This function is of direct interest in reliability, medical, and actuarial fields. In addition to its practical interpretation, the MRL function characterizes the survival distribution. We develop general Bayesian nonparametric inference for MRL functions built from a Dirichlet process mixture model for the associated survival distribution. The resulting model for the MRL function admits a representation as a mixture of the kernel MRL functions with time-dependent mixture weights. This model structure allows for a wide range of shapes for the MRL function. Particular emphasis is placed on the selection of the mixture kernel, taken to be a gamma distribution, to obtain desirable properties for the MRL function arising from the mixture model. The inference method is illustrated with a data set of two experimental groups and a data set involving right censoring. The supplementary material available at Biostatistics online provides further results on empirical performance of the model, using simulated data examples. © The Author 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
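The MRL function itself is elementary to compute for a single mixture kernel: m(t) = E[T - t | T > t] = (1/S(t)) ∫_t^∞ S(u) du. A quick numerical sketch for one gamma kernel (the paper's model would average many such kernels with time-dependent weights; the shape and scale here are arbitrary):

```python
import numpy as np
from scipy.stats import gamma
from scipy.integrate import quad

# Mean residual life m(t) = E[T - t | T > t] = (1/S(t)) * integral of S(u)
# from t to infinity, evaluated for a gamma kernel (shape a, scale b).
a, b = 2.0, 3.0

def mrl(t):
    sf_t = gamma.sf(t, a, scale=b)
    tail, _ = quad(lambda u: gamma.sf(u, a, scale=b), t, np.inf)
    return tail / sf_t

print(mrl(0.0))   # at t = 0 the MRL equals the mean, a*b = 6
```

For a gamma kernel with shape > 1 the hazard rises toward 1/b, so m(t) decreases toward b for large t, one of the shape behaviors the mixture construction can combine.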

  15. Empirical estimation of a distribution function with truncated and doubly interval-censored data and its application to AIDS studies.

    PubMed

    Sun, J

    1995-09-01

    In this paper we discuss the non-parametric estimation of a distribution function based on incomplete data for which the measurement origin of a survival time or the date of enrollment in a study is known only to belong to an interval. In addition, the survival time of interest itself is observed from a truncated distribution and is known only to lie in an interval. To estimate the distribution function, a simple self-consistency algorithm, a generalization of Turnbull's (1976, Journal of the Royal Statistical Society, Series B 38, 290-295) self-consistency algorithm, is proposed. This method is then used to analyze two AIDS cohort studies, for which direct use of the EM algorithm (Dempster, Laird and Rubin, 1977, Journal of the Royal Statistical Society, Series B 39, 1-38), which is computationally complicated, has previously been the usual method of analysis.
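A stripped-down version of the self-consistency idea for interval-censored data can be written in a few lines: distribute each observation's mass over the support points its interval covers, re-estimate the masses, and iterate to a fixed point. The intervals and support grid below are invented for illustration; Turnbull's actual algorithm additionally handles truncation and restricts mass to the so-called innermost intervals:

```python
import numpy as np

# Self-consistency (EM-type) iteration for interval-censored data, in the
# spirit of Turnbull (1976): each observation i is known only to lie in
# [L_i, R_i]; mass p_j on candidate support points is re-allocated until
# the estimate reproduces itself.
intervals = [(1, 3), (2, 5), (4, 6), (0, 2), (3, 6), (5, 8)]
support = np.arange(0, 9)                      # candidate support points
alpha = np.array([[(L <= s <= R) for s in support]
                  for L, R in intervals], dtype=float)

p = np.full(len(support), 1.0 / len(support))  # uniform start
for _ in range(500):
    # E-step: split each observation across the support points it covers;
    # M-step: new mass = average of those responsibilities.
    resp = alpha * p
    resp /= resp.sum(axis=1, keepdims=True)
    p_new = resp.mean(axis=0)
    if np.max(np.abs(p_new - p)) < 1e-10:
        break
    p = p_new

print(np.round(p, 4))
```

Each pass is exactly one EM iteration for a multinomial with partially observed cells, which is why the self-consistency and EM viewpoints coincide here.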

  16. Wigner time-delay distribution in chaotic cavities and freezing transition.

    PubMed

    Texier, Christophe; Majumdar, Satya N

    2013-06-21

    Using the joint distribution of the proper time delays of a chaotic cavity derived by Brouwer, Frahm, and Beenakker [Phys. Rev. Lett. 78, 4737 (1997)], we obtain, in the limit of a large number of channels N, the large deviation function for the distribution of the Wigner time delay (the sum of the proper times) by a Coulomb gas method. We show that the existence of a power-law tail originates from narrow-resonance contributions, related to a (second-order) freezing transition in the Coulomb gas.

  17. Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models.

    PubMed

    Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A

    2010-07-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.

  18. The heterogeneity of segmental dynamics of filled EPDM by (1)H transverse relaxation NMR.

    PubMed

    Moldovan, D; Fechete, R; Demco, D E; Culea, E; Blümich, B; Herrmann, V; Heinz, M

    2011-01-01

    Residual second moments of dipolar interactions M(2) and distributions of segmental-dynamics correlation times were measured by Hahn-echo decays in combination with an inverse Laplace transform for a series of unfilled and filled EPDM samples as functions of carbon-black N683 filler content. The filler-polymer chain interactions, which dramatically restrict the mobility of bound rubber, modify the dynamics of the mobile chains. These changes depend on the filler content and can be evaluated from the distributions of M(2). A dipolar filter was applied to eliminate the contribution of bound rubber. In the first approach, the Hahn-echo decays were fitted with a theoretical relationship to obtain the average values of the (1)H residual second moment and correlation time <τ(c)>. For the mobile EPDM segments the power-law distribution of the correlation function was compared to the exponential correlation function and found inadequate in the long-time regime. In the second approach a log-Gauss distribution for the correlation time was assumed. Furthermore, using an averaged value of the correlation time, the distributions of the residual second moment were determined using an inverse Laplace transform for the entire series of measured samples. The unfilled EPDM sample shows a bimodal distribution of residual second moments, with one component associated with the mobile polymer sub-chains (M(2) ≅ 6.1 rad(2) s(-2)) and the other associated with the dangling chains (M(2) ≅ 5.4 rad(2) s(-2)). By restraining the mobility of bound rubber, the carbon-black fillers induce diversity in the segmental dynamics, such as the appearance of a distinct mobile component and changes in the distribution of mobile and free-end polymer segments. Copyright © 2010 Elsevier Inc. All rights reserved.

  19. The heterogeneity of segmental dynamics of filled EPDM by 1H transverse relaxation NMR

    NASA Astrophysics Data System (ADS)

    Moldovan, D.; Fechete, R.; Demco, D. E.; Culea, E.; Blümich, B.; Herrmann, V.; Heinz, M.

    2011-01-01

    Residual second moments of dipolar interactions M̃2 and distributions of segmental-dynamics correlation times were measured by Hahn-echo decays in combination with an inverse Laplace transform for a series of unfilled and filled EPDM samples as functions of carbon-black N683 filler content. The filler-polymer chain interactions, which dramatically restrict the mobility of bound rubber, modify the dynamics of the mobile chains. These changes depend on the filler content and can be evaluated from the distributions of M̃2. A dipolar filter was applied to eliminate the contribution of bound rubber. In the first approach, the Hahn-echo decays were fitted with a theoretical relationship to obtain the average values of the 1H residual second moment and correlation time <τc>. For the mobile EPDM segments the power-law distribution of the correlation function was compared to the exponential correlation function and found inadequate in the long-time regime. In the second approach a log-Gauss distribution for the correlation time was assumed. Furthermore, using an averaged value of the correlation time, the distributions of the residual second moment were determined using an inverse Laplace transform for the entire series of measured samples. The unfilled EPDM sample shows a bimodal distribution of residual second moments, with one component associated with the mobile polymer sub-chains (M̃2 ≅ 6.1 rad² s⁻²) and the other associated with the dangling chains (M̃2 ≅ 5.4 rad² s⁻²). By restraining the mobility of bound rubber, the carbon-black fillers induce diversity in the segmental dynamics, such as the appearance of a distinct mobile component and changes in the distribution of mobile and free-end polymer segments.

  20. A Distributed Operating System for BMD Applications.

    DTIC Science & Technology

    1982-01-01

    Defense) applications executing on distributed hardware with local and shared memories. The objective was to develop real-time operating system functions...make the Basic Real-Time Operating System, and the set of new EPL language primitives that provide BMD application processes with efficient mechanisms

  1. On the Distribution of Earthquake Interevent Times and the Impact of Spatial Scale

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios

    2013-04-01

    The distribution of earthquake interevent times is a subject that has attracted much attention in the statistical physics literature [1-3]. A recent paper proposes that the distribution of earthquake interevent times follows from the interplay of the crustal strength distribution and the loading function (stress versus time) of the Earth's crust locally [4]. It was also shown that the Weibull distribution describes earthquake interevent times provided that the crustal strength also follows the Weibull distribution and that the loading function follows a power law during the loading cycle. I will discuss the implications of this work and will present supporting evidence based on the analysis of data from seismic catalogs. I will also discuss the theoretical evidence in support of the Weibull distribution based on models of statistical physics [5]. Since interevent-time distributions other than the Weibull are not excluded in [4], I will illustrate the use of the Kolmogorov-Smirnov test in order to determine which probability distributions are not rejected by the data. Finally, we propose a modification of the Weibull distribution if the size of the system under investigation (i.e., the area over which the earthquake activity occurs) is finite with respect to a critical link size. keywords: hypothesis testing, modified Weibull, hazard rate, finite size References [1] Corral, A., 2004. Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes, Phys. Rev. Lett., 92(10), art. no. 108501. [2] Saichev, A., Sornette, D., 2007. Theory of earthquake recurrence times, J. Geophys. Res., Ser. B 112, B04313/1-26. [3] Touati, S., Naylor, M., Main, I.G., 2009. Origin and nonuniversality of the earthquake interevent time distribution, Phys. Rev. Lett., 102(16), art. no. 168501. [4] Hristopulos, D.T., 2003. Spartan Gibbs random field models for geostatistical applications, SIAM Jour. Sci. Comput., 24, 2125-2162. [5] Eliazar, I., Klafter, J., 2006. Growth-collapse and decay-surge evolutions, and geometric Langevin equations, Physica A, 367, 106-128.
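The Kolmogorov-Smirnov screening step mentioned above can be illustrated with scipy (synthetic Weibull interevent times; note that fitting and testing on the same sample makes the test conservative, so a careful analysis would correct for the estimated parameters):

```python
import numpy as np
from scipy.stats import weibull_min, kstest

rng = np.random.default_rng(2)

# Synthetic interevent times drawn from a Weibull law with shape < 1
# (clustered events); fit the Weibull family and apply the
# Kolmogorov-Smirnov test to check that it is not rejected.
times = weibull_min.rvs(0.8, scale=100.0, size=1000, random_state=rng)
shape, loc, scale = weibull_min.fit(times, floc=0)
stat, pvalue = kstest(times, "weibull_min", args=(shape, loc, scale))
print(shape, scale, pvalue)
```

The same call with a candidate family that fits poorly (e.g. a plain exponential for strongly clustered data) would return a small p-value, which is the rejection criterion described in the abstract.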

  2. Covariant extension of the GPD overlap representation at low Fock states

    DOE PAGES

    Chouika, N.; Mezrag, C.; Moutarde, H.; ...

    2017-12-26

    Here, we present a novel approach to compute generalized parton distributions within the lightfront wave function overlap framework. We show how to systematically extend generalized parton distributions computed within the DGLAP region to the ERBL one, fulfilling at the same time both the polynomiality and positivity conditions. We exemplify our method using pion lightfront wave functions inspired by recent results of non-perturbative continuum techniques and algebraic nucleon lightfront wave functions. We also test the robustness of our algorithm on reggeized phenomenological parameterizations. This approach paves the way to a better understanding of the nucleon structure from non-perturbative techniques and to a unification of generalized parton distributions and transverse momentum dependent parton distribution functions phenomenology through lightfront wave functions.

  3. A dynamic re-partitioning strategy based on the distribution of key in Spark

    NASA Astrophysics Data System (ADS)

    Zhang, Tianyu; Lian, Xin

    2018-05-01

    Spark is a memory-based distributed data processing framework with the ability to process massive data sets, and it has become a focus in Big Data research. However, the performance of the Spark shuffle depends on the distribution of the data: the naive hash partition function of Spark cannot guarantee load balancing when the data are skewed, and job completion time is dominated by the node with the most data to process. To handle this problem, dynamic sampling is used. During task execution, a histogram counts the key frequency distribution on each node, from which the global key frequency distribution is generated. After analyzing the distribution of keys, load balance of the data partitions is achieved. Results show that the Dynamic Re-Partitioning function outperforms the default hash partition, Fine Partition, and the Balanced-Schedule strategy; it reduces task execution time and improves the efficiency of the whole cluster.
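The repartitioning idea, replacing hash placement with placement informed by a sampled key histogram, can be sketched independently of Spark. The greedy heaviest-key-first rule below is an illustrative stand-in, not the paper's exact strategy:

```python
from collections import Counter

# Frequency-aware repartitioning sketch: given a sampled key histogram,
# assign keys to partitions greedily (heaviest key to the currently
# lightest partition) instead of hashing, so skewed keys stop
# overloading a single node.
def balanced_partition(key_counts, n_partitions):
    loads = [0] * n_partitions
    assignment = {}
    for key, count in sorted(key_counts.items(), key=lambda kv: -kv[1]):
        target = loads.index(min(loads))   # lightest partition so far
        assignment[key] = target
        loads[target] += count
    return assignment, loads

# Heavily skewed key histogram (e.g. obtained from dynamic sampling):
hist = Counter({"a": 900, "b": 300, "c": 250, "d": 240, "e": 10})
assignment, loads = balanced_partition(hist, 3)
print(assignment, loads)
```

With a hash partitioner, "a" could land on the same partition as other heavy keys; here the single dominant key caps the maximum load, which is the skew problem the paper's Dynamic Re-Partitioning addresses.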

  4. Extracting Micro-Doppler Radar Signatures from Rotating Targets Using Fourier-Bessel Transform and Time-Frequency Analysis

    DTIC Science & Technology

    2014-10-16

    Time-frequency analysis, short-time Fourier transform, Wigner-Ville distribution, Fourier-Bessel transform, fractional Fourier transform. I. INTRODUCTION The most widely used time-frequency transforms are the short-time Fourier transform (STFT) and the Wigner-Ville distribution (WVD). In the STFT, time and...frequency resolutions are limited by the size of the window function used in calculating the STFT. For mono-component signals, the WVD gives the best time and frequency

  5. A Dual Power Law Distribution for the Stellar Initial Mass Function

    NASA Astrophysics Data System (ADS)

    Hoffmann, Karl Heinz; Essex, Christopher; Basu, Shantanu; Prehl, Janett

    2018-05-01

    We introduce a new dual power law (DPL) probability distribution function for the mass distribution of stellar and substellar objects at birth, otherwise known as the initial mass function (IMF). The model contains both deterministic and stochastic elements, and provides a unified framework within which to view the formation of brown dwarfs and stars resulting from an accretion process that starts from extremely low mass seeds. It does not depend upon a top down scenario of collapsing (Jeans) masses or an initial lognormal or otherwise IMF-like distribution of seed masses. Like the modified lognormal power law (MLP) distribution, the DPL distribution has a power law at the high mass end, as a result of exponential growth of mass coupled with equally likely stopping of accretion at any time interval. Unlike the MLP, a power law decay also appears at the low mass end of the IMF. This feature is closely connected to the accretion stopping probability rising from an initially low value up to a high value. This might be associated with physical effects of ejections sometimes (i.e., rarely) stopping accretion at early times followed by outflow driven accretion stopping at later times, with the transition happening at a critical time (therefore mass). Comparing the DPL to empirical data, the critical mass is close to the substellar mass limit, suggesting that the onset of nuclear fusion plays an important role in the subsequent accretion history of a young stellar object.
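The mechanism behind the high-mass power law, exponential mass growth stopped at an exponentially distributed random time, is easy to verify numerically: if m = m0·exp(g·t) and t ~ Exp(rate), then P(M > m) ∝ m^(-rate/g). A sketch with arbitrary parameter values (not fitted to any IMF data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Exponential mass growth stopped at an exponentially distributed random
# time yields a power-law mass distribution: for m = m0 * exp(g * t) with
# t ~ Exp(rate), P(M > m) = (m / m0)^(-rate / g), a Pareto tail.
m0, g, rate = 0.01, 1.0, 2.0            # illustrative values
t_stop = rng.exponential(1.0 / rate, size=200_000)
mass = m0 * np.exp(g * t_stop)

# Estimate the tail index: the slope of log P(M > m) versus log m
# should be close to -rate/g = -2.
m_sorted = np.sort(mass)
surv = 1.0 - np.arange(len(m_sorted)) / len(m_sorted)
sel = m_sorted > 10 * m0                # look well into the tail
slope = np.polyfit(np.log(m_sorted[sel]), np.log(surv[sel]), 1)[0]
print(slope)
```

A time-dependent stopping rate, as in the DPL model, would deform this single power law into the dual power-law shape described above.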

  6. LETTER TO THE EDITOR: Exact energy distribution function in a time-dependent harmonic oscillator

    NASA Astrophysics Data System (ADS)

    Robnik, Marko; Romanovski, Valery G.; Stöckmann, Hans-Jürgen

    2006-09-01

    Following a recent work by Robnik and Romanovski (2006 J. Phys. A: Math. Gen. 39 L35; 2006 Open Syst. Inf. Dyn. 13 197-222), we derive an explicit formula for the universal distribution function of the final energies in a time-dependent 1D harmonic oscillator, whose functional form does not depend on the details of the frequency ω(t) and is closely related to the conservation of the adiabatic invariant. The normalized distribution function is P(x) = π⁻¹(2μ² − x²)^(−1/2), where x = E₁ − Ē₁; E₁ is the final energy, Ē₁ is its average value, and μ² is the variance of E₁. Both Ē₁ and μ² can be calculated exactly using the WKB approach to all orders.

  7. Timing in a Variable Interval Procedure: Evidence for a Memory Singularity

    PubMed Central

    Matell, Matthew S.; Kim, Jung S.; Hartshorne, Loryn

    2013-01-01

    Rats were trained in either a 30-s peak-interval procedure or a 15-45-s variable-interval peak procedure with a uniform distribution (Exp 1) or a ramping probability distribution (Exp 2). Rats in all groups showed peak-shaped response functions centered around 30 s, with the uniform group having an earlier and broader peak response function, and the ramping group a later peak function, compared to the single-duration group. The changes in these mean functions, as well as the statistics from single-trial analyses, are better captured by a model of timing in which memory is represented by a single, average delay to reinforcement than by one in which all durations are stored as a distribution, such as the complete memory model of Scalar Expectancy Theory or a simple associative model. PMID:24012783

  8. Lévy flight with absorption: A model for diffusing diffusivity with long tails

    NASA Astrophysics Data System (ADS)

    Jain, Rohit; Sebastian, K. L.

    2017-03-01

    We consider diffusion of a particle in a rearranging environment, so that the diffusivity of the particle is a stochastic function of time. In our previous model of "diffusing diffusivity" [Jain and Sebastian, J. Phys. Chem. B 120, 3988 (2016), 10.1021/acs.jpcb.6b01527], it was shown that the mean square displacement of the particle remains Fickian, i.e., ∝ T at all times, but the probability distribution of the particle displacement is not Gaussian at all times. It is exponential at short times and crosses over to become Gaussian only in the large-time limit in the case where the distribution of D in that model has a steady-state limit which is exponential, i.e., πe(D) ∼ e^(−D/D₀). In the present study, we model the diffusivity of a particle as a Lévy flight process so that D has a power-law tailed distribution, viz., πe(D) ∼ D^(−1−α) with 0 < α < 1. We find that in the short-time limit the width of the displacement distribution is proportional to √T, implying that the diffusion is Fickian. But for long times the width is proportional to T^(1/2α), which is a characteristic of anomalous diffusion. The distribution function for the displacement of the particle is found to be a symmetric stable distribution with a stability index 2α, which preserves its shape at all times.
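A static caricature of this setup (a snapshot mixture rather than the paper's full Lévy-flight dynamics) already shows how a power-law diffusivity makes displacements heavy-tailed: mixing a Gaussian over D with tail D^(−1−α) gives P(|x| > u) ~ u^(−2α). All parameter values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Superstatistical snapshot of diffusing diffusivity with a power-law
# diffusivity: D has tail P(D > d) = d^(-alpha), and the displacement is
# x = sqrt(2*D*T) * Z with Z standard normal.  Heavy-tailed D makes the
# displacement distribution heavy-tailed, P(|x| > u) ~ u^(-2*alpha).
alpha, T = 0.75, 1.0
D = 1.0 + rng.pareto(alpha, size=500_000)      # Pareto tail, index alpha
x = np.sqrt(2.0 * D * T) * rng.standard_normal(500_000)

# Fit the tail exponent of the displacement survival function;
# theory gives a slope of -2*alpha = -1.5 for these parameters.
abs_sorted = np.sort(np.abs(x))
surv = 1.0 - np.arange(len(abs_sorted)) / len(abs_sorted)
sel = abs_sorted > 20.0
slope = np.polyfit(np.log(abs_sorted[sel]), np.log(surv[sel]), 1)[0]
print(slope)
```

The full model differs in that D evolves in time, which is what produces the √T to T^(1/2α) crossover of the width; the static mixture only captures the tail exponent.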

  9. The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Brissette, François; Chen, Jie

    2013-04-01

    Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, owing to its simplicity and good performance. However, various probability distributions have been used to simulate precipitation amounts, and spatiotemporal differences exist in the applicability of different distribution models. Assessing the applicability of different distribution models is therefore necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto) are directly and indirectly evaluated on their ability to reproduce the observed time series of precipitation amounts. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution, and extreme values, are used to quantify the performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function: the three-parameter models outperform the others, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is less clear-cut when the simulated time series are used to drive a hydrological model; while the benefit of additional parameters is not nearly as obvious there, the mixed exponential distribution nonetheless appears to be the best candidate for hydrological modeling.
The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
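The direct part of such an evaluation, fitting candidate amount distributions by maximum likelihood and comparing them, can be sketched with scipy. The data below are synthetic stand-ins for wet-day amounts, and only three of the six candidate families are shown:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic wet-day precipitation amounts (gamma-distributed, in mm).
# Because the gamma and Weibull families nest the exponential, their
# maximized log-likelihoods must be at least as high as the exponential's.
amounts = stats.gamma.rvs(0.7, scale=8.0, size=2000, random_state=rng)

def loglik(dist, data, **fit_kwargs):
    params = dist.fit(data, **fit_kwargs)
    return np.sum(dist.logpdf(data, *params)), params

ll_expon, _ = loglik(stats.expon, amounts, floc=0)
ll_gamma, _ = loglik(stats.gamma, amounts, floc=0)
ll_weib, _ = loglik(stats.weibull_min, amounts, floc=0)
print(ll_expon, ll_gamma, ll_weib)
```

The "indirect" evaluation in the study goes one step further: the fitted generators drive a hydrological model, and the simulated discharges are compared, which is where the ranking by likelihood becomes less decisive.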

  10. [Hazard function and life table: an introduction to the failure time analysis].

    PubMed

    Matsushita, K; Inaba, H

    1987-04-01

    Failure time analysis has become popular in demographic studies. It can be viewed as a part of regression analysis with limited dependent variables as well as a special case of event history analysis and multistate demography. The idea of hazard function and failure time analysis, however, has not been properly introduced to nor commonly discussed by demographers in Japan. The concept of hazard function in comparison with life tables is briefly described, where the force of mortality is interchangeable with the hazard rate. The basic idea of failure time analysis is summarized for the cases of exponential distribution, normal distribution, and proportional hazard models. The multiple decrement life table is also introduced as an example of lifetime data analysis with cause-specific hazard rates.
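The central identity is the hazard rate h(t) = f(t)/S(t), of which the force of mortality in a life table is the same object. It can be checked numerically for the exponential (constant hazard) and a Weibull with shape 2 (increasing hazard); the parameter values are arbitrary:

```python
import numpy as np
from scipy.stats import expon, weibull_min

# Hazard rate h(t) = f(t) / S(t): the instantaneous failure rate given
# survival to t.  The exponential distribution has constant hazard
# (the memoryless case); a Weibull with shape > 1 has a hazard that
# rises with t, mirroring an increasing force of mortality.
t = np.linspace(0.1, 5.0, 50)

h_exp = expon.pdf(t, scale=2.0) / expon.sf(t, scale=2.0)
h_weib = weibull_min.pdf(t, 2.0, scale=2.0) / weibull_min.sf(t, 2.0, scale=2.0)

print(h_exp[0], h_exp[-1])    # both 0.5: constant hazard 1/scale
print(h_weib[0], h_weib[-1])  # increasing hazard
```

Proportional hazards models, mentioned above, multiply such a baseline h(t) by a covariate-dependent factor rather than specifying the distribution directly.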

  11. Time Synchronization and Distribution Mechanisms for Space Networks

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Gao, Jay L.; Clare, Loren P.; Mills, David L.

    2011-01-01

    This work discusses research on the problems of synchronizing and distributing time information between spacecraft based on the Network Time Protocol (NTP), a standard time-synchronization protocol widely used in terrestrial networks. The Proximity-1 Space Link Interleaved Time Synchronization (PITS) protocol was designed and developed for synchronizing spacecraft that are in proximity, i.e., separated by less than 100,000 km. A particular application is synchronization between a Mars orbiter and rover; lunar scenarios, as well as outer-planet deep-space mothership-probe missions, may also apply. The spacecraft with the more accurate time information functions as a time server, and the other spacecraft functions as a time client. PITS can be easily integrated into and adapted to the CCSDS Proximity-1 Space Link Protocol with minor modifications. In particular, PITS can take advantage of the timestamping strategy that the underlying link-layer functionality provides for accurate time-offset calculation. The PITS algorithm achieves time synchronization with eight consecutive space network time packet exchanges between two spacecraft. PITS can detect and avoid possible errors from receiving duplicate and out-of-order packets by comparing them with the current state variables and timestamps. Further, PITS is able to detect error events and autonomously recover from unexpected events that may occur during the time synchronization and distribution process, achieving an additional level of protocol protection on top of CRC or error-correction codes. PITS is a lightweight and efficient protocol, eliminating the need for explicit frame sequence numbers and long buffer storage, and it is capable of providing time synchronization and distribution services in the more general setting where multiple entities need to achieve time synchronization over a single point-to-point link.
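The timestamp arithmetic underlying NTP-style synchronization, which PITS builds on per the abstract, uses four timestamps per exchange. A sketch with invented timestamps; the symmetric-link assumption behind these formulas is what limits accuracy over asymmetric space links:

```python
# Two-way time-transfer arithmetic of the kind NTP relies on.
# t1: client send, t2: server receive, t3: server send, t4: client
# receive; t2 and t3 are on the server clock, t1 and t4 on the client
# clock, all in seconds.  Standard NTP formulas, assuming a roughly
# symmetric link delay.
def offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Client clock 0.35 s behind the server, 0.1 s one-way light time:
t1 = 100.00                 # client send (client clock)
t2 = 100.45                 # server receive = 100.00 + 0.1 + 0.35
t3 = 100.55                 # server send (server clock)
t4 = 100.30                 # client receive = t3 + 0.1 - 0.35
offset, delay = offset_and_delay(t1, t2, t3, t4)
print(offset, delay)        # ≈ 0.35 s offset, ≈ 0.2 s round-trip delay
```

Repeating such exchanges and filtering the results, as PITS does over eight packet exchanges, reduces the effect of jitter on the offset estimate.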

  12. Universality classes of fluctuation dynamics in hierarchical complex systems

    NASA Astrophysics Data System (ADS)

    Macêdo, A. M. S.; González, Iván R. Roa; Salazar, D. S. P.; Vasconcelos, G. L.

    2017-03-01

    A unified approach is proposed to describe the statistics of the short-time dynamics of multiscale complex systems. The probability density function of the relevant time series (signal) is represented as a statistical superposition of a large time-scale distribution weighted by the distribution of certain internal variables that characterize the slowly changing background. The dynamics of the background is formulated as a hierarchical stochastic model whose form is derived from simple physical constraints, which in turn restrict the dynamics to only two possible classes. The probability distributions of both the signal and the background have simple representations in terms of Meijer G functions. The two universality classes for the background dynamics manifest themselves in the signal distribution as two types of tails: power law and stretched exponential, respectively. A detailed analysis of empirical data from classical turbulence and financial markets shows excellent agreement with the theory.

  13. Time-sliced perturbation theory for large scale structure I: general formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blas, Diego; Garny, Mathias; Sibiryakov, Sergey

    2016-07-01

    We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.

  14. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
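The compounding construction can be sketched directly: draw a variance for each window, generate locally Gaussian data, then pool. The pooled signal shows excess kurtosis, the signature of a compounded (non-Gaussian) long-horizon distribution. The lognormal variance law below is an illustrative assumption, not the paper's empirical finding:

```python
import numpy as np

rng = np.random.default_rng(6)

# Compounding sketch: a signal that is locally Gaussian but whose
# standard deviation drifts between windows.  Pooling all windows
# compounds the Gaussian with the parameter distribution, producing
# heavier-than-Gaussian tails (excess kurtosis > 0).
n_windows, window = 400, 250
local_sd = rng.lognormal(mean=0.0, sigma=0.5, size=n_windows)
signal = np.concatenate(
    [sd * rng.standard_normal(window) for sd in local_sd])

# Empirical parameter distribution: local variances, window by window,
# as extracted in the paper's decomposition step.
local_var = signal.reshape(n_windows, window).var(axis=1)

z = signal / signal.std()
kurtosis_excess = np.mean(z**4) - 3.0
print(kurtosis_excess)     # > 0: heavier tails than a single Gaussian
```

The empirical step in the paper is the reverse direction: measure the local-variance distribution from data and use it as the compounding weight for the long-horizon model.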

  15. Compounding approach for univariate time series with nonstationary variances.

    PubMed

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.

  16. Non-Fickian dispersion of groundwater age

    PubMed Central

    Engdahl, Nicholas B.; Ginn, Timothy R.; Fogg, Graham E.

    2014-01-01

    We expand the governing equation of groundwater age to account for non-Fickian dispersive fluxes using continuous random walks. Groundwater age is included as an additional (fifth) dimension on which the volumetric mass density of water is distributed and we follow the classical random walk derivation now in five dimensions. The general solution of the random walk recovers the previous conventional model of age when the low order moments of the transition density functions remain finite at their limits and describes non-Fickian age distributions when the transition densities diverge. Previously published transition densities are then used to show how the added dimension in age affects the governing differential equations. Depending on which transition densities diverge, the resulting models may be nonlocal in time, space, or age and can describe asymptotic or pre-asymptotic dispersion. A joint distribution function of time and age transitions is developed as a conditional probability and a natural result of this is that time and age must always have identical transition rate functions. This implies that a transition density defined for age can substitute for a density in time and this has implications for transport model parameter estimation. We present examples of simulated age distributions from a geologically based, heterogeneous domain that exhibit non-Fickian behavior and show that the non-Fickian model provides better descriptions of the distributions than the Fickian model. PMID:24976651

  17. Low-energy ion distribution functions on a magnetically quiet day at geostationary altitude /L = 7/

    NASA Technical Reports Server (NTRS)

    Singh, N.; Raitt, W. J.; Yasuhara, F.

    1982-01-01

    Ion energy and pitch angle distribution functions are examined for a magnetically quiet day using averaged data from ATS 6. For both field-aligned and perpendicular fluxes, the populations have a mixture of characteristic energies, and the distribution functions can be fairly well approximated by Maxwellian distributions over three different energy bands in the range 3-600 eV. Pitch angle distributions varying with local time, together with the energy distributions, are used to compute the total ion density. Pitch angle scattering mechanisms responsible for the observed transformation of the pitch angle distribution are examined, and it is found that magnetic noise of a certain power spectral density, belonging to the electromagnetic ion cyclotron mode near the ion cyclotron frequency, can be effective in trapping the field-aligned fluxes by pitch angle scattering.

  18. Effects of the reconnection electric field on crescent electron distribution functions in asymmetric guide field reconnection

    NASA Astrophysics Data System (ADS)

    Bessho, N.; Chen, L. J.; Hesse, M.; Wang, S.

    2017-12-01

    In asymmetric reconnection with a guide field at the Earth's magnetopause, electron motion in the electron diffusion region (EDR) is strongly affected by the guide field, the Hall electric field, and the reconnection electric field. The electron motion in the EDR is neither a simple gyration around the guide field nor a simple meandering motion across the current sheet. The combined meandering motion and gyration has essential effects on particle acceleration by the in-plane Hall electric field (existing only on the magnetospheric side) and the out-of-plane reconnection electric field. We analyze electron motion and crescent-shaped electron distribution functions in the EDR in asymmetric guide field reconnection, and perform 2-D particle-in-cell (PIC) simulations to elucidate the effect of the reconnection electric field on electron distribution functions. Recently, we analytically expressed the acceleration effect of the reconnection electric field on electron crescent distribution functions in asymmetric reconnection without a guide field (Bessho et al., Phys. Plasmas, 24, 072903, 2017). We extend the theory to asymmetric guide field reconnection, and predict the crescent bulge in distribution functions. Assuming a 1-D approximation of the field variations in the EDR, we derive the time period of the oscillatory electron motion (meandering + gyration) in the EDR. The time period is expressed as a hybrid of the meandering period and the gyroperiod. Due to the guide field, electrons not only oscillate along crescent-shaped trajectories in the velocity plane perpendicular to the antiparallel magnetic fields, but also move along parabolic trajectories in the velocity plane coplanar with the magnetic field. The trajectory in velocity space gradually shifts in the acceleration direction of the reconnection electric field as the multiple bounces continue. Due to the guide field, the electron distributions for meandering particles are bounded by two paraboloids (or hyperboloids) in velocity space. We compare theory and PIC simulation results for the velocity shift of crescent distribution functions based on the derived time period of the bounce motion in a guide field. The theoretical predictions are applied to electron distributions observed by MMS in magnetopause reconnection to estimate the reconnection electric field.

  19. Longitudinal Distribution of the Functional Feeding Groups of Aquatic Insects in Streams of the Brazilian Cerrado Savanna.

    PubMed

    Brasil, L S; Juen, L; Batista, J D; Pavan, M G; Cabette, H S R

    2014-10-01

    We demonstrate that the distribution of the functional feeding groups of aquatic insects is related to hierarchical patch dynamics. Patches are sites with unique environmental and functional characteristics that are discontinuously distributed in time and space within a lotic system. This distribution predicts that the occurrence of species will be based predominantly on their environmental requirements. We sampled three streams within the same drainage basin in the Brazilian Cerrado savanna, focusing on waterfalls and associated habitats (upstream, downstream), representing different functional zones. We collected 2,636 specimens representing six functional feeding groups (FFGs): brushers, collector-gatherers, collector-filterers, shredders, predators, and scrapers. The frequency of occurrence of these groups varied significantly among environments. This variation appeared to be related to the distinct characteristics of the different habitat patches, which led us to infer that the hierarchical patch dynamics model can best explain the distribution of functional feeding groups in minor lotic environments, such as waterfalls.

  20. Optimal File-Distribution in Heterogeneous and Asymmetric Storage Networks

    NASA Astrophysics Data System (ADS)

    Langner, Tobias; Schindelhauer, Christian; Souza, Alexander

    We consider an optimisation problem which is motivated from storage virtualisation in the Internet. While storage networks make use of dedicated hardware to provide homogeneous bandwidth between servers and clients, in the Internet, connections between storage servers and clients are heterogeneous and often asymmetric with respect to upload and download. Thus, for a large file, the question arises how it should be fragmented and distributed among the servers to grant "optimal" access to the contents. We concentrate on the transfer time of a file, which is the time needed for one upload and a sequence of n downloads, using a set of m servers with heterogeneous bandwidths. We assume that fragments of the file can be transferred in parallel to and from multiple servers. This model yields a distribution problem that examines the question of how these fragments should be distributed onto those servers in order to minimise the transfer time. We present an algorithm, called FlowScaling, that finds an optimal solution within running time O(m log m). We formulate the distribution problem as a maximum flow problem, which involves a function that states whether a solution with a given transfer time bound exists. This function is then used with a scaling argument to determine an optimal solution within the claimed time complexity.
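
    The allocation idea underlying such schemes can be illustrated with a deliberately simplified, hypothetical single-transfer version: splitting the file in proportion to server bandwidth equalizes all finish times, which is what makes the split optimal. (The paper's FlowScaling algorithm additionally handles the upload plus n-download sequence via a maximum-flow formulation; that part is not reproduced here.)

```python
def optimal_fragments(file_size, bandwidths):
    """Split a file across servers for one fully parallel transfer.
    Fragments proportional to bandwidth make every server finish at the
    same moment, minimising the makespan file_size / sum(bandwidths)."""
    total = sum(bandwidths)
    fragments = [file_size * b / total for b in bandwidths]
    transfer_time = file_size / total
    return fragments, transfer_time

# Three hypothetical servers with heterogeneous bandwidths (units/s).
frags, t = optimal_fragments(1000.0, [10.0, 30.0, 60.0])
print(frags, t)  # [100.0, 300.0, 600.0] and finish time 10.0
```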

  1. Bivariate mass-size relation as a function of morphology as determined by Galaxy Zoo 2 crowdsourced visual classifications

    NASA Astrophysics Data System (ADS)

    Beck, Melanie; Scarlata, Claudia; Fortson, Lucy; Willett, Kyle; Galloway, Melanie

    2016-01-01

    It is well known that the mass-size distribution evolves as a function of cosmic time and that this evolution is different between passive and star-forming galaxy populations. However, the devil is in the details and the precise evolution is still a matter of debate, since this requires careful comparison between similar galaxy populations over cosmic time while simultaneously taking into account changes in image resolution, rest-frame wavelength, and surface brightness dimming, in addition to properly selecting representative morphological samples. Here we present the first step in an ambitious undertaking to calculate the bivariate mass-size distribution as a function of time and morphology. We begin with a large sample (~3 × 10^5) of SDSS galaxies at z ~ 0.1. Morphologies for this sample have been determined by Galaxy Zoo crowdsourced visual classifications, and we split the sample not only into disk- and bulge-dominated galaxies but also into finer morphology bins such as bulge strength. Bivariate distribution functions are the only way to properly account for biases and selection effects. In particular, we quantify the mass-size distribution with a version of the parametric Maximum Likelihood estimator which has been modified to account for measurement errors as well as upper limits on galaxy sizes.

  2. Improving Project Management with Simulation and Completion Distribution Functions

    NASA Technical Reports Server (NTRS)

    Cates, Grant R.

    2004-01-01

    Despite the critical importance of project completion timeliness, management practices in place today remain inadequate for addressing the persistent problem of project completion tardiness. A major culprit in late projects is uncertainty, which most, if not all, projects are inherently subject to. This uncertainty resides in the estimates for activity durations, the occurrence of unplanned and unforeseen events, and the availability of critical resources. In response to this problem, this research developed a comprehensive simulation-based methodology for conducting quantitative project completion time risk analysis. It is called the Project Assessment by Simulation Technique (PAST). This new tool enables project stakeholders to visualize uncertainty or risk, i.e. the likelihood of their project completing late and the magnitude of the lateness, by providing them with a completion time distribution function of their projects. Discrete event simulation is used within PAST to determine the completion distribution function for the project of interest. The simulation is populated with both deterministic and stochastic elements. The deterministic inputs include planned project activities, precedence requirements, and resource requirements. The stochastic inputs include activity duration growth distributions, probabilities for events that can impact the project, and other dynamic constraints that may be placed upon project activities and milestones. These stochastic inputs are based upon past data from similar projects. The time for an entity to complete the simulation network, subject to both the deterministic and stochastic factors, represents the time to complete the project. Repeating the simulation hundreds or thousands of times allows one to create the project completion distribution function. The Project Assessment by Simulation Technique was demonstrated to be effective for the on-going NASA project to assemble the International Space Station.
Approximately $500 million per month is being spent on this project, which is scheduled to complete by 2010. NASA project stakeholders participated in determining and managing completion distribution functions produced from PAST. The first result was that project stakeholders improved project completion risk awareness. Secondly, using PAST, mitigation options were analyzed to improve project completion performance and reduce total project cost.
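
    The simulation recipe described above can be sketched in miniature. The following toy model, a hypothetical four-activity network with triangular duration distributions, is not the PAST tool itself; it only illustrates how repeated sampling of an activity network yields a completion-time distribution.

```python
import random

# Hypothetical toy project: activity -> (predecessors, (min, mode, max) days).
# Declared in precedence order, so plain insertion-order iteration is valid.
activities = {
    "design":    ([],                   (4.0, 6.0, 10.0)),
    "procure":   (["design"],           (2.0, 3.0, 8.0)),
    "build":     (["design"],           (5.0, 7.0, 12.0)),
    "integrate": (["procure", "build"], (3.0, 4.0, 9.0)),
}

def one_run(rng):
    finish = {}
    for name, (preds, (lo, mode, hi)) in activities.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + rng.triangular(lo, hi, mode)
    return finish["integrate"]

rng = random.Random(42)
completions = sorted(one_run(rng) for _ in range(10_000))
deadline = 20.0
p_on_time = sum(c <= deadline for c in completions) / len(completions)
print(f"P(complete by day {deadline:.0f}) ~= {p_on_time:.2f}")
```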

  3. Time behavior of solar flare particles to 5 AU

    NASA Technical Reports Server (NTRS)

    Haffner, J. W.

    1972-01-01

    A simple model of solar flare radiation event particle transport is developed to permit the calculation of fluxes and related quantities as a function of distance from the sun (R). This model assumes the particles spiral around the solar magnetic field lines with a constant pitch angle. The particle angular distributions and onset plus arrival times as functions of energy at 1 AU agree with observations if the pitch angle distribution peaks near 90 deg. As a consequence, the time dependence factor is essentially proportional to R^1.7 (R in AU), and the event flux is proportional to R^-2.

  4. The nitrate response of a lowland catchment and groundwater travel times

    NASA Astrophysics Data System (ADS)

    van der Velde, Ype; Rozemeijer, Joachim; de Rooij, Gerrit; van Geer, Frans

    2010-05-01

    Intensive agriculture in lowland catchments causes eutrophication of downstream waters. To determine effective measures to reduce the nutrient loads from upstream lowland catchments, we need to understand the origin of long-term and daily variations in surface water nutrient concentrations. Surface water concentrations are often linked to travel time distributions of water passing through the saturated and unsaturated soil of the contributing catchment. This distribution represents the contact time over which sorption, desorption and degradation take place. However, travel time distributions are strongly influenced by processes like tube drain flow, overland flow and the dynamics of draining ditches and streams, and therefore exhibit strong daily and seasonal variations. The study we will present is situated in the 6.6 km² Hupsel brook catchment in The Netherlands. In this catchment, nitrate and chloride concentrations have been intensively monitored for the past 26 years under steadily decreasing agricultural inputs. We described the complicated dynamics of subsurface water fluxes, with streams, ditches and tube drains locally switching between active and passive depending on the ambient groundwater level, using a groundwater model with high spatial and temporal resolution. A transient particle tracking approach is used to derive a unique catchment-scale travel time distribution for each day during the 26-year model period. These transient travel time distributions are not smooth, but strongly spiked, reflecting the contribution of past rainfall events to the current discharge.
We will show that a catchment-scale mass response function approach that only describes catchment-scale mixing and degradation suffices to accurately reproduce observed chloride and nitrate surface water concentrations as long as the mass response functions include the dynamics of travel time distributions caused by the highly variable connectivity of the surface water network.
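
    The mass response idea reduces, in its simplest stationary form, to a convolution of the input signal with a travel time distribution. The sketch below assumes an exponential travel time distribution purely for illustration; as the abstract stresses, real catchment travel time distributions are time-variant and strongly spiked.

```python
import numpy as np

dt = 1.0                                           # time step, days
t = np.arange(0.0, 2000.0, dt)
c_in = 1.0 + 0.5 * np.sin(2 * np.pi * t / 365.0)   # seasonal input signal

# Assumed travel time distribution: exponential with a 200-day mean,
# i.e. a single well-mixed store (purely illustrative).
mean_tt = 200.0
h = np.exp(-t / mean_tt) / mean_tt                 # pdf; integrates to ~1

# Today's outflow concentration is a travel-time-weighted average of
# past inputs: c_out(t) = sum over tau of c_in(t - tau) * h(tau) * dt.
c_out = np.convolve(c_in, h)[: t.size] * dt

# The store damps and lags the seasonal signal.
print(f"input std {c_in.std():.3f}, output std {c_out[720:].std():.3f}")
```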

  5. Spherical Harmonic Analysis of Particle Velocity Distribution Function: Comparison of Moments and Anisotropies using Cluster Data

    NASA Technical Reports Server (NTRS)

    Gurgiolo, Chris; Vinas, Adolfo F.

    2009-01-01

    This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high-angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.

  6. Dynamics of transit times and StorAge Selection functions in four forested catchments from stable isotope data

    NASA Astrophysics Data System (ADS)

    Rodriguez, Nicolas B.; McGuire, Kevin J.; Klaus, Julian

    2017-04-01

    Transit time distributions, residence time distributions and StorAge Selection functions are fundamental integrated descriptors of water storage, mixing, and release in catchments. In this contribution, we determined these time-variant functions in four neighboring forested catchments in the H.J. Andrews Experimental Forest, Oregon, USA, by employing a two-year time series of 18O in precipitation and discharge. Previous studies in these catchments assumed stationary, exponentially distributed transit times and complete mixing/random sampling to explore the influence of various catchment properties on the mean transit time. Here we relaxed such assumptions to relate transit time dynamics and the variability of StorAge Selection functions to catchment characteristics, catchment storage, and meteorological forcing seasonality. Conceptual models of the catchments, consisting of two reservoirs combined in series-parallel, were calibrated to discharge and stable isotope tracer data. We assumed randomly sampled/fully mixed conditions for each reservoir, which resulted in an incompletely mixed system overall. Based on the results we solved the Master Equation, which describes the dynamics of water ages in storage and in catchment outflows. Consistent across all catchments, we found that transit times were generally shorter during wet periods, indicating the contribution of shallow storage (soil, saprolite) to discharge. During extended dry periods, transit times increased significantly, indicating the contribution of deeper storage (bedrock) to discharge. Our work indicated that the strong seasonality of precipitation impacted transit times by leading to a dynamic selection of stored water ages, whereas catchment size was not a control on transit times. In general, this work showed the usefulness of using time-variant transit times with conceptual models and confirmed the existence of the catchment age mixing behaviors emerging from other similar studies.

  7. Deformation dependence of proton decay rates and angular distributions in a time-dependent approach

    NASA Astrophysics Data System (ADS)

    Carjan, N.; Talou, P.; Strottman, D.

    1998-12-01

    A new, time-dependent approach to proton decay from axially symmetric deformed nuclei is presented. The two-dimensional time-dependent Schrödinger equation for the interaction between the emitted proton and the rest of the nucleus is solved numerically for well-defined initial quasi-stationary proton states. Applied to hypothetical proton emission from excited states in deformed nuclei of 208Pb, this approach shows that the problem cannot be reduced to one dimension. There is in general more than one direction of emission, with wide distributions around each, determined mainly by the quantum numbers of the initial wave function rather than by the potential landscape. The distribution of the "residual" angular momentum and its variation in time play a major role in determining the decay rate. In a couple of cases, no exponential decay was found during the calculated time evolution (2×10⁻²¹ s), although more than half of the wave function escaped during that time.

  8. Features of the use of time-frequency distributions for controlling the mixture-producing aggregate

    NASA Astrophysics Data System (ADS)

    Fedosenkov, D. B.; Simikova, A. A.; Fedosenkov, B. A.

    2018-05-01

    The paper presents and substantiates information on the filtering properties of the mixing unit as part of the mixture-producing aggregate. Relevant theoretical data concerning the channel transfer function of the mixing unit and multidimensional material flow signals are presented. Ordinary one-dimensional material flow signals are described in terms of Cohen’s-class time-frequency distributions computed with Gabor wavelet functions. Two time-frequency signal representations, the Rihaczek and the Wigner-Ville distributions, are used to show how control problems in mixture-producing systems can be solved. In particular, the latter exhibits the low-pass filtering properties present in practically any low-pass element of a physical system.

  9. Radar Imaging Using The Wigner-Ville Distribution

    NASA Astrophysics Data System (ADS)

    Boashash, Boualem; Kenny, Owen P.; Whitehouse, Harper J.

    1989-12-01

    The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. This paper first discusses the radar equation in terms of the time-frequency representation of the signal received from a radar system. It then presents a method of tomographic reconstruction for time-frequency images to estimate the scattering function of the aircraft. An optical architecture is then discussed for the real-time implementation of the analysis method based on the WVD.
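
    The concentration property that makes the WVD attractive for imaging can be demonstrated with a compact discrete implementation (an illustrative numpy sketch, not the optical architecture discussed in the paper): for a linear chirp, the WVD ridge tracks the instantaneous frequency.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x:
    W[n, k] = FFT over lag m of x[n+m] * conj(x[n-m])."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        half = min(n, N - 1 - n)               # largest symmetric lag window
        m = np.arange(-half, half + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(kernel).real         # Hermitian kernel -> real spectrum
    return W

# Analytic linear chirp whose instantaneous frequency sweeps 0.05 -> 0.25
# cycles/sample; the WVD concentrates energy along that line.
N = 256
n = np.arange(N)
x = np.exp(2j * np.pi * (0.05 * n + 0.5 * (0.2 / N) * n**2))

W = wigner_ville(x)
# The lag product doubles the phase slope, so the frequency axis is
# k/(2N) cycles/sample and the ridge sits at k = 2*N*f(n).
peak_bins = W[32:224].argmax(axis=1)
print(peak_bins[:5], peak_bins[-5:])
```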

  10. System Lifetimes, The Memoryless Property, Euler's Constant, and Pi

    ERIC Educational Resources Information Center

    Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon

    2013-01-01

    A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…

  11. Modeling pore corrosion in normally open gold- plated copper connectors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien

    2008-09-01

    The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H₂S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.

  12. The Superstatistical Nature and Interoccurrence Time of Atmospheric Mercury Concentration Fluctuations

    NASA Astrophysics Data System (ADS)

    Carbone, F.; Bruno, A. G.; Naccarato, A.; De Simone, F.; Gencarelli, C. N.; Sprovieri, F.; Hedgecock, I. M.; Landis, M. S.; Skov, H.; Pfaffhuber, K. A.; Read, K. A.; Martin, L.; Angot, H.; Dommergue, A.; Magand, O.; Pirrone, N.

    2018-01-01

    The probability density function (PDF) of the time intervals between subsequent extreme events in atmospheric Hg⁰ concentration data series from different latitudes has been investigated. The Hg⁰ dynamics exhibit a long-term memory autocorrelation function. Above a fixed threshold Q in the data, the PDFs of the interoccurrence time of the Hg⁰ data are well described by a Tsallis q-exponential function. This PDF behavior has been explained in the framework of superstatistics, where the competition between multiple mesoscopic processes affects the macroscopic dynamics. An extensive parameter μ, encompassing all possible fluctuations related to mesoscopic phenomena, has been identified. It follows a χ² distribution, indicative of the superstatistical nature of the overall process. Shuffling the data series destroys the long-term memory, the distributions become independent of Q, and the PDFs collapse onto the same exponential distribution. The possible central role of atmospheric turbulence in extreme events in the Hg⁰ data is highlighted.

  13. Modelling the line shape of very low energy peaks of positron beam induced secondary electrons measured using a time of flight spectrometer

    NASA Astrophysics Data System (ADS)

    Fairchild, A. J.; Chirayath, V. A.; Gladen, R. W.; Chrysler, M. D.; Koymen, A. R.; Weiss, A. H.

    2017-01-01

    In this paper, we present results of numerical modelling of the University of Texas at Arlington’s time of flight positron annihilation induced Auger electron spectrometer (UTA TOF-PAES) using the SIMION® 8.1 Ion and Electron Optics Simulator. The time of flight (TOF) spectrometer measures the energy of electrons emitted from the surface of a sample as a result of the interaction of low energy positrons with the sample surface. We have used SIMION® 8.1 to calculate the time-of-flight spectra of electrons leaving the sample surface with energies and angles dispersed according to distribution functions chosen to model the positron-induced electron emission process, and have thus obtained an estimate of the true electron energy distribution. The simulated TOF distribution was convolved with a Gaussian timing resolution function and compared to the experimental distribution. The broadening observed in the simulated TOF spectra was found to be consistent with that observed in the experimental secondary electron spectra of Cu generated by positrons incident with energies from 1.5 eV to 901 eV, when a timing resolution of 2.3 ns was assumed.
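
    The convolution step is easy to reproduce (illustrative numbers only; treating the quoted 2.3 ns timing resolution as a FWHM is an assumption here, as is the shape of the ideal spectrum):

```python
import numpy as np

dt = 0.1                                   # ns per bin
t = np.arange(0.0, 100.0, dt)

# Hypothetical ideal TOF spectrum: a sharp Gaussian peak (sigma = 0.5 ns).
ideal = np.exp(-0.5 * ((t - 40.0) / 0.5) ** 2)

# Gaussian timing-resolution kernel; here 2.3 ns is taken as the FWHM.
fwhm = 2.3
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
k = np.arange(-5.0 * sigma, 5.0 * sigma + dt, dt)
kernel = np.exp(-0.5 * (k / sigma) ** 2)
kernel /= kernel.sum()                     # preserve total counts

measured = np.convolve(ideal, kernel, mode="same")
# Convolving two Gaussians broadens the peak width to sqrt(0.5**2 + sigma**2).
```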

  14. Time-Frequency Based Instantaneous Frequency Estimation of Sparse Signals from an Incomplete Set of Samples

    DTIC Science & Technology

    2014-06-17

    [Figure: panels showing the Wigner distribution, the L-Wigner distribution, and their auto-correlation functions.] …Although bilinear or higher order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with…

  15. Modeling Magnetotail Ion Distributions with Global Magnetohydrodynamic and Ion Trajectory Calculations

    NASA Technical Reports Server (NTRS)

    El-Alaoui, M.; Ashour-Abdalla, M.; Raeder, J.; Peroomian, V.; Frank, L. A.; Paterson, W. R.; Bosqued, J. M.

    1998-01-01

    On February 9, 1995, the Comprehensive Plasma Instrumentation (CPI) on the Geotail spacecraft observed a complex, structured ion distribution function near the magnetotail midplane at x approximately -30 R(sub E). On this same day the Wind spacecraft observed a quiet solar wind and an interplanetary magnetic field (IMF) that was northward for more than five hours, and an IMF B(sub y) component with a magnitude comparable to that of the IMF B(sub z) component. In this study, we determined the sources of the ions in this distribution function by following approximately 90,000 ion trajectories backward in time, using the time-dependent electric and magnetic fields obtained from a global MHD simulation. The Wind observations were used as input for the MHD model. The ion distribution function observed by Geotail at 1347 UT was found to consist primarily of particles from the dawn side low latitude boundary layer (LLBL) and from the dusk side LLBL; fewer than 2% of the particles originated in the ionosphere.

  16. Time-dependent transport of energetic particles in magnetic turbulence: computer simulations versus analytical theory

    NASA Astrophysics Data System (ADS)

    Arendt, V.; Shalchi, A.

    2018-06-01

    We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
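
    The running diffusion coefficient used above can be illustrated with the simplest possible test-particle ensemble, an unbiased Gaussian random walk; this is a sketch of the diagnostic, not of the turbulence simulations themselves.

```python
import numpy as np

rng = np.random.default_rng(3)

# Test-particle ensemble: an unbiased random walk as a stand-in for
# scattered energetic particles (one step per unit time, unit step std).
n_particles, n_steps = 5000, 2000
x = np.cumsum(rng.normal(0.0, 1.0, size=(n_particles, n_steps)), axis=1)

# Running diffusion coefficient d_xx(t) = <x^2> / (2 t).
t = np.arange(1, n_steps + 1)
d_run = (x**2).mean(axis=0) / (2.0 * t)

# For Markovian scattering, d_xx converges to a constant (0.5 here) and the
# particle distribution function stays Gaussian (excess kurtosis ~ 0).
z = x[:, -1] / x[:, -1].std()
excess_kurt = np.mean(z**4) - 3.0
print(d_run[-1], excess_kurt)
```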

  17. Linking age, survival, and transit time distributions

    NASA Astrophysics Data System (ADS)

    Calabrese, Salvatore; Porporato, Amilcare

    2015-10-01

    Although the concepts of age, survival, and transit time have been widely used in many fields, including population dynamics, chemical engineering, and hydrology, a comprehensive mathematical framework is still missing. Here we discuss several relationships among these quantities by starting from the evolution equation for the joint distribution of age and survival, from which the equations for age and survival time readily follow. It also becomes apparent how the statistical dependence between age and survival is directly related to either the age dependence of the loss function or the survival-time dependence of the input function. The solution of the joint distribution equation also allows us to obtain the relationships between the age at exit (or death) and the survival time at input (or birth), as well as to stress the symmetries of the various distributions under time reversal. The transit time is then obtained as a sum of the age and survival time, and its properties are discussed along with the general relationships between their mean values. The special case of steady state is analyzed in detail. Some examples, inspired by hydrologic applications, are presented to illustrate the theory with specific results. This article was corrected on 11 Nov 2015. See the end of the full text for details.
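
    A small simulation makes one such relationship concrete. The sketch below assumes a hypothetical well-mixed store in steady state: the ages of particles found in storage are exponential, while their transit times (age plus survival) are length-biased toward longer-lived particles, a standard inspection-paradox effect that illustrates the mean-value relationships the framework formalizes.

```python
import random
import statistics

rng = random.Random(7)
tau = 10.0                    # mean residence time of a well-mixed store

# Poisson arrivals (unit rate); each particle stays an Exp(tau) time.
t_in, t_out, t = [], [], 0.0
while t < 10_000.0:
    t += rng.expovariate(1.0)
    t_in.append(t)
    t_out.append(t + rng.expovariate(1.0 / tau))

# Sample the store at many well-separated observation times.
ages, transits = [], []
for t_obs in (1000.0 + 80.0 * j for j in range(100)):
    for ti, to in zip(t_in, t_out):
        if ti <= t_obs < to:
            ages.append(t_obs - ti)       # age = time since entry
            transits.append(to - ti)      # transit = age + survival

# Ages in storage are Exp(tau) (mean tau), while the transit times of the
# particles found in storage are length-biased (mean 2*tau).
print(statistics.mean(ages), statistics.mean(transits))
```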

  18. A practical divergence measure for survival distributions that can be estimated from Kaplan-Meier curves.

    PubMed

    Cox, Trevor F; Czanner, Gabriela

    2016-06-30

    This paper introduces a new simple divergence measure between two survival distributions. For two groups of patients, the divergence measure between their associated survival distributions is based on the integral of the absolute difference in probabilities that a patient from one group dies at time t and a patient from the other group survives beyond time t and vice versa. In the case of non-crossing hazard functions, the divergence measure is closely linked to the Harrell concordance index, C, the Mann-Whitney test statistic and the area under a receiver operating characteristic curve. The measure can be used in a dynamic way where the divergence between two survival distributions from time zero up to time t is calculated enabling real-time monitoring of treatment differences. The divergence can be found for theoretical survival distributions or can be estimated non-parametrically from survival data using Kaplan-Meier estimates of the survivor functions. The estimator of the divergence is shown to be generally unbiased and approximately normally distributed. For the case of proportional hazards, the constituent parts of the divergence measure can be used to assess the proportional hazards assumption. The use of the divergence measure is illustrated on the survival of pancreatic cancer patients. Copyright © 2016 John Wiley & Sons, Ltd.
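
    For theoretical exponential survival distributions the divergence has a closed form, which allows a quick numerical check (a sketch of the definition only; in practice the densities and survivor functions would be replaced by Kaplan-Meier-based estimates):

```python
import numpy as np

# Divergence D = integral of | f1(t)*S2(t) - f2(t)*S1(t) | dt for two
# exponential survival distributions with hazard rates lam1, lam2.
lam1, lam2 = 1.0, 3.0
t = np.linspace(0.0, 20.0, 200_001)
dt = t[1] - t[0]
f1, S1 = lam1 * np.exp(-lam1 * t), np.exp(-lam1 * t)
f2, S2 = lam2 * np.exp(-lam2 * t), np.exp(-lam2 * t)

integrand = np.abs(f1 * S2 - f2 * S1)
D = (0.5 * (integrand[1:] + integrand[:-1]) * dt).sum()   # trapezoid rule

# Closed form for exponentials: |lam1 - lam2| / (lam1 + lam2) = 0.5 here.
print(D)
```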

  19. NMR relaxation in natural soils: Fast Field Cycling and T1-T2 Determination by IR-MEMS

    NASA Astrophysics Data System (ADS)

    Haber-Pohlmeier, S.; Pohlmeier, A.; Stapf, S.; van Dusschoten, D.

    2009-04-01

    Soils are natural porous media of highest importance for food production and the sustainment of water resources. For these functions, prominent properties are their ability to retain and transport water, which is mainly controlled by the pore size distribution. The latter is related to NMR relaxation times of water molecules, of which the longitudinal relaxation time can be determined non-invasively by fast field cycling relaxometry (FFC), and both are obtainable by inversion recovery multi-echo imaging (IR-MEMS) methods. The advantage of the FFC method is the determination of the field-dependent dispersion of the spin-lattice relaxation rate, whereas MRI at high field is capable of yielding spatially resolved T1 and T2 times. Here we present results of T1 relaxation time distributions of water in three natural soils, obtained by the analysis of FFC data by means of the inverse Laplace transformation (CONTIN). Kaldenkirchen soil shows relatively broad bimodal distribution functions D(T1) which shift to higher relaxation rates with increasing relaxation field. These data are compared to spatially resolved T1 and T2 distributions obtained by IR-MEMS. The distribution of T1 corresponds well to that obtained by FFC.

  20. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation rests on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities, in both the dependence of flood variables and the marginal distributions, on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and from marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.

  1. Analysis and attenuation of artifacts caused by spatially and temporally correlated noise sources in Green's function estimates

    NASA Astrophysics Data System (ADS)

    Martin, E. R.; Dou, S.; Lindsey, N.; Chang, J. P.; Biondi, B. C.; Ajo Franklin, J. B.; Wagner, A. M.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Robertson, M.; Ulrich, C.; Williams, E. F.

    2016-12-01

    Localized strong sources of noise in an array have been shown to cause artifacts in Green's function estimates obtained via cross-correlation. Their effect is often reduced through the use of cross-coherence. Beyond independent localized sources, temporally or spatially correlated sources of noise frequently occur in practice but violate basic assumptions of much of the theory behind ambient noise Green's function retrieval. These correlated noise sources can occur in urban environments due to transportation infrastructure, or in areas around industrial operations like pumps running at CO2 sequestration sites or oil and gas drilling sites. Better understanding of these artifacts should help us develop and justify methods for their automatic removal from Green's function estimates. We derive expected artifacts in cross-correlations from several distributions of correlated noise sources including point sources that are exact time-lagged repeats of each other and Gaussian-distributed in space and time with covariance that exponentially decays. Assuming the noise distribution stays stationary over time, the artifacts become more coherent as more ambient noise is included in the Green's function estimates. We support our results with simple computational models. We observed these artifacts in Green's function estimates from a 2015 ambient noise study in Fairbanks, AK where a trenched distributed acoustic sensing (DAS) array was deployed to collect ambient noise alongside a road with the goal of developing a permafrost thaw monitoring system. We found that joints in the road repeatedly being hit by cars travelling at roughly the speed limit led to artifacts similar to those expected when several points are time-lagged copies of each other. We also show test results of attenuating the effects of these sources during time-lapse monitoring of an active thaw test in the same location with noise detected by a 2D trenched DAS array.

  2. The rates and time-delay distribution of multiply imaged supernovae behind lensing clusters

    NASA Astrophysics Data System (ADS)

    Li, Xue; Hjorth, Jens; Richard, Johan

    2012-11-01

    Time delays of gravitationally lensed sources can be used to constrain the mass model of a deflector and determine cosmological parameters. We here present an analysis of the time-delay distribution of multiply imaged sources behind 17 strong lensing galaxy clusters with well-calibrated mass models. We find that for time delays less than 1000 days, at z = 3.0, their logarithmic probability distribution functions are well represented by P(log Δt) = 5.3 × 10⁻⁴ Δt^β̃ / M250^(2β̃), with β̃ = 0.77, where M250 is the projected cluster mass inside 250 kpc (in 10¹⁴ M⊙), and β̃ is the power-law slope of the distribution. The resultant probability distribution function enables us to estimate the time-delay distribution in a lensing cluster of known mass. For a cluster with M250 = 2 × 10¹⁴ M⊙, the fraction of time delays less than 1000 days is approximately 3%. Taking Abell 1689 as an example, its dark halo and brightest galaxies, with central velocity dispersions σ ≥ 500 km s⁻¹, mainly produce large time delays, while galaxy-scale mass clumps are responsible for generating smaller time delays. We estimate the probability of observing multiple images of a supernova in the known images of Abell 1689. A two-component model of estimating the supernova rate is applied in this work. For a magnitude threshold of mAB = 26.5, the yearly rate of Type Ia (core-collapse) supernovae with time delays less than 1000 days is 0.004±0.002 (0.029±0.001). If the magnitude threshold is lowered to mAB ~ 27.0, the rate of core-collapse supernovae suitable for time delay observation is 0.044±0.015 per year.
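    As a rough plausibility check, one can numerically integrate the quoted distribution. Several details are assumptions not stated in the abstract: the logarithm is base 10, P is normalized per unit log10 Δt, Δt is in days, and the integration starts at 1 day. Under those assumptions the fraction below 1000 days comes out at the few-percent level, the same order as the quoted ~3%.

```python
import math

beta = 0.77   # power-law slope (tilde-beta) from the abstract
M250 = 2.0    # projected mass inside 250 kpc, in units of 1e14 solar masses

def pdf(u):
    """Quoted P(log dt) evaluated at u = log10(dt / days).
    Base-10 logarithm and per-unit-log10 normalization are assumptions."""
    dt = 10.0 ** u
    return 5.3e-4 * dt ** beta / M250 ** (2 * beta)

# trapezoidal rule over log10(dt) from 0 (1 day) to 3 (1000 days)
n, a, b = 100_000, 0.0, 3.0
h = (b - a) / n
frac = h * (0.5 * (pdf(a) + pdf(b)) + sum(pdf(a + i * h) for i in range(1, n)))
print(f"fraction with dt < 1000 days: {frac:.3f}")   # a few per cent
```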

  3. On the optimal identification of tag sets in time-constrained RFID configurations.

    PubMed

    Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel

    2011-01-01

    In Radio Frequency Identification facilities the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time for the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete-time Markov chain for single-set time-constrained passive RFID systems, namely those where a single group of tags is assumed to be in the reading area, and only for a bounded time (the sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. In addition, an identification strategy based on splitting the set of tags into smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.
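    A minimal sketch of the Markov-chain calculation, under a strong simplifying assumption not made in the paper: each reading slot identifies exactly one remaining tag with a fixed probability (real frame-slotted protocols have state-dependent success probabilities). The state is the number of tags identified so far; propagating the state distribution slot by slot yields the CDF of the identification time, and the tag-loss probability is its complement at the sojourn time.

```python
def identification_cdf(n_tags, p_success, max_slots):
    """CDF of the number of slots needed to identify all tags, via a
    discrete-time Markov chain on the count of identified tags."""
    dist = [1.0] + [0.0] * n_tags      # start: zero tags identified
    cdf = []
    for _ in range(max_slots):
        new = [0.0] * (n_tags + 1)
        for k, mass in enumerate(dist):
            if k == n_tags:
                new[k] += mass                      # absorbing: all identified
            else:
                new[k] += mass * (1 - p_success)    # idle or collision slot
                new[k + 1] += mass * p_success      # one more tag identified
        dist = new
        cdf.append(dist[n_tags])                    # P(all identified by now)
    return cdf

cdf = identification_cdf(n_tags=5, p_success=0.4, max_slots=60)
sojourn = 30   # slots the tag set spends in the reading area (illustrative)
print("P(no tags lost) =", cdf[sojourn - 1])
print("P(losing tags)  =", 1 - cdf[sojourn - 1])
```

Shrinking the sojourn time or the per-slot success probability raises the loss probability, which is the trade-off the splitting strategy in the abstract is meant to optimize.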

  4. A performance analysis method for distributed real-time robotic systems: A case study of remote teleoperation

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Sanderson, A. C.

    1994-01-01

    Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, that can handle concurrent hard-real-time response specifications. Use of the method is illustrated by a remote teleoperation case study that assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.

  5. Microscopic modeling of gas-surface scattering: II. Application to argon atom adsorption on a platinum (111) surface

    NASA Astrophysics Data System (ADS)

    Filinov, A.; Bonitz, M.; Loffhagen, D.

    2018-06-01

    A new combination of first principle molecular dynamics (MD) simulations with a rate equation model presented in the preceding paper (paper I) is applied to analyze in detail the scattering of argon atoms from a platinum (111) surface. The combined model is based on a classification of all atom trajectories, according to their energies, into trapped, quasi-trapped and scattering states. The number of particles in each of the three classes obeys coupled rate equations. The coefficients in the rate equations are the transition probabilities between these states, which are obtained from MD simulations. While these rates are generally time-dependent, after a characteristic time scale t_E of several tens of picoseconds they become stationary, allowing for a rather simple analysis. Here, we investigate this time scale by analyzing in detail the temporal evolution of the energy distribution functions of the adsorbate atoms. We separately study the energy loss distribution function of the atoms and the distribution function of the in-plane and perpendicular energy components. Further, we compute the sticking probability of argon atoms as a function of incident energy, angle and lattice temperature. Our model is important for plasma-surface modeling as it allows accurate simulations to be extended to longer time scales.
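    The three-class rate-equation bookkeeping can be sketched as a small ODE system. The rate constants and initial populations below are invented placeholders (the paper obtains its rates from MD simulations); the sketch only shows the structure and the built-in conservation of the total particle number.

```python
# forward-Euler integration of the coupled rate equations
#   dN_i/dt = sum_j ( R[j][i] * N_j  -  R[i][j] * N_i )
# states: 0 = trapped, 1 = quasi-trapped, 2 = scattering
R = [[0.00, 0.02, 0.01],    # transition rates in 1/ps (illustrative)
     [0.03, 0.00, 0.05],
     [0.00, 0.00, 0.00]]    # scattering atoms do not return in this sketch

N = [0.2, 0.3, 0.5]         # initial population fractions (illustrative)
dt, steps = 0.01, 10_000    # 100 ps of simulated time

for _ in range(steps):
    flow = [sum(R[j][i] * N[j] - R[i][j] * N[i] for j in range(3))
            for i in range(3)]
    N = [n + dt * f for n, f in zip(N, flow)]

print([round(n, 4) for n in N])   # population drains into scattering states
print(sum(N))                     # total is conserved (stays at 1.0)
```

The gain/loss terms are antisymmetric in (i, j), so the total Σ N_i is conserved at every Euler step up to floating-point rounding.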

  6. Distributed numerical controllers

    NASA Astrophysics Data System (ADS)

    Orban, Peter E.

    2001-12-01

    While the basic principles of Numerical Controllers (NC) have not changed much over the years, the implementation of NCs has changed tremendously. NC equipment has evolved from yesterday's hard-wired specialty control apparatus to today's graphics-intensive, networked, increasingly PC-based open systems, controlling a wide variety of industrial equipment with positioning needs. One of the newest trends in NC technology is the distributed implementation of the controllers. Distributed implementation promises to offer robustness, lower implementation costs, and a scalable architecture. Historically, partitioning has been done along the hierarchical levels, moving individual modules into self-contained units. The paper discusses various NC architectures, the underlying technology for distributed implementation, and relevant design issues. First the functional requirements of individual NC modules are analyzed. Module functionality, cycle times, and data requirements are examined. Next the infrastructure for distributed node implementation is reviewed. Various communication protocols and distributed real-time operating system issues are investigated and compared. Finally, a different, vertical system partitioning, offering true scalability and reconfigurability, is presented.

  7. Automatic transfer function design for medical visualization using visibility distributions and projective color mapping.

    PubMed

    Cai, Lile; Tay, Wei-Liang; Nguyen, Binh P; Chui, Chee-Kong; Ong, Sim-Heng

    2013-01-01

    Transfer functions play a key role in volume rendering of medical data, but transfer function manipulation is unintuitive and can be time-consuming; achieving an optimal visualization of patient anatomy or pathology is difficult. To overcome this problem, we present a system for automatic transfer function design based on visibility distribution and projective color mapping. Instead of assigning opacity directly based on voxel intensity and gradient magnitude, the opacity transfer function is automatically derived by matching the observed visibility distribution to a target visibility distribution. An automatic color assignment scheme based on projective mapping is proposed to assign colors that allow for the visual discrimination of different structures, while also reflecting the degree of similarity between them. When our method was tested on several medical volumetric datasets, the key structures within the volume were clearly visualized with minimal user intervention. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Multiscaling properties of coastal waters particle size distribution from LISST in situ measurements

    NASA Astrophysics Data System (ADS)

    Pannimpullath Remanan, R.; Schmitt, F. G.; Loisel, H.; Mériaux, X.

    2013-12-01

    A Eulerian high frequency sampling of particle size distribution (PSD) is performed at 1 Hz during 5 tidal cycles (65 hours) in a coastal environment of the eastern English Channel. The particle data are recorded using a LISST-100x type C (Laser In Situ Scattering and Transmissometry, Sequoia Scientific), recording volume concentrations of particles with diameters ranging from 2.5 to 500 μm in 32 logarithmically spaced size classes. This enables the estimation at each time step (every second) of the probability density function of particle sizes. At every time step, the pdf of the PSD is hyperbolic. We can thus estimate PSD slope time series. Power spectral analysis shows that the mean diameter of the suspended particles is scaling at high frequencies (from 1 s to 1000 s). The scaling properties of particle sizes are studied by computing the moment function from the pdf of the size distribution. Moment functions at many different time scales (from 1 s to 1000 s) are computed and their scaling properties considered. The Shannon entropy at each time scale is also estimated and related to other parameters. The multiscaling properties of the turbidity (coefficient cp computed from the LISST) are also considered on the same time scales, using Empirical Mode Decomposition.

  9. Gravitational lensing, time delay, and gamma-ray bursts

    NASA Technical Reports Server (NTRS)

    Mao, Shude

    1992-01-01

    The probability distributions of time delay in gravitational lensing by point masses and isolated galaxies (modeled as singular isothermal spheres) are studied. For point lenses (all with the same mass) the probability distribution is broad, with a peak at Δt of about 50 s; for singular isothermal spheres, the probability distribution is a rapidly decreasing function of increasing time delay, with a median Δt of about 1/h months, and its behavior depends sensitively on the luminosity function of galaxies. The present simplified calculation is particularly relevant to gamma-ray bursts if they are of cosmological origin. The frequency of 'recurrent' bursts due to gravitational lensing by galaxies is probably between 0.05 and 0.4 percent. Gravitational lensing can be used as a test of the cosmological origin of gamma-ray bursts.

  10. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  11. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  12. Analysis of PV Advanced Inverter Functions and Setpoints under Time Series Simulation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seuss, John; Reno, Matthew J.; Broderick, Robert Joseph

    Utilities are increasingly concerned about the potential negative impacts distributed PV may have on the operational integrity of their distribution feeders. Some have proposed novel methods for controlling a PV system's grid-tie inverter to mitigate potential PV-induced problems. This report investigates the effectiveness of several of these PV advanced inverter controls on improving distribution feeder operational metrics. The controls are simulated on a large PV system interconnected at several locations within two realistic distribution feeder models. Due to the time-domain nature of the advanced inverter controls, quasi-static time series simulations are performed under one week of representative variable irradiance and load data for each feeder. A parametric study is performed on each control type to determine how well certain measurable network metrics improve as a function of the control parameters. This methodology is used to determine appropriate advanced inverter settings for each location on the feeder and overall for any interconnection location on the feeder.

  13. Statistical distribution of time to crack initiation and initial crack size using service data

    NASA Technical Reports Server (NTRS)

    Heller, R. A.; Yang, J. N.

    1977-01-01

    Crack growth inspection data gathered during the service life of the C-130 Hercules airplane were used in conjunction with a crack propagation rule to estimate the distribution of crack initiation times and of initial crack sizes. A Bayesian statistical approach was used to calculate the fraction of undetected initiation times as a function of the inspection time and the reliability of the inspection procedure used.

  14. Acceleration of O+ from the cusp to the plasma sheet

    NASA Astrophysics Data System (ADS)

    Liao, J.; Kistler, L. M.; Mouikis, C. G.; Klecker, B.; Dandouras, I.

    2015-02-01

    Heavy ions from the ionosphere that are accelerated in the cusp/cleft have been identified as a direct source for the hot plasma in the plasma sheet. However, the details of the acceleration and transport that transforms the originally cold ions into the hot plasma sheet population are not fully understood. The polar orbit of the Cluster satellites covers the main transport path of the O+ from the cusp to the plasma sheet, so Cluster is ideal for tracking its velocity changes. However, because the cusp outflow is dispersed according to its velocity as it is transported to the tail, due to the velocity filter effect, the observed changes in beam velocity over the Cluster orbit may simply be the result of the spacecraft accessing different spatial regions and not necessarily evidence of acceleration. Using the Cluster Ion Spectrometry/Composition Distribution Function instrument onboard Cluster, we compare the distribution function of streaming O+ in the tail lobes with the initial distribution function observed over the cusp and reveal that the observations of energetic streaming O+ in the lobes around -20 RE are predominantly due to the velocity filter effect during nonstorm times. During storm times, the cusp distribution is further accelerated. In the plasma sheet boundary layer, however, the average O+ distribution function is above the upper range of the outflow distributions at the same velocity during both storm and nonstorm times, indicating that acceleration has taken place. Some of the velocity increase is in the direction perpendicular to the magnetic field, indicating that the E × B velocity is enhanced. However, there is also an increase in the parallel direction, which could be due to nonadiabatic acceleration at the boundary or wave heating.

  15. 3D glasma initial state for relativistic heavy ion collisions

    DOE PAGES

    Schenke, Björn; Schlichting, Sören

    2016-10-13

    We extend the impact-parameter-dependent Glasma model to three dimensions using explicit small-x evolution of the two incoming nuclear gluon distributions. We compute rapidity distributions of produced gluons and the early-time energy momentum tensor as a function of space-time rapidity and transverse coordinates. Finally, we study rapidity correlations and fluctuations of the initial geometry and multiplicity distributions and make comparisons to existing models for the three-dimensional initial state.

  16. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661

  17. Detecting background changes in environments with dynamic foreground by separating probability distribution function mixtures using Pearson's method of moments

    NASA Astrophysics Data System (ADS)

    Jenkins, Colleen; Jordan, Jay; Carlson, Jeff

    2007-02-01

    This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired over time from a fixed-mounted camera for a series of images will be a mixture of two Gaussian functions: the foreground probability distribution function and the background probability distribution function. We will use Pearson's Method of Moments to separate the two probability distribution functions. The background function can then be "remembered" and changes in the background can be detected. Subsequent comparisons of background estimates are used to detect changes. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential for robust parameter estimation techniques as applied to video surveillance.
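    Pearson's closed-form moment solution (which requires the roots of a ninth-degree polynomial) is not reproduced here. As a simpler stand-in for the same separation task, the sketch below fits a two-component Gaussian mixture to synthetic "pixel intensity" data with EM; the component parameters and sample sizes are invented for illustration.

```python
import math, random

random.seed(0)
# synthetic intensities: "background" N(60, 8) and "foreground" N(140, 20)
data = [random.gauss(60, 8) for _ in range(700)] + \
       [random.gauss(140, 20) for _ in range(300)]

def em_two_gaussians(xs, iters=200):
    """EM fit of a two-component 1-D Gaussian mixture (a stand-in here
    for Pearson's moment-based separation)."""
    mu = [min(xs), max(xs)]          # crude initialization
    sd = [20.0, 20.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            ps = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                  * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
                  for k in (0, 1)]
            s = ps[0] + ps[1]
            resp.append((ps[0] / s, ps[1] / s))
        # M-step: re-estimate weights, means, standard deviations
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            sd[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                                  for r, x in zip(resp, xs)) / nk)
    return w, mu, sd

w, mu, sd = em_two_gaussians(data)
print(w, mu, sd)   # weights near (0.7, 0.3), means near 60 and 140
```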

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le, Hai P.; Cambier, Jean -Luc

    Here, we present a numerical model and a set of conservative algorithms for Non-Maxwellian plasma kinetics with inelastic collisions. These algorithms self-consistently solve for the time evolution of an isotropic electron energy distribution function interacting with an atomic state distribution function of an arbitrary number of levels through collisional excitation, deexcitation, as well as ionization and recombination. Electron-electron collisions, responsible for thermalization of the electron distribution, are also included in the model. The proposed algorithms guarantee mass/charge and energy conservation in a single step, and are applied to the case of non-uniform gridding of the energy axis in the phase space of the electron distribution function. Numerical test cases are shown to demonstrate the accuracy of the method and its conservation properties.

  19. BINARY CORRELATIONS IN IONIZED GASES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balescu, R.; Taylor, H.S.

    1961-01-01

    An equation of evolution for the binary distribution function in a classical homogeneous, nonequilibrium plasma was derived. It is shown that the asymptotic (long-time) solution of this equation is the Debye distribution, thus providing a rigorous dynamical derivation of the equilibrium distribution. This proof is free from the fundamental conceptual difficulties of conventional equilibrium derivations. Out of equilibrium, a closed formula was obtained for the long living correlations, in terms of the momentum distribution function. These results should form an appropriate starting point for a rigorous theory of transport phenomena in plasmas, including the effect of molecular correlations. (auth)

  20. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, and may not even have an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed-form densities, but instead use characteristic functions for comparison. The approach, applied to a pair of stock index returns, demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data.
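    The characteristic-function idea can be sketched in a few lines: compare two samples through their empirical characteristic functions on a frequency grid, with no closed-form density required. The distance below (mean absolute ECF difference) and all parameters are illustrative choices, not the authors' estimator.

```python
import cmath, random

def ecf(xs, t):
    """Empirical characteristic function at frequency t."""
    return sum(cmath.exp(1j * t * x) for x in xs) / len(xs)

def cf_distance(xs, ys, freqs):
    """Mean absolute ECF difference over a frequency grid: a way to
    compare two samples without a closed-form density."""
    return sum(abs(ecf(xs, t) - ecf(ys, t)) for t in freqs) / len(freqs)

random.seed(3)
a = [random.gauss(0, 1) for _ in range(2000)]
b = [random.gauss(0, 1) for _ in range(2000)]   # same distribution as a
c = [random.gauss(0, 2) for _ in range(2000)]   # wider distribution

grid = [0.1 * k for k in range(1, 31)]
print(cf_distance(a, b, grid))   # small: same underlying distribution
print(cf_distance(a, c, grid))   # larger: distributions differ
```

For a centered Gaussian the characteristic function is exp(-σ²t²/2), so the a-vs-c comparison has a large systematic gap while the a-vs-b gap is pure sampling noise of order 1/√n.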

  1. Sediment storage time in a simulated meandering river's floodplain, comparisons of point bar and overbank deposits

    NASA Astrophysics Data System (ADS)

    Ackerman, T. R.; Pizzuto, J. E.

    2016-12-01

    Sediment may be stored briefly or for long periods in alluvial deposits adjacent to rivers. The duration of sediment storage may affect diagenesis, and controls the timing of sediment delivery, affecting the propagation of upland sediment signals caused by tectonics, climate change, and land use, and the efficacy of watershed management strategies designed to reduce sediment loading to estuaries and reservoirs. Understanding the functional form of storage time distributions can help to extrapolate from limited field observations and improve forecasts of sediment loading. We simulate stratigraphy adjacent to a modeled river where meander migration is driven by channel curvature. The basal unit is built immediately as the channel migrates away, analogous to a point bar; rules for overbank (flood) deposition create thicker deposits at low elevations and near the channel, forming topographic features analogous to natural levees, scroll bars, and terraces. Deposit age is tracked everywhere throughout the simulation, and the storage time is recorded when the channel returns and erodes the sediment at each pixel. 210 ky of simulated run time is sufficient for the channel to migrate 10,500 channel widths, but only the final 90 ky are analyzed. Storage time survivor functions are well fit by exponential functions until 500 years (point bar) or 600 years (overbank), representing the youngest 50% of eroded sediment. Then, until an age of 12 ky, representing the next 48% (point bar) or 45% (overbank) of eroding sediment, the distributions are well fit by heavy-tailed power functions with slopes of -1 (point bar) and -0.75 (overbank). After 12 ky (6% of model run time) the remainder of the storage time distributions becomes exponential (light-tailed). Point bar sediment has the greatest chance (6%) of eroding at 120 years, as the river reworks recently deposited point bars. Overbank sediment has an 8% chance of eroding after 1 time step, a chance that declines by half after 3 time steps. The high probability of eroding young overbank deposits occurs as the river reworks recently formed natural levees. These results show that depositional environment affects river floodplain storage times shorter than a few centuries, and suggest that a power law distribution with a truncated tail may be the most reasonable functional fit.
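    The tail classification described above can be reproduced in miniature: build an empirical survivor function from a sample of storage times and fit the power-law stretch by least squares in log-log coordinates. The synthetic Pareto sample (survivor slope -1, echoing the point-bar fit quoted above) and the fitting window are illustrative choices.

```python
import math, random

random.seed(7)
# synthetic storage times with a power-law survivor function S(t) = 1/t
times = sorted(1.0 / (1.0 - random.random()) for _ in range(5000))

# empirical survivor function at each observed time
n = len(times)
surv = [(t, 1.0 - i / n) for i, t in enumerate(times)]

def lsq_slope(xs, ys):
    """Ordinary least-squares slope and intercept."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

# power-law fit: log S vs log t should be linear with slope near -1
win = [(t, s) for t, s in surv if 2 <= t <= 100]
slope, _ = lsq_slope([math.log(t) for t, s in win],
                     [math.log(s) for t, s in win])
print(round(slope, 2))   # near -1 for this heavy-tailed sample
```

An exponential (light-tailed) regime would instead appear linear with log S plotted against t itself, which is how the two regimes are told apart.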

  2. Transverse-momentum-dependent quark distribution functions of spin-one targets: Formalism and covariant calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ninomiya, Yu; Bentz, Wolfgang; Cloet, Ian C.

    In this paper, we present a covariant formulation and model calculations of the leading-twist time-reversal even transverse-momentum-dependent quark distribution functions (TMDs) for a spin-one target. Emphasis is placed on a description of these three-dimensional distribution functions which is independent of any constraints on the spin quantization axis. We apply our covariant spin description to all nine leading-twist time-reversal even ρ meson TMDs in the framework provided by the Nambu–Jona-Lasinio model, incorporating important aspects of quark confinement via the infrared cutoff in the proper-time regularization scheme. In particular, the behaviors of the three-dimensional TMDs in a tensor polarized spin-one hadron are illustrated. Sum rules and positivity constraints are discussed in detail. Our results do not exhibit the familiar Gaussian behavior in the transverse momentum, and other results of interest include the finding that the tensor polarized TMDs—associated with spin-one hadrons—are very sensitive to quark orbital angular momentum, and that the TMDs associated with the quark operator γ+γTγ5 would vanish were it not for dynamical chiral symmetry breaking. In addition, we find that 44% of the ρ meson's spin is carried by the orbital angular momentum of the quarks, and that the magnitude of the tensor polarized quark distribution function is about 30% of the unpolarized quark distribution. Finally, a qualitative comparison between our results for the tensor structure of a quark-antiquark bound state is made to existing experimental and theoretical results for the two-nucleon (deuteron) bound state.

  3. Transverse-momentum-dependent quark distribution functions of spin-one targets: Formalism and covariant calculations

    DOE PAGES

    Ninomiya, Yu; Bentz, Wolfgang; Cloet, Ian C.

    2017-10-24

In this paper, we present a covariant formulation and model calculations of the leading-twist time-reversal even transverse-momentum-dependent quark distribution functions (TMDs) for a spin-one target. Emphasis is placed on a description of these three-dimensional distribution functions which is independent of any constraints on the spin quantization axis. We apply our covariant spin description to all nine leading-twist time-reversal even ρ meson TMDs in the framework provided by the Nambu–Jona-Lasinio model, incorporating important aspects of quark confinement via the infrared cutoff in the proper-time regularization scheme. In particular, the behaviors of the three-dimensional TMDs in a tensor polarized spin-one hadron are illustrated. Sum rules and positivity constraints are discussed in detail. Our results do not exhibit the familiar Gaussian behavior in the transverse momentum, and other results of interest include the finding that the tensor polarized TMDs, associated with spin-one hadrons, are very sensitive to quark orbital angular momentum, and that the TMDs associated with the quark operator γ⁺γ_T γ₅ would vanish were it not for dynamical chiral symmetry breaking. In addition, we find that 44% of the ρ meson's spin is carried by the orbital angular momentum of the quarks, and that the magnitude of the tensor polarized quark distribution function is about 30% of the unpolarized quark distribution. Finally, a qualitative comparison between our results for the tensor structure of a quark-antiquark bound state is made to existing experimental and theoretical results for the two-nucleon (deuteron) bound state.

  4. Unraveling hadron structure with generalized parton distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrei Belitsky; Anatoly Radyushkin

    2004-10-01

The recently introduced generalized parton distributions have emerged as a universal tool to describe hadrons in terms of quark and gluonic degrees of freedom. They combine the features of form factors, parton densities and distribution amplitudes - the functions used for a long time in studies of hadronic structure. Generalized parton distributions are analogous to the phase-space Wigner quasi-probability function of non-relativistic quantum mechanics which encodes full information on a quantum-mechanical system. We give an extensive review of main achievements in the development of this formalism. We discuss physical interpretation and basic properties of generalized parton distributions, their modeling and QCD evolution in the leading and next-to-leading orders. We describe how these functions enter a wide class of exclusive reactions, such as electro- and photo-production of photons, lepton pairs, or mesons.

  5. Staying on Task: Age-Related Changes in the Relationship Between Executive Functioning and Response Time Consistency.

    PubMed

    Vasquez, Brandon P; Binns, Malcolm A; Anderson, Nicole D

    2016-03-01

Little is known about the relationship of executive functioning with age-related increases in response time (RT) distribution indices (intraindividual standard deviation [ISD], and ex-Gaussian parameters mu, sigma, tau). The goals of this study were to (a) replicate findings of age-related changes in response time distribution indices during an engaging touch-screen RT task and (b) investigate age-related changes in the relationship between executive functioning and RT distribution indices. Healthy adults (24 young [aged 18-30], 24 young-old [aged 65-74], and 24 old-old [aged 75-85]) completed a touch-screen attention task and a battery of neuropsychological tests. The relationships between RT performance and executive functions were examined with structural equation modeling (SEM). ISD, mu, and tau, but not sigma, increased with age. SEM revealed tau as the most salient RT index associated with neuropsychological measures of executive functioning. Further analysis demonstrated that correlations between tau and a weighted executive function composite were significant only in the old-old group. Our results replicate findings of greater RT inconsistency in older adults and reveal that executive functioning is related to tau in adults aged 75-85. These results support literature identifying tau as a marker of cognitive control, which deteriorates in old age.
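The RT indices above (ISD and the ex-Gaussian mu, sigma, tau) can be estimated from a vector of response times. A minimal moment-based sketch follows (method of moments, rather than the maximum-likelihood fits typically used in this literature; the synthetic RTs and their parameters are invented for illustration):

```python
import numpy as np

def rt_indices(rt):
    """Moment-based estimates of ISD and ex-Gaussian mu, sigma, tau.

    Uses the classic ex-Gaussian moment relations:
      mean = mu + tau,  var = sigma^2 + tau^2,  skew = 2 tau^3 / var^(3/2)
    """
    rt = np.asarray(rt, dtype=float)
    isd = rt.std(ddof=1)                              # intraindividual SD
    m, v = rt.mean(), rt.var(ddof=1)
    skew = np.mean((rt - m) ** 3) / v ** 1.5
    tau = (max(skew, 1e-9) / 2.0) ** (1.0 / 3.0) * np.sqrt(v)
    mu = m - tau
    sigma = np.sqrt(max(v - tau ** 2, 0.0))
    return {"ISD": isd, "mu": mu, "sigma": sigma, "tau": tau}

# Synthetic RTs: Gaussian part (mu=400 ms, sigma=40) plus exponential tail (tau=100)
rng = np.random.default_rng(1)
rts = rng.normal(400, 40, 5000) + rng.exponential(100, 5000)
print(rt_indices(rts))
```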

  6. Physicochemical Characterization of Capstone Depleted Uranium Aerosols II: Particle Size Distributions as a Function of Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Yung-Sung; Kenoyer, Judson L.; Guilmette, Raymond A.

    2009-03-01

The Capstone Depleted Uranium (DU) Aerosol Study, which generated and characterized aerosols containing depleted uranium from perforation of armored vehicles with large-caliber DU penetrators, incorporated a sampling protocol to evaluate particle size distributions. Aerosol particle size distribution is an important parameter that influences aerosol transport and deposition processes as well as the dosimetry of the inhaled particles. These aerosols were collected on cascade impactor substrates using a pre-established time sequence following the firing event to analyze the uranium concentration and particle size of the aerosols as a function of time. The impactor substrates were analyzed using beta spectrometry, and the derived uranium content of each served as input to the evaluation of particle size distributions. Activity median aerodynamic diameters (AMADs) of the particle size distributions were evaluated using unimodal and bimodal models. The particle size data from the impactor measurements were quite variable. Most size distributions measured in the test based on activity were bimodal, with a small-particle mode in the range of 0.2 to 1.2 µm and a large-particle mode between 2 and 15 µm. In general, the evolution of particle size over time showed an overall decrease of average particle size from AMADs of 5 to 10 µm shortly after perforation to around 1 µm at the end of the 2-hr sampling period. The AMADs generally decreased over time because of settling. Additionally, the median diameter of the larger size mode decreased with time. These results were used to estimate the dosimetry of inhaled DU particles.
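An AMAD can be read off cascade-impactor data as the median of the cumulative activity distribution over aerodynamic diameter. A minimal sketch with invented stage data (the cut points and activity fractions below are hypothetical, not Capstone values):

```python
import numpy as np

# Hypothetical impactor stage cut points (µm) and beta-derived activity fractions
d50 = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
activity = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.15, 0.10])

def amad(d50, activity):
    """Activity median aerodynamic diameter via log-linear interpolation
    of the cumulative activity distribution (unimodal assumption)."""
    frac = np.cumsum(activity) / activity.sum()
    return float(np.exp(np.interp(0.5, frac, np.log(d50))))

print(round(amad(d50, activity), 2))
```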

  7. Visualization and understanding of the granulation liquid mixing and distribution during continuous twin screw granulation using NIR chemical imaging.

    PubMed

    Vercruysse, Jurgen; Toiviainen, Maunu; Fonteyne, Margot; Helkimo, Niko; Ketolainen, Jarkko; Juuti, Mikko; Delaet, Urbain; Van Assche, Ivo; Remon, Jean Paul; Vervaet, Chris; De Beer, Thomas

    2014-04-01

Over the last decade, there has been increased interest in the application of twin screw granulation as a continuous wet granulation technique for pharmaceutical drug formulations. However, the mixing of granulation liquid and powder material during the short residence time inside the screw chamber and the atypical particle size distribution (PSD) of granules produced by twin screw granulation are not yet fully understood. Therefore, this study aims at visualizing the granulation liquid mixing and distribution during continuous twin screw granulation using NIR chemical imaging. First, the residence time of material inside the barrel was investigated as a function of screw speed and moisture content, followed by visualization of the granulation liquid distribution as a function of different formulation and process parameters (liquid feed rate, liquid addition method, screw configuration, moisture content and barrel filling degree). The link between moisture uniformity and granule size distributions was also studied. Regarding residence time, increased screw speed and lower moisture content resulted in a shorter mean residence time and a narrower residence time distribution. In addition, the distribution of granulation liquid was more homogeneous at higher moisture content and with more kneading zones on the granulator screws. After optimization of the screw configuration, a two-level full factorial experimental design was performed to evaluate the influence of moisture content, screw speed and powder feed rate on the mixing efficiency of the powder and liquid phase. From these results, it was concluded that only increasing the moisture content significantly improved the granulation liquid distribution. This study demonstrates that NIR chemical imaging is a fast and adequate measurement tool for process visualization, and hence for providing better understanding of a continuous twin screw granulation system.
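The residence-time quantities discussed above (mean residence time and the width of the residence time distribution) are conventionally computed as moments of the exit tracer curve. A sketch with an invented tracer pulse (not measured granulator data):

```python
import numpy as np

def trapz(y, t):
    """Trapezoidal integration (written out to avoid NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def rtd_moments(t, c):
    """Mean residence time and variance from an exit tracer curve c(t):
    E(t) = c / ∫c dt;  t_mean = ∫ t E dt;  var = ∫ (t - t_mean)^2 E dt."""
    e = c / trapz(c, t)                  # normalized RTD
    t_mean = trapz(t * e, t)
    var = trapz((t - t_mean) ** 2 * e, t)
    return t_mean, var

# Invented gamma-shaped tracer response (mean 15 s, variance 75 s^2)
t = np.linspace(0, 60, 601)              # time, s
c = t ** 2 * np.exp(-t / 5.0)            # exit tracer concentration
t_mean, var = rtd_moments(t, c)
print(round(t_mean, 2), round(var, 2))
```

A narrower RTD (smaller variance at fixed mean) corresponds to the faster screw speeds reported above.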

  8. Statistics of baryon correlation functions in lattice QCD

    NASA Astrophysics Data System (ADS)

    Wagman, Michael L.; Savage, Martin J.; Nplqcd Collaboration

    2017-12-01

    A systematic analysis of the structure of single-baryon correlation functions calculated with lattice QCD is performed, with a particular focus on characterizing the structure of the noise associated with quantum fluctuations. The signal-to-noise problem in these correlation functions is shown, as long suspected, to result from a sign problem. The log-magnitude and complex phase are found to be approximately described by normal and wrapped normal distributions respectively. Properties of circular statistics are used to understand the emergence of a large time noise region where standard energy measurements are unreliable. Power-law tails in the distribution of baryon correlation functions, associated with stable distributions and "Lévy flights," are found to play a central role in their time evolution. A new method of analyzing correlation functions is considered for which the signal-to-noise ratio of energy measurements is constant, rather than exponentially degrading, with increasing source-sink separation time. This new method includes an additional systematic uncertainty that can be removed by performing an extrapolation, and the signal-to-noise problem reemerges in the statistics of this extrapolation. It is demonstrated that this new method allows accurate results for the nucleon mass to be extracted from the large-time noise region inaccessible to standard methods. The observations presented here are expected to apply to quantum Monte Carlo calculations more generally. Similar methods to those introduced here may lead to practical improvements in analysis of noisier systems.

  9. Wireless cellular networks with Pareto-distributed call holding times

    NASA Astrophysics Data System (ADS)

    Rodriguez-Dagnino, Ramon M.; Takagi, Hideaki

    2001-07-01

Nowadays, there is growing interest in providing Internet access to mobile users. For instance, NTT DoCoMo in Japan deploys a major mobile phone network that offers an Internet service named 'i-mode' to more than 17 million subscribers. Internet traffic measurements show that the session duration, or Call Holding Time (CHT), has probability distributions with heavy tails, which tells us that they depart significantly from the traffic statistics of traditional voice services. In this environment, it is particularly important for a network designer to know the number of handovers during a call in order to dimension appropriately the virtual circuits for a wireless cell. The handover traffic has a direct impact on the Quality of Service (QoS); e.g., service disruption due to handover failure may significantly degrade the specified QoS of time-constrained services. In this paper, we first study the random behavior of the number of handovers during a call, where we assume that the CHTs are Pareto distributed (a heavy-tailed distribution) and the Cell Residence Times (CRTs) are exponentially distributed. Our approach is based on renewal theory arguments. We present closed-form formulae for the probability mass function (pmf) of the number of handovers during a Pareto distributed CHT, and obtain the probability of call completion as well as handover rates. Most of the formulae are expressed in terms of the Whittaker function. We compare the Pareto case with cases of k-Erlang and hyperexponential distributions for the CHT.
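The renewal-theory setup (Pareto call holding time, exponential cell residence times) is easy to check by Monte Carlo before working with the closed-form pmf. The parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_handovers(n_calls, alpha=2.5, xm=1.0, mu=1.0):
    """Handover counts per call: Pareto(alpha, xm) call holding time,
    i.i.d. exponential(1/mu) cell residence times (a renewal process).
    The exponential's memorylessness lets the first cell use a full draw."""
    cht = xm * (1.0 - rng.random(n_calls)) ** (-1.0 / alpha)  # inverse-cdf Pareto
    counts = np.empty(n_calls, dtype=int)
    for i, T in enumerate(cht):
        t, k = rng.exponential(1.0 / mu), 0
        while t < T:                      # call outlives the current cell
            k += 1
            t += rng.exponential(1.0 / mu)
        counts[i] = k
    return counts

h = simulate_handovers(20000)
pmf = np.bincount(h) / h.size            # empirical pmf of handover count
print(h.mean(), pmf[:5])
```

For exponential CRTs, the handover count is conditionally Poisson(mu·T), so the mean count is mu·E[CHT] = mu·alpha·xm/(alpha-1), about 1.67 for these parameters.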

  10. Critical Conditions for Liquid Chromatography of Statistical Copolymers: Functionality Type and Composition Distribution Characterization by UP-LCCC/ESI-MS.

    PubMed

    Epping, Ruben; Panne, Ulrich; Falkenhagen, Jana

    2017-02-07

Statistical ethylene oxide (EO) and propylene oxide (PO) copolymers of different monomer compositions and different average molar masses, additionally containing two kinds of end groups (FTD), were investigated by ultrahigh-pressure liquid chromatography under critical conditions (UP-LCCC) combined with electrospray ionization time-of-flight mass spectrometry (ESI-TOF-MS). Theoretical predictions of the existence of a critical adsorption point (CPA) for statistical copolymers with a given chemical and sequence distribution could be studied and confirmed. A fundamentally new approach to determine these critical conditions in a copolymer, alongside the inevitable chemical composition distribution (CCD), with mass spectrometric detection is described. The shift of the critical eluent composition with the monomer composition of the polymers was determined. Due to the broad molar mass distribution (MMD) and the presumed existence of different end group functionalities as well as a monomer sequence distribution (MSD), gradient separation by CCD alone was not possible. Therefore, isocratic separation conditions at the CPA of definite CCD fractions were developed. Although the various distributions present partly superimposed the separation process, the goal of separation by end group functionality was still achieved on the basis of the additional dimension of ESI-TOF-MS. The existence of HO-H besides the desired allylO-H end group functionalities was confirmed and their amount estimated. Furthermore, indications of an MSD were found by UPLC/MS/MS measurements. This approach offers for the first time the possibility of obtaining a fingerprint of a broadly distributed statistical copolymer, including MMD, FTD, CCD, and MSD.

  11. Statistical description of non-Gaussian samples in the F2 layer of the ionosphere during heliogeophysical disturbances

    NASA Astrophysics Data System (ADS)

    Sergeenko, N. P.

    2017-11-01

An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is solved in this paper. The time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, there are arbitrarily large deviations from the model of a normal process. Therefore, an attempt is made to describe the statistical samples {δfoF2} with a Poisson model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excessive-asymmetric probability density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to the Kolmogorov criterion, the probabilities of coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The analysis supports the applicability of a model based on the Poisson random process for the statistical description and probabilistic estimation of the variations {δfoF2} during heliogeophysical disturbances.
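The final comparison step (empirical distribution against a model, judged by the Kolmogorov criterion) looks like the following in outline. The sample here is synthetic heavy-tailed noise, not foF2 data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic stand-in for {δfoF2}: heavy-tailed (Laplace) disturbances
sample = rng.laplace(0.0, 1.0, 2000)

# Kolmogorov-Smirnov distance to a normal fitted by moments: the heavy
# tails push the empirical cdf away from the Gaussian model...
z = (sample - sample.mean()) / sample.std(ddof=1)
stat_norm, p_norm = stats.kstest(z, "norm")

# ...while the distance to the true (Laplace) model stays small
stat_lap, p_lap = stats.kstest(sample, "laplace", args=(0.0, 1.0))
print(stat_norm, stat_lap)
```

(Strictly, testing against a normal with estimated parameters calls for Lilliefors-corrected critical values; the plain KS statistic still illustrates the departure.)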

  12. Optimization of removal function in computer controlled optical surfacing

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Guo, Peiji; Ren, Jianfeng

    2010-10-01

The technical principle of computer controlled optical surfacing (CCOS) and the common method of optimizing the removal function used in CCOS are introduced in this paper. A new optimizing method, time-sharing synthesis of the removal function, is proposed to solve two problems encountered in the planet-motion and translation-rotation modes: a removal function far from Gaussian type, and slow convergence of the removal function error. A detailed time-sharing synthesis using six removal functions is discussed. For a given region on the workpiece, six positions are selected as centers of the removal function; the polishing tool, controlled by the executive system of CCOS, revolves around each center to complete a cycle in proper order. The overall removal function obtained by the time-sharing process is the ratio of the total material removal in the six cycles to their total duration, which depends on the arrangement and distribution of the six removal functions. Simulations of the synthesized overall removal functions under the two modes of motion, planet motion and translation-rotation, are performed, from which the optimized combination of tool parameters and the distribution of the time-sharing removal functions are obtained. The evaluation function for the optimization is an approaching factor, defined as the ratio of the material removal within half of the polishing tool coverage from the polishing center to the total material removal within the full coverage area. After optimization, the removal function obtained by time-sharing synthesis is closer to the ideal Gaussian removal function than those produced by traditional methods. The time-sharing synthesis method thus provides an efficient way to increase the convergence speed of the surface error in CCOS for the fabrication of aspheric optical surfaces, and to reduce intermediate- and high-frequency errors.
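The time-sharing idea (overall removal rate = total material removed over all cycles divided by total dwell time) can be sketched in one dimension. The single-position removal functions are modeled as Gaussians here purely for illustration; centers, widths, and dwell times are made up:

```python
import numpy as np

def overall_removal(x, centers, dwell, width=1.0):
    """Time-shared overall removal function on a 1-D section.

    Each cycle applies a single-position removal function (modeled as a
    Gaussian of unit peak rate) for time dwell[i]; the synthesized removal
    rate is total removal / total time."""
    removal = sum(t * np.exp(-0.5 * ((x - c) / width) ** 2)
                  for c, t in zip(centers, dwell))
    return removal / np.sum(dwell)

x = np.linspace(-5, 5, 501)
centers = np.linspace(-2, 2, 6)     # six removal-function centers
dwell = np.ones(6)                  # equal time per cycle
r = overall_removal(x, centers, dwell)
print(round(r.max(), 3))            # broader, flatter than a single Gaussian
```

Optimizing the center positions and dwell times against an approaching-factor criterion, as the paper describes, would then shape this synthesized profile toward the ideal Gaussian.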

  13. Transverse parton distribution functions at next-to-next-to-leading order: the quark-to-quark case.

    PubMed

    Gehrmann, Thomas; Lübbert, Thomas; Yang, Li Lin

    2012-12-14

We present a calculation of the perturbative quark-to-quark transverse parton distribution function at next-to-next-to-leading order based on a gauge invariant operator definition. We demonstrate for the first time that such a definition works beyond the first nontrivial order. We extract from our calculation the coefficient functions relevant for a next-to-next-to-next-to-leading logarithmic q_T resummation in a large class of processes at hadron colliders.

  14. Lindley frailty model for a class of compound Poisson processes

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Ata, Nihal

    2013-10-01

The Lindley distribution has gained importance in survival analysis because of its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model in which misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, in some circumstances it is appropriate to consider discrete frailty distributions. In this paper, frailty models with a discrete compound Poisson process for Lindley distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. The fit of the models to an earthquake data set from Turkey is then examined.
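For reference, the Lindley density, survival function, and hazard have simple closed forms. A sketch (θ is illustrative; note the hazard is increasing, unlike the constant exponential hazard):

```python
import numpy as np

def lindley_pdf(x, theta):
    """Lindley density: f(x) = theta^2/(1+theta) * (1+x) * exp(-theta*x)."""
    return theta ** 2 / (1 + theta) * (1 + x) * np.exp(-theta * x)

def lindley_sf(x, theta):
    """Survival: S(x) = (1 + theta + theta*x)/(1 + theta) * exp(-theta*x)."""
    return (1 + theta + theta * x) / (1 + theta) * np.exp(-theta * x)

def lindley_hazard(x, theta):
    return lindley_pdf(x, theta) / lindley_sf(x, theta)

theta = 0.8
x = np.linspace(0, 10, 1001)
area = np.sum(lindley_pdf(x, theta)) * (x[1] - x[0])   # ~1 (crude quadrature)
print(round(area, 3), lindley_hazard(0.0, theta), lindley_hazard(5.0, theta))
```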

  15. Cluster-cluster aggregation with particle replication and chemotaxy: a simple model for the growth of animal cells in culture

    NASA Astrophysics Data System (ADS)

    Alves, S. G.; Martins, M. L.

    2010-09-01

    Aggregation of animal cells in culture comprises a series of motility, collision and adhesion processes of basic relevance for tissue engineering, bioseparations, oncology research and in vitro drug testing. In the present paper, a cluster-cluster aggregation model with stochastic particle replication and chemotactically driven motility is investigated as a model for the growth of animal cells in culture. The focus is on the scaling laws governing the aggregation kinetics. Our simulations reveal that in the absence of chemotaxy the mean cluster size and the total number of clusters scale in time as stretched exponentials dependent on the particle replication rate. Also, the dynamical cluster size distribution functions are represented by a scaling relation in which the scaling function involves a stretched exponential of the time. The introduction of chemoattraction among the particles leads to distribution functions decaying as power laws with exponents that decrease in time. The fractal dimensions and size distributions of the simulated clusters are qualitatively discussed in terms of those determined experimentally for several normal and tumoral cell lines growing in culture. It is shown that particle replication and chemotaxy account for the simplest cluster size distributions of cellular aggregates observed in culture.

  16. A mechanism producing power law etc. distributions

    NASA Astrophysics Data System (ADS)

    Li, Heling; Shen, Hongjun; Yang, Bin

    2017-07-01

Power law distributions play an increasingly important role in the study of complex systems. Starting from the insolvability of complex systems, the idea of incomplete statistics is utilized and expanded: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy, and probability distribution functions of exponential form, power-law form, and the product of a power and an exponential are derived from the Shannon entropy and the maximum entropy principle. This shows that the maximum entropy principle can completely replace the equal probability hypothesis. Since the power-law distribution and the product-form distribution, which cannot be derived from the equal probability hypothesis, can be derived with the aid of the maximum entropy principle, it can also be concluded that the maximum entropy principle is the more fundamental principle, embodying concepts more extensively and revealing the basic laws governing the motion of objects more deeply. At the same time, this principle reveals the intrinsic links between Nature and the different objects of human society and the principles they all obey.
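As a sketch of the maximum-entropy route to these three forms (a standard textbook derivation, simplified relative to the paper's incomplete-statistics setting): maximize the Shannon entropy subject to normalization and constraints on $\langle \ln x\rangle$ and/or $\langle x\rangle$,

```latex
\mathcal{L} = -\int p\ln p\,dx
  - \lambda_0\!\left(\int p\,dx - 1\right)
  - \lambda_1\!\left(\int p\ln x\,dx - \langle\ln x\rangle\right)
  - \lambda_2\!\left(\int p\,x\,dx - \langle x\rangle\right)
\;\Rightarrow\;
p(x) = e^{-1-\lambda_0}\, x^{-\lambda_1}\, e^{-\lambda_2 x}.
```

Setting $\lambda_2 = 0$ leaves a pure power law, $\lambda_1 = 0$ an exponential, and both nonzero the product form discussed above.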

  17. What are the Shapes of Response Time Distributions in Visual Search?

    PubMed Central

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure reaction time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays in each of three classic search tasks: feature search, with the target defined by color; conjunction search, with the target defined by both color and orientation; and spatial configuration search for a 2 among distractor 5s. This large data set allows us to characterize the RT distributions in detail. We present the raw RT distributions and fit several psychologically motivated functions (ex-Gaussian, ex-Wald, Gamma, and Weibull) to the data. We analyze and interpret parameter trends from these four functions within the context of theories of visual search. PMID:21090905
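Fitting candidate functions to RT data and comparing them by log-likelihood can be sketched with SciPy (the ex-Gaussian is `exponnorm` in SciPy's parameterization, with shape K = tau/sigma; the ex-Wald is omitted since SciPy has no direct implementation). The RTs below are synthetic, not the visual search data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic RTs from an ex-Gaussian: mu = 400 ms, sigma = 40, tau = 120
rts = rng.normal(400, 40, 3000) + rng.exponential(120, 3000)

candidates = {"ex-Gaussian": stats.exponnorm,
              "gamma": stats.gamma,
              "weibull": stats.weibull_min}

fits, loglik = {}, {}
for name, dist in candidates.items():
    fits[name] = dist.fit(rts)                          # maximum likelihood
    loglik[name] = float(np.sum(dist.logpdf(rts, *fits[name])))

print(max(loglik, key=loglik.get))
```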

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumitru, Adrian; Skokov, Vladimir

The conventional and linearly polarized Weizsäcker-Williams gluon distributions at small x are defined from the two-point function of the gluon field in light-cone gauge. They appear in the cross section for dijet production in deep inelastic scattering at high energy. We determine these functions in the small-x limit from solutions of the JIMWLK evolution equations and show that they exhibit approximate geometric scaling. Also, we discuss the functional distributions of these WW gluon distributions over the JIMWLK ensemble at rapidity Y ~ 1/αs. These are determined by a 2d Liouville action for the logarithm of the covariant gauge function g² tr A⁺(q)A⁺(−q). For transverse momenta on the order of the saturation scale we observe large variations across configurations (evolution trajectories) of the linearly polarized distribution, up to several times its average and even to negative values.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  20. Angular distribution of scission neutrons studied with time-dependent Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Wada, Takahiro; Asano, Tomomasa; Carjan, Nicolae

    2018-03-01

We investigate the angular distribution of scission neutrons, taking account of the effects of the fission fragments. The time evolution of the wave function of the scission neutron is obtained by integrating the time-dependent Schrödinger equation numerically. The effects of the fission fragments are taken into account by means of optical potentials. The angular distribution is strongly modified by the presence of the fragments. In the case of asymmetric fission, the heavy fragment is found to have the stronger effect. Dependence on the initial distribution and on the properties of the fission fragments is discussed. We also discuss the treatment of the boundary to avoid artificial reflections.
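The boundary issue mentioned above is commonly handled with a complex absorbing potential (CAP) at the grid edges. A minimal 1-D Crank-Nicolson sketch for a free wave packet (ħ = m = 1); all grid and CAP parameters are illustrative, not those of the scission-neutron calculation:

```python
import numpy as np

n, dx, dt = 600, 0.1, 0.01
x = (np.arange(n) - n // 2) * dx

# CAP: -i * eta * (quartic ramp) over the outer 20% of the box on each side
w = int(0.2 * n)
ramp = (np.arange(w) / w) ** 4
cap = np.zeros(n)
cap[:w], cap[-w:] = ramp[::-1], ramp
V = -5.0j * cap

# Tridiagonal Hamiltonian H = -(1/2) d^2/dx^2 + V; Crank-Nicolson propagator
H = (np.diag(1.0 / dx ** 2 + V)
     + np.diag(-0.5 / dx ** 2 * np.ones(n - 1), 1)
     + np.diag(-0.5 / dx ** 2 * np.ones(n - 1), -1))
M = np.linalg.solve(np.eye(n) + 0.5j * dt * H,
                    np.eye(n) - 0.5j * dt * H)

# Gaussian packet at x = -10 moving right with k0 = 5
psi = np.exp(-((x + 10.0) ** 2) / 4.0 + 5j * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

for _ in range(2000):           # propagate to t = 20
    psi = M @ psi
norm = np.sum(np.abs(psi) ** 2) * dx
print(round(norm, 3))           # < 1: outgoing flux absorbed, not reflected
```

Without the CAP, the norm would stay at 1 and the packet would reflect from the grid edge back into the physical region.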

  1. Characteristics of ion distribution functions in dipolarizing flux bundles: Event studies

    NASA Astrophysics Data System (ADS)

    Runov, A.; Angelopoulos, V.; Artemyev, A.; Birn, J.; Pritchett, P. L.; Zhou, X.-Z.

    2017-06-01

    Taking advantage of multipoint observations from a repeating configuration of the five Time History of Events and Macroscale Interactions during Substorms (THEMIS) probes separated by 1 to 2 Earth radii (RE) along X, Y, and Z in the geocentric solar magnetospheric system (GSM), we study ion distribution functions collected by the probes during three dipolarizing flux bundle (DFB) events observed at geocentric distances 9 < R < 14 RE. By comparing these probes' observations, we characterize changes in the ion distribution functions with respect to probe separation along the X and Y GSM directions and |Bx| levels, which characterize the distance from the neutral sheet. We found that the characteristics of the ion distribution functions strongly depended on the |Bx| level, whereas changes with respect to X and Y were minor. In all three events, ion distribution functions f(v) observed inside DFBs were organized by magnetic and electric fields. The probes near the magnetic equator observed perpendicular anisotropy of the phase space density in the range between thermal energy and twice the thermal energy, although the distribution in the ambient plasma sheet was isotropic. The anisotropic ion distribution in DFBs injected toward the inner magnetosphere may provide the free energy for waves and instabilities, which are important elements of particle energization.

  2. Green's Functions in Space and Time.

    ERIC Educational Resources Information Center

    Rowe, E. G. Peter

    1979-01-01

    Gives a sketch of some topics in distribution theory that is technically simple, yet provides techniques for handling the partial differential equations satisfied by the most important Green's functions in physics. (Author/GA)

  3. Cumulative Poisson Distribution Program

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

    1990-01-01

Overflow and underflow in sums are prevented. The Cumulative Poisson Distribution Program, CUMPOIS, is one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the cumulative Poisson distribution, which is used to evaluate the cumulative distribution function (cdf) for gamma distributions with integer shape parameters and the cdf for χ² distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
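The gamma/χ² connection exploited here is the identity P(X ≤ k) = Q(k+1, λ) for X ~ Poisson(λ), where Q is the regularized upper incomplete gamma function; this equals one minus the Gamma(k+1, 1) cdf at λ, and one minus the χ² cdf with 2(k+1) degrees of freedom at 2λ. A quick numerical check (SciPy stands in for the original C program):

```python
from scipy import stats
from scipy.special import gammaincc

lam, k = 4.2, 6   # Poisson mean and count (illustrative values)

p_pois = stats.poisson.cdf(k, lam)          # P(X <= k)
p_gamma_id = gammaincc(k + 1, lam)          # regularized upper incomplete gamma
p_gamma = stats.gamma.sf(lam, k + 1)        # Gamma(k+1, scale=1) survival at lam
p_chi2 = stats.chi2.sf(2 * lam, 2 * (k + 1))  # chi-square, 2(k+1) dof, at 2*lam

print(p_pois, p_gamma_id, p_gamma, p_chi2)  # all four agree
```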

  4. Conservative algorithms for non-Maxwellian plasma kinetics

    DOE PAGES

    Le, Hai P.; Cambier, Jean -Luc

    2017-12-08

Here, we present a numerical model and a set of conservative algorithms for non-Maxwellian plasma kinetics with inelastic collisions. These algorithms self-consistently solve for the time evolution of an isotropic electron energy distribution function interacting with an atomic state distribution function of an arbitrary number of levels through collisional excitation, deexcitation, ionization and recombination. Electron-electron collisions, responsible for thermalization of the electron distribution, are also included in the model. The proposed algorithms guarantee mass/charge and energy conservation in a single step, and are applied to the case of non-uniform gridding of the energy axis in the phase space of the electron distribution function. Numerical test cases demonstrate the accuracy of the method and its conservation properties.

  5. Does Data Distribution Change as a Function of Motor Skill Practice?

    ERIC Educational Resources Information Center

    Yan, Jin H.; Rodriguez, Ward A.; Thomas, Jerry R.

    2005-01-01

The purpose of this study was to determine whether data distribution changes as a result of motor skill practice or learning. Data on three dependent measures (movement time, MT; percentage of movement time in primary submovement, PSB; and movement jerk, JEK) were collected at baseline and at practice Blocks 1 to 5. Sixty 6-year-olds,…

  6. Functional models for colloid retention in porous media at the triple line.

    PubMed

    Dathe, Annette; Zevi, Yuniati; Richards, Brian K; Gao, Bin; Parlange, J-Yves; Steenhuis, Tammo S

    2014-01-01

Spectral confocal microscope visualizations of microsphere movement in unsaturated porous media showed that attachment at the Air Water Solid (AWS) interface is an important retention mechanism. These visualizations can aid in resolving the functional form of colloid retention rates at the AWS interface. In this study, soil adsorption isotherm equations were adapted by replacing the independent variable, the chemical concentration in the water, with the cumulative number of colloids passing by. In order of increasing number of fitted parameters, the functions tested were the Langmuir adsorption isotherm, the logistic distribution, and the Weibull distribution. The functions were fitted to colloid concentrations obtained from time series of images acquired with a spectral confocal microscope for three experiments in which either plain or carboxylated polystyrene latex microspheres were pulsed into a small flow chamber filled with cleaned quartz sand. Both moving and retained colloids were quantified over time. In fitting the models to the data, the agreement improved with increasing number of model parameters. The Weibull distribution gave the best fit overall. The logistic distribution did not fit the initial retention of microspheres well, but otherwise its fit was good. The Langmuir isotherm fitted only the longest time series well. The results can be explained as follows: initially, when colloids are first introduced, the retention rate is low; once colloids are at the AWS interface, they act as anchor points for other colloids to attach, increasing the retention rate as clusters form; once the available attachment sites diminish, the retention rate decreases.
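The three candidate retention functions can be compared by least squares. The functional forms below follow the families named in the abstract (Langmuir-, logistic-, and Weibull-type saturation curves in cumulative colloids passed); the "observations" are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Retained colloids S as a function of cumulative colloids passed, N
def langmuir(N, smax, b):
    return smax * b * N / (1 + b * N)

def logistic(N, smax, k, n0):
    return smax / (1 + np.exp(-k * (N - n0)))

def weibull(N, smax, lam, c):
    return smax * (1 - np.exp(-(N / lam) ** c))

# Invented S-shaped uptake data with noise (generated from the Weibull form)
rng = np.random.default_rng(5)
N = np.linspace(0, 200, 40)
S_obs = weibull(N, 100.0, 60.0, 2.0) + rng.normal(0, 2.0, N.size)

fits = {}
for f, p0 in [(langmuir, (100, 0.02)),
              (logistic, (100, 0.05, 60)),
              (weibull, (100, 60, 2))]:
    popt, _ = curve_fit(f, N, S_obs, p0=p0, maxfev=10000)
    fits[f.__name__] = float(np.sum((S_obs - f(N, *popt)) ** 2))  # RSS

print(min(fits, key=fits.get))
```

As in the paper's findings, the hyperbolic Langmuir form cannot reproduce the slow initial uptake, so its residual sum of squares is larger for S-shaped data.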

  7. From MetroII to Metronomy, Designing Contract-based Function-Architecture Co-simulation Framework for Timing Verification of Cyber-Physical Systems

    DTIC Science & Technology

    2015-03-13

    A. Lee. “A Programming Model for Time-Synchronized Distributed Real-Time Systems”. In: Proceedings of the Real Time and Embedded Technology and Applications Symposium. 2007, pp. 259–268. ...From MetroII to Metronomy, Designing Contract-based Function-Architecture Co-simulation Framework for Timing Verification of Cyber-Physical Systems...

  8. Exact probability distribution functions for Parrondo's games

    NASA Astrophysics Data System (ADS)

    Zadourian, Rubina; Saakian, David B.; Klümper, Andreas

    2016-12-01

    We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital-dependent and history-dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution, with two limiting distributions for odd and even numbers of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data; we have now found this phenomenon in model systems and provide a theoretical understanding of it. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
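    The exact distributions discussed here can also be obtained without the Fourier-transform machinery: for the capital-dependent game B one can evolve the probability vector of the capital directly, which is exact up to floating-point error. The sketch below assumes the standard game-B win probabilities with an illustrative eps = 0.005; the parity structure of the result mirrors the separate limiting distributions for odd and even numbers of rounds mentioned in the abstract.

```python
import numpy as np

# Exact PMF of the capital in Parrondo's capital-dependent game B,
# computed by evolving the probability vector (no sampling).
eps = 0.005
p_mult3 = 0.10 - eps   # win probability when capital is divisible by 3
p_other = 0.75 - eps   # win probability otherwise

n_rounds = 50
offset = n_rounds                  # capital ranges over [-n, n]
pmf = np.zeros(2 * n_rounds + 1)
pmf[offset] = 1.0                  # start with capital 0

for _ in range(n_rounds):
    new = np.zeros_like(pmf)
    for i, prob in enumerate(pmf):
        if prob == 0.0:
            continue
        capital = i - offset
        p_win = p_mult3 if capital % 3 == 0 else p_other
        new[i + 1] += prob * p_win         # win: capital + 1
        new[i - 1] += prob * (1 - p_win)   # lose: capital - 1
    pmf = new

print(abs(pmf.sum() - 1.0) < 1e-12)   # exact distribution sums to one
```

    After an even number of rounds the support sits only on even capitals (and on odd capitals after an odd number of rounds), which is the origin of the two limiting distributions.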

  9. Exact probability distribution functions for Parrondo's games.

    PubMed

    Zadourian, Rubina; Saakian, David B; Klümper, Andreas

    2016-12-01

    We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital-dependent and history-dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution, with two limiting distributions for odd and even numbers of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data; we have now found this phenomenon in model systems and provide a theoretical understanding of it. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.

  10. Boltzmann equations for a binary one-dimensional ideal gas.

    PubMed

    Boozer, A D

    2011-09-01

    We consider a time-reversal invariant dynamical model of a binary ideal gas of N molecules in one spatial dimension. By making time-asymmetric assumptions about the behavior of the gas, we derive Boltzmann and anti-Boltzmann equations that describe the evolution of the single-molecule velocity distribution functions for an ensemble of such systems. We show that for a special class of initial states of the ensemble one can obtain an exact expression for the N-molecule velocity distribution function, and we use this expression to rigorously prove that the time-asymmetric assumptions needed to derive the Boltzmann and anti-Boltzmann equations hold in the limit of large N. Our results clarify some subtle issues regarding the origin of the time asymmetry of Boltzmann's H theorem.

  11. Interevent time distributions of human multi-level activity in a virtual world

    NASA Astrophysics Data System (ADS)

    Mryglod, O.; Fuchs, B.; Szell, M.; Holovatch, Yu.; Thurner, S.

    2015-02-01

    Studying human behavior in virtual environments provides extraordinary opportunities for a quantitative analysis of social phenomena with levels of accuracy that approach those of the natural sciences. In this paper we use records of player activities in the massive multiplayer online game Pardus over 1238 consecutive days, and analyze dynamical features of sequences of actions of players. We build on previous work where temporal structures of human actions of the same type were quantified, and provide an empirical understanding of human actions of different types. This study of multi-level human activity can be seen as a dynamic counterpart of static multiplex network analysis. We show that the interevent time distributions of actions in the Pardus universe follow highly non-trivial distribution functions, from which we extract action-type specific characteristic 'decay constants'. We discuss characteristic features of interevent time distributions, including periodic patterns on different time scales, bursty dynamics, and various functional forms on different time scales. We comment on gender differences of players in emotional actions, and find that while males and females act similarly when performing some positive actions, females are slightly faster for negative actions. We also observe effects on the age of players: more experienced players are generally faster in making decisions about engaging in and terminating enmity and friendship, respectively.
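    The basic quantity analyzed in this study, the interevent time distribution, is straightforward to compute from raw event timestamps. A minimal sketch, assuming for illustration a simple exponential tail with a single decay constant rather than the more complex functional forms reported in the paper:

```python
import numpy as np

# Synthetic timestamps of one action type (a Poisson process for illustration).
rng = np.random.default_rng(3)
true_rate = 0.2
timestamps = np.cumsum(rng.exponential(1.0 / true_rate, size=20000))

dt = np.diff(timestamps)          # interevent times
# Maximum-likelihood decay constant for an exponential tail: 1 / mean.
rate_hat = 1.0 / dt.mean()
print(abs(rate_hat - true_rate) < 0.01)
```

    For real activity data the same interevent times would typically show bursty, heavy-tailed behavior, which is exactly what the action-type-specific analysis in the paper probes.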

  12. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.

    PubMed

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or its improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and a function consistency replacement algorithm is given to solve the integration of local function models. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP, and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining.

  13. Computing Wigner distributions and time correlation functions using the quantum thermal bath method: application to proton transfer spectroscopy.

    PubMed

    Basire, Marie; Borgis, Daniel; Vuilleumier, Rodolphe

    2013-08-14

    Langevin dynamics coupled to a quantum thermal bath (QTB) allows for the inclusion of vibrational quantum effects in molecular dynamics simulations at virtually no additional computer cost. We investigate here the ability of the QTB method to reproduce the quantum Wigner distribution of a variety of model potentials, designed to assess the performances and limits of the method. We further compute the infrared spectrum of a multidimensional model of proton transfer in the gas phase and in solution, using classical trajectories sampled initially from the Wigner distribution. It is shown that for this type of system involving large anharmonicities and strong nonlinear coupling to the environment, the quantum thermal bath is able to sample the Wigner distribution satisfactorily and to account for both zero point energy and tunneling effects. It leads to quantum time correlation functions having the correct short-time behavior, and the correct associated spectral frequencies, but that are slightly too overdamped. This is attributed to the classical propagation approximation rather than the generation of the quantized initial conditions themselves.

  14. Semiparametric methods to contrast gap time survival functions: Application to repeat kidney transplantation.

    PubMed

    Shu, Xu; Schaubel, Douglas E

    2016-06-01

    Times between successive events (i.e., gap times) are of great importance in survival analysis. Although many methods exist for estimating covariate effects on gap times, very few existing methods allow for comparisons between gap times themselves. Motivated by the comparison of primary and repeat transplantation, our interest is specifically in contrasting the gap time survival functions and their integration (restricted mean gap time). Two major challenges in gap time analysis are non-identifiability of the marginal distributions and the existence of dependent censoring (for all but the first gap time). We use Cox regression to estimate the (conditional) survival distributions of each gap time (given the previous gap times). Combining fitted survival functions based on those models, along with multiple imputation applied to censored gap times, we then contrast the first and second gap times with respect to average survival and restricted mean lifetime. Large-sample properties are derived, with simulation studies carried out to evaluate finite-sample performance. We apply the proposed methods to kidney transplant data obtained from a national organ transplant registry. Mean 10-year graft survival of the primary transplant is significantly greater than that of the repeat transplant, by 3.9 months (p=0.023), a result that may lack clinical importance. © 2015, The International Biometric Society.

  15. Limiting Distributions of Functionals of Markov Chains.

    DTIC Science & Technology

    1984-08-01

    ...limiting distributions; periodic nonhomogeneous Poisson processes... The theory of periodic nonhomogeneous Poisson processes is of interest in itself. The problem considered in this paper is of interest in the theory of partially observable... where we obtain the limiting distribution of the interevent times. Key Words: Markov Chains, Limiting Distributions, Periodic Nonhomogeneous Poisson Processes

  16. Spline methods for approximating quantile functions and generating random samples

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1985-01-01

    Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and on a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
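    A generic version of the idea, a spline through empirical quantiles used as a random-number generator, can be sketched as follows. This uses a plain cubic interpolating spline rather than the paper's B-spline or rational spline formulations:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Approximate the quantile function (inverse CDF) of a sample with a cubic
# spline through its empirical quantiles, then draw new samples by feeding
# uniforms through it (inverse-transform sampling).
rng = np.random.default_rng(1)
sample = rng.normal(loc=2.0, scale=0.5, size=5000)

probs = np.linspace(0.01, 0.99, 99)          # probability grid (knots)
knots = np.quantile(sample, probs)           # empirical quantiles at the grid
quantile_fn = CubicSpline(probs, knots)      # spline approximation of Q(p)

u = rng.uniform(0.01, 0.99, size=10000)      # stay inside the fitted range
generated = quantile_fn(u)
print(abs(float(np.mean(generated)) - 2.0) < 0.05)
```

    Evaluating the spline is a cheap polynomial evaluation, which is why this route can beat inverting an analytic CDF numerically for each sample.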

  17. Unconventional Signal Processing Using the Cone Kernel Time-Frequency Representation.

    DTIC Science & Technology

    1992-10-30

    The Wigner-Ville distribution (WVD), the Choi-Williams distribution, and the cone kernel distribution were compared with the spectrograms. Results were... ambiguity function. Figures A-18(c) and (d) are the Wigner-Ville distribution (WVD) and CK-TFR Doppler maps. In this noiseless case all three exhibit... kernel is the basis for the well known Wigner-Ville distribution. In A-9(2), the cone kernel defined by Zhao, Atlas and Marks [2] is described

  18. Development of optimal models of porous media by combining static and dynamic data: the permeability and porosity distributions.

    PubMed

    Hamzehpour, Hossein; Rasaei, M Reza; Sahimi, Muhammad

    2007-05-01

    We describe a method for the development of the optimal spatial distributions of the porosity phi and permeability k of a large-scale porous medium. The optimal distributions are constrained by static and dynamic data. The static data that we utilize are limited data for phi and k, which the method honors in the optimal model and utilizes their correlation functions in the optimization process. The dynamic data include the first-arrival (FA) times, at a number of receivers, of seismic waves that have propagated in the porous medium, and the time-dependent production rates of a fluid that flows in the medium. The method combines the simulated-annealing method with a simulator that solves numerically the three-dimensional (3D) acoustic wave equation and computes the FA times, and a second simulator that solves the 3D governing equation for the fluid's pressure as a function of time. To our knowledge, this is the first time that an optimization method has been developed to determine simultaneously the global minima of two distinct total energy functions. As a stringent test of the method's accuracy, we solve for flow of two immiscible fluids in the same porous medium, without using any data for the two-phase flow problem in the optimization process. We show that the optimal model, in addition to honoring the data, also yields accurate spatial distributions of phi and k, as well as providing accurate quantitative predictions for the single- and two-phase flow problems. The efficiency of the computations is discussed in detail.

  19. Effective equilibrium states in mixtures of active particles driven by colored noise

    NASA Astrophysics Data System (ADS)

    Wittmann, René; Brader, J. M.; Sharma, A.; Marconi, U. Marini Bettolo

    2018-01-01

    We consider the steady-state behavior of pairs of active particles having different persistence times and diffusivities. To this purpose we employ the active Ornstein-Uhlenbeck model, where the particles are driven by colored noises with exponential correlation functions whose intensities and correlation times vary from species to species. By extending Fox's theory to many components, we derive by functional calculus an approximate Fokker-Planck equation for the configurational distribution function of the system. After illustrating the predicted distribution in the solvable case of two particles interacting via a harmonic potential, we consider systems of particles repelling through inverse power-law potentials. We compare the analytic predictions to computer simulations for such soft-repulsive interactions in one dimension and show that at linear order in the persistence times the theory is satisfactory. This work provides the toolbox to qualitatively describe many-body phenomena, such as demixing and depletion, by means of effective pair potentials.

  20. On the accuracy of ERS-1 orbit predictions

    NASA Technical Reports Server (NTRS)

    Koenig, Rolf; Li, H.; Massmann, Franz-Heinrich; Raimondo, J. C.; Rajasenan, C.; Reigber, C.

    1993-01-01

    Since the launch of ERS-1, the D-PAF (German Processing and Archiving Facility) has regularly provided orbit predictions for the worldwide SLR (Satellite Laser Ranging) tracking network. The weekly distributed orbital elements are so-called tuned IRVs and tuned SAO elements. The tuning procedure, designed to improve the accuracy of the recovery of the orbit at the stations, is discussed based on numerical results. This shows that tuning of elements is essential for ERS-1 with the currently applied tracking procedures. The orbital elements are updated by daily distributed time bias functions. The generation of the time bias function is explained, and problems and numerical results are presented. The time bias function increases the prediction accuracy considerably. Finally, the quality assessment of ERS-1 orbit predictions is described. The accuracy is compiled for about 250 days since launch; the average accuracy lies in the range of 50-100 ms and has improved considerably.

  1. A class of Fourier integrals based on the electric potential of an elongated dipole.

    PubMed

    Skianis, Georgios Aim

    2014-01-01

    In the present paper the closed-form expressions of a class of non-tabulated Fourier integrals are derived. These integrals are associated with a group of functions in the space domain, which represent the electric potential of a distribution of elongated dipoles perpendicular to a flat surface. It is shown that the Fourier integrals are produced by the Fourier transform of the Green's function of the potential of the dipole distribution, times a definite integral in which the distribution of the polarization is involved. Therefore the form of this distribution controls the expression of the Fourier integral. Introducing various dipole distributions, the respective Fourier integrals are derived. These integrals may be useful in the quantitative interpretation of electric potential anomalies produced by elongated dipole distributions in the spatial frequency domain.

  2. Detecting changes in the spatial distribution of nitrate contamination in ground water

    USGS Publications Warehouse

    Liu, Z.-J.; Hallberg, G.R.; Zimmerman, D.L.; Libra, R.D.

    1997-01-01

    Many studies of ground water pollution in general and nitrate contamination in particular have often relied on a one-time investigation, tracking of individual wells, or aggregate summaries. Studies of changes in spatial distribution of contaminants over time are lacking. This paper presents a method to compare spatial distributions for possible changes over time. The large-scale spatial distribution at a given time can be considered as a surface over the area (a trend surface). The changes in spatial distribution from period to period can be revealed by the differences in the shape and/or height of surfaces. If such a surface is described by a polynomial function, changes in surfaces can be detected by testing statistically for differences in their corresponding polynomial functions. This method was applied to nitrate concentration in a population of wells in an agricultural drainage basin in Iowa, sampled in three different years. For the period of 1981-1992, the large-scale spatial distribution of nitrate concentration did not show significant change in the shape of spatial surfaces; while the magnitude of nitrate concentration in the basin, or height of the computed surfaces showed significant fluctuations. The change in magnitude of nitrate concentration is closely related to climatic variations, especially in precipitation. The lack of change in the shape of spatial surfaces means that either the influence of land use/nitrogen management was overshadowed by climatic influence, or the changes in land use/management occurred in a random fashion.

  3. Applications of physics to economics and finance: Money, income, wealth, and the stock market

    NASA Astrophysics Data System (ADS)

    Dragulescu, Adrian Antoniu

    Several problems arising in Economics and Finance are analyzed using concepts and quantitative methods from Physics. The dissertation is organized as follows: In the first chapter it is argued that in a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money must follow the exponential Boltzmann-Gibbs law characterized by an effective temperature equal to the average amount of money per economic agent. The emergence of Boltzmann-Gibbs distribution is demonstrated through computer simulations of economic models. A thermal machine which extracts a monetary profit can be constructed between two economic systems with different temperatures. The role of debt and models with broken time-reversal symmetry for which the Boltzmann-Gibbs law does not hold, are discussed. In the second chapter, using data from several sources, it is found that the distribution of income is described for the great majority of population by an exponential distribution, whereas the high-end tail follows a power law. From the individual income distribution, the probability distribution of income for families with two earners is derived and it is shown that it also agrees well with the data. Data on wealth is presented and it is found that the distribution of wealth has a structure similar to the distribution of income. The Lorenz curve and Gini coefficient were calculated and are shown to be in good agreement with both income and wealth data sets. In the third chapter, the stock-market fluctuations at different time scales are investigated. A model where stock-price dynamics is governed by a geometrical (multiplicative) Brownian motion with stochastic variance is proposed. The corresponding Fokker-Planck equation can be solved exactly. Integrating out the variance, an analytic formula for the time-dependent probability distribution of stock price changes (returns) is found. 
The formula is in excellent agreement with the Dow-Jones index for the time lags from 1 to 250 trading days. For time lags longer than the relaxation time of variance, the probability distribution can be expressed in a scaling form using a Bessel function. The Dow-Jones data follow the scaling function for seven orders of magnitude.
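    The first-chapter argument, that random conserved exchanges with no debt drive the money distribution toward an exponential Boltzmann-Gibbs law, is easy to reproduce in a toy simulation. The agent count, step count, and exchange amount below are arbitrary illustrative choices, not values from the dissertation:

```python
import numpy as np

# Closed-economy toy model: agents randomly pay a fixed amount to a random
# partner; total money is conserved and debt is forbidden. The stationary
# distribution approaches the exponential Boltzmann-Gibbs law with effective
# temperature T = average money per agent.
rng = np.random.default_rng(2)
n_agents, steps, delta = 200, 500_000, 1.0
money = np.full(n_agents, 20.0)          # total money conserved at 20 * N

pairs = rng.integers(0, n_agents, size=(steps, 2))
for i, j in pairs:
    if money[i] >= delta:                # payer cannot go into debt
        money[i] -= delta
        money[j] += delta

# For an exponential distribution, P(m < mean) = 1 - 1/e ≈ 0.63.
frac_below_mean = float(np.mean(money < money.mean()))
print(abs(money.mean() - 20.0) < 1e-9, 0.5 < frac_below_mean < 0.8)
```

    The fraction of agents below the mean settling near 1 - 1/e is a quick signature of the exponential form; a histogram of `money` makes the Boltzmann-Gibbs shape visible directly.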

  4. A complete analytical solution of the Fokker-Planck and balance equations for nucleation and growth of crystals

    NASA Astrophysics Data System (ADS)

    Makoveeva, Eugenya V.; Alexandrov, Dmitri V.

    2018-01-01

    This article is concerned with a new analytical description of the nucleation and growth of crystals in a metastable mushy layer (supercooled liquid or supersaturated solution) at the intermediate stage of a phase transition. The model under consideration, consisting of a non-stationary integro-differential system of governing equations for the distribution function and the metastability level, is solved analytically by means of the saddle-point technique for a Laplace-type integral in the case of arbitrary nucleation kinetics and time-dependent heat or mass sources in the balance equation. We demonstrate that the time-dependent distribution function approaches the stationary profile in the course of time. This article is part of the theme issue `From atomistic interfaces to dendritic patterns'.

  5. Time-frequency analysis of backscattered signals from diffuse radar targets

    NASA Astrophysics Data System (ADS)

    Kenny, O. P.; Boashash, B.

    1993-06-01

    The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. The authors discuss the time-frequency representation of the backscattered signal from a diffuse radar target. It is then shown that for point scatterers which are statistically dependent, or for which the reflectivity coefficient has a nonzero mean value, reconstruction using time-of-flight positron emission tomography techniques on time-frequency images is effective for estimating the scattering function of the target.

  6. Transition path time distributions for Lévy flights

    NASA Astrophysics Data System (ADS)

    Janakiraman, Deepika

    2018-07-01

    This paper presents an analytical study of transition path time distributions for Lévy noise-induced barrier crossing. Transition paths are short segments of the reactive trajectories that span the barrier region of the potential without spilling into the reactant/product wells. The time taken to traverse this segment is referred to as the transition path time. Since the transition path is devoid of excursions in the minimum, the corresponding time gives the exclusive barrier crossing time, unlike the mean first-passage time. This work explores the distribution of transition path times for superdiffusive barrier crossing analytically, which is made possible by approximating the barrier by an inverted parabola. Using this approximation, the distributions are evaluated in both the over- and under-damped limits of friction. The short-time behaviour of the distributions provides analytical evidence for single-step transition events, a feature of Lévy barrier crossing observed in prior simulation studies. The average transition path time is calculated as a function of the Lévy index (α), and the optimal value of α leading to the minimum average transition path time is discussed in both limits of friction. Langevin dynamics simulations corroborating the analytical results are also presented.

  7. Characterization of intermittency in renewal processes: Application to earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji

    2010-03-15

    We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point in the piecewise linear intermittent map. Thus, we provide a framework to understand a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalog. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variables, but that the conditional probability distribution functions in the tail obey the Weibull distribution.

  8. Observations of a free-energy source for intense electrostatic waves. [in upper atmosphere near upper hybrid resonance frequency

    NASA Technical Reports Server (NTRS)

    Kurth, W. S.; Frank, L. A.; Gurnett, D. A.; Burek, B. G.; Ashour-Abdalla, M.

    1980-01-01

    Significant progress has been made in understanding intense electrostatic waves near the upper hybrid resonance frequency in terms of the theory of multiharmonic cyclotron emission using a classical loss-cone distribution function as a model. Recent observations by Hawkeye 1 and GEOS 1 have verified the existence of loss-cone distributions in association with the intense electrostatic wave events; however, other observations by Hawkeye and ISEE have indicated that loss cones are not always observable during the wave events, and in fact other forms of free energy may also be responsible for the instability. Now, for the first time, a positively sloped feature in the perpendicular distribution function has been uniquely identified with intense electrostatic wave activity. Correspondingly, we suggest that the theory is flexible under substantial modifications of the model distribution function.

  9. The Cluster Variation Method: A Primer for Neuroscientists.

    PubMed

    Maren, Alianna J

    2016-09-30

    Effective Brain-Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of a non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.

  10. The Cluster Variation Method: A Primer for Neuroscientists

    PubMed Central

    Maren, Alianna J.

    2016-01-01

    Effective Brain–Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of a non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found. PMID:27706022

  11. Age slowing down in detection and visual discrimination under varying presentation times.

    PubMed

    Moret-Tatay, Carmen; Lemus-Zúñiga, Lenin-Guillermo; Tortosa, Diana Abad; Gamermann, Daniel; Vázquez-Martínez, Andrea; Navarro-Pardo, Esperanza; Conejero, J Alberto

    2017-08-01

    The reaction time has been described as a measure of perception, decision making, and other cognitive processes. The aim of this work is to examine age-related changes in executive functions in terms of demand load under varying presentation times. Two tasks were employed, a signal detection task and a discrimination task, performed by young and older university students. Furthermore, the response time distribution was characterized by an ex-Gaussian fit. The results indicated that the older participants were slower than the younger ones in both signal detection and discrimination. Moreover, the differences between the two processes were larger for the older participants, who also showed a higher distribution average except at the shortest and longest presentation times. The results suggest a general age-related slowing in both tasks across presentation times, except at the shortest and longest ones. Moreover, if these parameters are understood as a reflection of executive functions, these findings are consistent with the common view of age-related cognitive deficits as a decline in executive function. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
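    The ex-Gaussian characterization used in this study can be sketched with standard tools: scipy's exponnorm is the ex-Gaussian (a Gaussian convolved with an exponential) under the parameterization K = tau/sigma. The RT values below are synthetic, not the study's data:

```python
import numpy as np
from scipy import stats

# Synthetic response times (ms) from an ex-Gaussian:
# Gaussian component (mu, sigma) plus exponential component (tau).
rng = np.random.default_rng(4)
mu, sigma, tau = 400.0, 40.0, 120.0
rts = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

# Fit via maximum likelihood; 2.0 is a rough starting guess for K = tau/sigma.
K_hat, loc_hat, scale_hat = stats.exponnorm.fit(rts, 2.0)
tau_hat = K_hat * scale_hat          # recover tau from scipy's parameters
print(abs(loc_hat - mu) < 40, abs(tau_hat - tau) < 40)
```

    In the RT literature, mu and sigma are often read as the "automatic" component of responding and tau as the slow, attention-demanding tail, which is why the ex-Gaussian parameters are treated as indices of executive function.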

  12. Coordinated scheduling for dynamic real-time systems

    NASA Technical Reports Server (NTRS)

    Natarajan, Swaminathan; Zhao, Wei

    1994-01-01

    In this project, we addressed issues in coordinated scheduling for dynamic real-time systems. In particular, we concentrated on the design and implementation of a new distributed real-time system called R-Shell. The design objective of R-Shell is to provide computing support for space programs that have large, complex, fault-tolerant distributed real-time applications. In R-Shell, the approach is based on the concept of scheduling agents, which reside in the application run-time environment and are customized to provide just those resource management functions which are needed by the specific application. With this approach, we avoid the need for a sophisticated OS that provides a variety of generalized functionality, while still not burdening application programmers with heavy responsibility for resource management. In this report, we discuss the R-Shell approach, summarize the achievements of the project, and describe a preliminary prototype of the R-Shell system.

  13. Time distribution of heavy rainfall events in south west of Iran

    NASA Astrophysics Data System (ADS)

    Ghassabi, Zahra; kamali, G. Ali; Meshkatee, Amir-Hussain; Hajam, Sohrab; Javaheri, Nasrolah

    2016-07-01

    Accurate knowledge of the time distribution of rainfall is fundamental to many meteorological-hydrological studies, such as the use of surface-runoff information in the design of hydraulic structures, flood control and risk management, and river engineering. Since the main large dams of Iran are in the south-west of the country (i.e., the southern Zagros), this research investigates the temporal rainfall distribution with an analytical-numerical method to increase the accuracy of hydrological studies in Iran. The United States Soil Conservation Service (SCS) has estimated temporal rainfall distributions in various forms, and hydrological studies usually apply the same distribution functions in other areas of the world, including Iran, due to the lack of sufficient observational data. In this research, however, we first used the Weather Research and Forecasting (WRF) model to simulate selected storms over the south-west of Iran, and then fitted a three-parameter logistic function to the rainfall data in order to compute the temporal rainfall distribution. The domain of the WRF model is 30.5N-34N and 47.5E-52.5E, with a resolution of 0.08 degrees in latitude and longitude. We selected 35 heavy storms from the observed rainfall data set to simulate with the WRF model. Storm events were scrutinized independently of each other, and the best analytical three-parameter logistic function was fitted for each grid point. The results show that the value of the coefficient a of the logistic function, which indicates rainfall intensity, varies from a minimum of 0.14 to a maximum of 0.7. Furthermore, the values of the coefficient b of the logistic function, which indicates the rain delay of grid points from the start time of rainfall, vary from 1.6 in the south-west and east to more than 8 in the north and central parts of the studied area. In addition, rainfall intensities in the south-west of Iran are lower than those observed in the US or proposed by the SCS.
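
The curve-fitting step can be illustrated with scipy.optimize.curve_fit; the parameterization F(t) = c / (1 + exp(-a(t - b))) and all numbers below are assumptions for a synthetic cumulative rainfall curve, not the paper's data or its exact logistic form:

```python
# Sketch (hypothetical parameterization): fitting a three-parameter logistic
# F(t) = c / (1 + exp(-a*(t - b))) to a cumulative rainfall fraction curve,
# where a plays the role of intensity and b the delay from storm start.
import numpy as np
from scipy.optimize import curve_fit

def logistic3(t, a, b, c):
    return c / (1.0 + np.exp(-a * (t - b)))

t = np.linspace(0, 24, 49)                 # hours since storm start
true = logistic3(t, 0.5, 8.0, 1.0)         # synthetic "observed" curve
rng = np.random.default_rng(1)
obs = true + rng.normal(0, 0.01, t.size)   # small observational noise

(a, b, c), _ = curve_fit(logistic3, t, obs, p0=(0.3, 10.0, 1.0))
print(f"a={a:.2f} (intensity)  b={b:.1f} h (delay)  c={c:.2f}")
```

Repeating this fit at every grid point yields maps of a and b like those discussed in the abstract.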

  14. Determination of Anisotropic Ion Velocity Distribution Function in Intrinsic Gas Plasma. Theory.

    NASA Astrophysics Data System (ADS)

    Mustafaev, A.; Grabovskiy, A.; Murillo, O.; Soukhomlinov, V.

    2018-02-01

The first seven coefficients of the expansion of the energy and angular distribution functions in Legendre polynomials for Hg+ ions in Hg vapor plasma with the parameter E/P ≈ 400 V/(cm Torr) are measured for the first time using a planar one-sided probe. The analytic solution of the Boltzmann kinetic equation for ions in the plasma of their parent gas is obtained under conditions in which resonant charge exchange is the predominant process and ions acquire, over their mean free path, a velocity much higher than the characteristic thermal velocity of the atoms. The presence of an ambipolar field of arbitrary strength is taken into account. It is shown that the ion velocity distribution function is determined by two parameters and differs substantially from the Maxwellian distribution. Calculated drift velocities of He+ ions in He, Ar+ in Ar, and Hg+ in Hg agree with the available experimental data, and the calculated ion distribution function correctly describes the experimental data obtained from its measurement. The analysis shows that, despite the strong field, the ion velocity distribution functions are isotropic for ion velocities lower than the average thermal velocity of the atoms; with increasing ion velocity, the distribution becomes progressively more extended in the direction of the electric field.

  15. A statistical physics view of pitch fluctuations in the classical music from Bach to Chopin: evidence for scaling.

    PubMed

    Liu, Lu; Wei, Jianrong; Zhang, Huishu; Xin, Jianhong; Huang, Jiping

    2013-01-01

Because classical music has greatly affected our life and culture over its long history, it has attracted extensive attention from researchers seeking to understand the laws behind it. Based on statistical physics, here we use a different method to investigate classical music, namely, by analyzing cumulative distribution functions (CDFs) and autocorrelation functions of pitch fluctuations in compositions. We analyze 1,876 compositions of five representative classical composers across 164 years, from Bach, to Mozart, to Beethoven, to Mendelssohn, and to Chopin. We report that the biggest pitch fluctuations of a composer gradually increase as time evolves from Bach's time to Mendelssohn's and Chopin's time. In particular, for the compositions of a composer, the positive and negative tails of the CDF of pitch fluctuations are distributed not only as power laws (with the scale-free property), but also symmetrically (namely, the probability of a treble following a bass and that of a bass following a treble are basically the same for each composer). The power-law exponent decreases as time elapses. Further, we calculate the autocorrelation function of the pitch fluctuations, which follows a power law for each composer. The power-law exponents vary among the composers, indicating different degrees of long-range correlation between notes. This work not only suggests a way to understand and develop music from the viewpoint of statistical physics, but also enriches traditional statistical physics with the analysis of music.
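
The power-law tail analysis can be sketched by estimating a tail exponent from an empirical complementary CDF; the Pareto sample, the fitting window, and the least-squares approach below are assumptions for illustration, not the paper's data or estimator:

```python
# Sketch: estimating a power-law tail exponent from the empirical
# complementary CDF via a log-log least-squares slope.
import numpy as np

rng = np.random.default_rng(2)
alpha = 3.0                                    # assumed true tail exponent
x = rng.pareto(alpha, 50_000) + 1.0            # Pareto: P(X > x) ~ x**-alpha

xs = np.sort(x)
ccdf = 1.0 - np.arange(1, xs.size + 1) / xs.size
tail = xs > 5.0                                # fit only the tail region
tail &= ccdf > 0                               # drop the final point (CCDF = 0)
slope, _ = np.polyfit(np.log(xs[tail]), np.log(ccdf[tail]), 1)
alpha_hat = -slope
print(f"estimated tail exponent: {alpha_hat:.2f}")
```

Maximum-likelihood (Hill-type) estimators are usually preferred over this least-squares fit in practice; the sketch only shows the scale-free tail idea.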

  16. Numerically exact full counting statistics of the nonequilibrium Anderson impurity model

    NASA Astrophysics Data System (ADS)

    Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; Cohen, Guy

    2018-03-01

    The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n -electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.

  17. Numerically exact full counting statistics of the nonequilibrium Anderson impurity model

    DOE PAGES

    Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; ...

    2018-03-06

The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n-electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.

  18. Numerically exact full counting statistics of the nonequilibrium Anderson impurity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ridley, Michael; Singh, Viveka N.; Gull, Emanuel

The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n-electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.

  19. Invariance in the recurrence of large returns and the validation of models of price dynamics

    NASA Astrophysics Data System (ADS)

    Chang, Lo-Bin; Geman, Stuart; Hsieh, Fushing; Hwang, Chii-Ruey

    2013-08-01

    Starting from a robust, nonparametric definition of large returns (“excursions”), we study the statistics of their occurrences, focusing on the recurrence process. The empirical waiting-time distribution between excursions is remarkably invariant to year, stock, and scale (return interval). This invariance is related to self-similarity of the marginal distributions of returns, but the excursion waiting-time distribution is a function of the entire return process and not just its univariate probabilities. Generalized autoregressive conditional heteroskedasticity (GARCH) models, market-time transformations based on volume or trades, and generalized (Lévy) random-walk models all fail to fit the statistical structure of excursions.

  20. Estimating Age Distributions of Base Flow in Watersheds Underlain by Single and Dual Porosity Formations Using Groundwater Transport Simulation and Weighted Weibull Functions

    NASA Astrophysics Data System (ADS)

    Sanford, W. E.

    2015-12-01

Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (³H, ³He, CFCs and SF₆) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. 
For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.
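
The exponential-versus-Weibull comparison can be sketched with scipy.stats on a synthetic age sample; the shape and scale values below are assumptions for illustration, not the study's calibrated results:

```python
# Sketch: comparing a one-parameter exponential fit and a two-parameter
# Weibull fit to a simulated base-flow age sample via log-likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
ages = rng.weibull(1.5, 5000) * 20.0           # synthetic ages, shape != 1

# Exponential: single scale parameter (location fixed at 0).
loc_e, scale_e = stats.expon.fit(ages, floc=0)
ll_expon = stats.expon.logpdf(ages, loc_e, scale_e).sum()

# Weibull: shape plus scale (location fixed at 0).
c, loc_w, scale_w = stats.weibull_min.fit(ages, floc=0)
ll_weib = stats.weibull_min.logpdf(ages, c, loc_w, scale_w).sum()

print(f"Weibull shape ~ {c:.2f}; delta log-likelihood = {ll_weib - ll_expon:.1f}")
```

Whenever the shape parameter differs from 1, the extra Weibull parameter buys a substantially better fit, mirroring the finding quoted above.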

  1. Microscopic analysis of currency and stock exchange markets.

    PubMed

    Kador, L

    1999-08-01

    Recently it was shown that distributions of short-term price fluctuations in foreign-currency exchange exhibit striking similarities to those of velocity differences in turbulent flows. Similar profiles represent the spectral-diffusion behavior of impurity molecules in disordered solids at low temperatures. It is demonstrated that a microscopic statistical theory of the spectroscopic line shapes can be applied to the other two phenomena. The theory interprets the financial data in terms of information which becomes available to the traders and their reactions as a function of time. The analysis shows that there is no characteristic time scale in financial markets, but that instead stretched-exponential or algebraic memory functions yield good agreement with the price data. For an algebraic function, the theory yields truncated Lévy distributions which are often observed in stock exchange markets.

  2. Microscopic analysis of currency and stock exchange markets

    NASA Astrophysics Data System (ADS)

    Kador, L.

    1999-08-01

    Recently it was shown that distributions of short-term price fluctuations in foreign-currency exchange exhibit striking similarities to those of velocity differences in turbulent flows. Similar profiles represent the spectral-diffusion behavior of impurity molecules in disordered solids at low temperatures. It is demonstrated that a microscopic statistical theory of the spectroscopic line shapes can be applied to the other two phenomena. The theory interprets the financial data in terms of information which becomes available to the traders and their reactions as a function of time. The analysis shows that there is no characteristic time scale in financial markets, but that instead stretched-exponential or algebraic memory functions yield good agreement with the price data. For an algebraic function, the theory yields truncated Lévy distributions which are often observed in stock exchange markets.

  3. Fraction number of trapped atoms and velocity distribution function in sub-recoil laser cooling scheme

    NASA Astrophysics Data System (ADS)

    Alekseev, V. A.; Krylova, D. D.

    1996-02-01

The analytical investigation of Bloch equations is used to describe the main features of the 1D velocity-selective coherent population trapping cooling scheme. For the initial stage of cooling the fraction of cooled atoms is derived in the case of a Gaussian initial velocity distribution. At very long times of interaction the fraction of cooled atoms and the velocity distribution function are described by simple analytical formulae and do not depend on the initial distribution. These results are in good agreement with those of Bardou, Bouchaud, Emile, Aspect and Cohen-Tannoudji based on statistical analysis in terms of Lévy flights and with Monte Carlo simulations of the process.

  4. Evidence for criticality in financial data

    NASA Astrophysics Data System (ADS)

    Ruiz, G.; de Marcos, A. F.

    2018-01-01

We provide evidence that cumulative distributions of absolute normalized returns for the 100 American companies with the highest market capitalization uncover critical behavior across different time scales Δt. Such cumulative distributions, in accordance with a variety of complex systems (financial ones included), can be modeled by the cumulative distribution functions of q-Gaussians, the distribution that, in the context of nonextensive statistical mechanics, maximizes a non-Boltzmannian entropy. These q-Gaussians are characterized by two parameters, namely (q, β), that are uniquely defined by Δt. From these dependencies, we find a monotonic relationship between q and β, which can be seen as evidence of criticality. We numerically determine the various exponents which characterize this criticality.

  5. Solutions to an advanced functional partial differential equation of the pantograph type

    PubMed Central

    Zaidi, Ali A.; Van Brunt, B.; Wake, G. C.

    2015-01-01

    A model for cells structured by size undergoing growth and division leads to an initial boundary value problem that involves a first-order linear partial differential equation with a functional term. Here, size can be interpreted as DNA content or mass. It has been observed experimentally and shown analytically that solutions for arbitrary initial cell distributions are asymptotic as time goes to infinity to a certain solution called the steady size distribution. The full solution to the problem for arbitrary initial distributions, however, is elusive owing to the presence of the functional term and the paucity of solution techniques for such problems. In this paper, we derive a solution to the problem for arbitrary initial cell distributions. The method employed exploits the hyperbolic character of the underlying differential operator, and the advanced nature of the functional argument to reduce the problem to a sequence of simple Cauchy problems. The existence of solutions for arbitrary initial distributions is established along with uniqueness. The asymptotic relationship with the steady size distribution is established, and because the solution is known explicitly, higher-order terms in the asymptotics can be readily obtained. PMID:26345391

  6. Solutions to an advanced functional partial differential equation of the pantograph type.

    PubMed

    Zaidi, Ali A; Van Brunt, B; Wake, G C

    2015-07-08

    A model for cells structured by size undergoing growth and division leads to an initial boundary value problem that involves a first-order linear partial differential equation with a functional term. Here, size can be interpreted as DNA content or mass. It has been observed experimentally and shown analytically that solutions for arbitrary initial cell distributions are asymptotic as time goes to infinity to a certain solution called the steady size distribution. The full solution to the problem for arbitrary initial distributions, however, is elusive owing to the presence of the functional term and the paucity of solution techniques for such problems. In this paper, we derive a solution to the problem for arbitrary initial cell distributions. The method employed exploits the hyperbolic character of the underlying differential operator, and the advanced nature of the functional argument to reduce the problem to a sequence of simple Cauchy problems. The existence of solutions for arbitrary initial distributions is established along with uniqueness. The asymptotic relationship with the steady size distribution is established, and because the solution is known explicitly, higher-order terms in the asymptotics can be readily obtained.

  7. Property Values Associated with the Failure of Individual Links in a System with Multiple Weak and Strong Links.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.

Representations are developed and illustrated for the distribution of link property values at the time of link failure in the presence of aleatory uncertainty in link properties. The following topics are considered: (i) defining properties for weak links and strong links, (ii) cumulative distribution functions (CDFs) for link failure time, (iii) integral-based derivation of CDFs for link property at time of link failure, (iv) sampling-based approximation of CDFs for link property at time of link failure, (v) verification of integral-based and sampling-based determinations of CDFs for link property at time of link failure, (vi) distributions of link properties conditional on time of link failure, and (vii) equivalence of two different integral-based derivations of CDFs for link property at time of link failure.

  8. Software Modules for the Proximity-1 Space Link Interleaved Time Synchronization (PITS) Protocol

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Veregge, John R.; Gao, Jay L.; Clare, Loren P.; Mills, David

    2012-01-01

The Proximity-1 Space Link Interleaved Time Synchronization (PITS) protocol provides time distribution and synchronization services for space systems. A software prototype implementation of the PITS algorithm has been developed that also provides the test harness to evaluate the key functionalities of PITS with a simulated data source and sink. PITS integrates time synchronization functionality into the link layer of the CCSDS Proximity-1 Space Link Protocol. The software prototype implements the network packet format, data structures, and transmit- and receive-timestamp functions for a time server and a client. The software also simulates the transmit- and receive-timestamp exchanges via UDP (User Datagram Protocol) sockets between a time server and a time client, and produces relative time offsets and delay estimates.
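
A four-timestamp offset/delay computation in the NTP style is sketched below as an illustration of the general timestamp-exchange idea; this is not the actual PITS packet format or algorithm, which are defined by the CCSDS protocol:

```python
# Illustrative only: NTP-style offset/delay estimation from a single
# client-server timestamp exchange.
def offset_and_delay(t1, t2, t3, t4):
    """t1: client send, t2: server receive, t3: server send, t4: client receive."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated server-minus-client clock offset
    delay = (t4 - t1) - (t3 - t2)            # estimated round-trip path delay
    return offset, delay

# Client clock 5 s behind the server; 0.1 s one-way delay in each direction.
off, dly = offset_and_delay(t1=100.0, t2=105.1, t3=105.2, t4=100.3)
print(f"offset={off:.3f} s  delay={dly:.3f} s")
```

The formula assumes symmetric path delays; asymmetry biases the offset estimate, which is one reason link-layer schemes with precise hardware timestamps are attractive.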

  9. Thermomechanical Fractional Model of TEMHD Rotational Flow

    PubMed Central

    Hamza, F.; Abd El-Latief, A.; Khatan, W.

    2017-01-01

In this work, the fractional mathematical model of an unsteady rotational flow of Xanthan gum (XG) between two cylinders in the presence of a transverse magnetic field is studied. The model contains two fractional parameters, α and β, representing thermomechanical effects. The Laplace transform is used to obtain the numerical solutions. The influence of the fractional parameters on the field distributions (temperature, velocity, stress, and electric current) is discussed graphically, as is the relationship between the rotation of the two cylinders and the fractional parameters, for small and large values of time. PMID:28045941

  10. Spectral decomposition of seismic data with reassigned smoothed pseudo Wigner-Ville distribution

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoyang; Liu, Tianyou

    2009-07-01

Seismic signals are nonstationary, mainly due to absorption and attenuation of seismic energy in strata. For spectral decomposition of seismic data, the conventional short-time Fourier transform (STFT) limits temporal and spectral resolution through a predefined window length. The continuous wavelet transform (CWT) uses dilation and translation of a wavelet to produce a time-scale map, but the wavelets should be orthogonal to obtain satisfactory resolution. The less commonly applied Wigner-Ville distribution (WVD), while superior in energy concentration, suffers from cross-term interference (CTI) when signals are multi-component. To reduce the impact of CTI, the Cohen class uses a kernel function as a low-pass filter, but this also weakens the energy concentration of the auto-terms. In this paper, we employ the smoothed pseudo Wigner-Ville distribution (SPWVD) with a Gaussian kernel function to reduce CTI in the time and frequency domains, and then reassign the values of the SPWVD (the reassigned SPWVD, RSPWVD) according to the center of gravity of the energy region under consideration, so that the concentration of the distribution is maintained. We apply the method to a multi-component synthetic seismic record and compare the result with STFT and CWT spectra. Two field examples reveal that the RSPWVD can potentially be applied to detect low-frequency shadows caused by hydrocarbons and to delineate the spatial distribution of anomalous geological bodies more precisely.
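
A minimal discrete Wigner-Ville sketch (without the smoothing and reassignment steps of the paper) illustrates how the lag-product kernel localizes a tone in time-frequency; the signal, window length, and frequency-axis convention below are assumptions for illustration:

```python
# Minimal discrete pseudo-Wigner-Ville sketch:
# W[n, k] = FFT over lag m of x[n+m] * conj(x[n-m]).
import numpy as np

def pseudo_wvd(x, nlag=64):
    x = np.asarray(x, dtype=complex)
    n_samples = x.size
    W = np.zeros((n_samples, 2 * nlag))
    m = np.arange(-nlag, nlag)
    for n in range(n_samples):
        valid = (n + m >= 0) & (n + m < n_samples) & (n - m >= 0) & (n - m < n_samples)
        kern = np.zeros(2 * nlag, dtype=complex)
        kern[valid] = x[n + m[valid]] * np.conj(x[n - m[valid]])
        # ifftshift puts the m = 0 lag at index 0 before the FFT
        W[n] = np.real(np.fft.fft(np.fft.ifftshift(kern)))
    return W

fs, nlag = 256, 64
t = np.arange(fs) / fs
sig = np.exp(2j * np.pi * 40.0 * t)          # single 40 Hz analytic tone
W = pseudo_wvd(sig, nlag)
peak_bin = int(np.argmax(W[fs // 2]))        # strongest bin at mid-signal
freq_hz = peak_bin * fs / (2 * (2 * nlag))   # half-lag kernel halves the frequency axis
print(freq_hz)
```

For multi-component signals this plain WVD exhibits the cross-term interference discussed above, which is what the smoothing windows and reassignment of the SPWVD are designed to suppress.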

  11. Extinction-sedimentation inversion technique for measuring size distribution of artificial fogs

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Vaughan, O. H.

    1978-01-01

In measuring the size distribution of artificial fog particles, it is important that the natural state of the particles not be disturbed by the measuring device, such as occurs when samples are drawn through tubes. This paper describes a method for carrying out such a measurement by allowing the fog particles to settle in quiet air inside an enclosure traversed by a parallel beam of light, which is used to measure the optical depth as a function of time. An analytic function fitted to the optical-depth decay curve can be directly inverted to yield the size distribution. Results of one such experiment performed on artificial fogs are shown as an example. The forward-scattering corrections to the measured extinction coefficient are also discussed, with the aim of optimizing the experimental design so that the error due to forward scattering is minimized.

  12. The Self-Organization of a Spoken Word

    PubMed Central

    Holden, John G.; Rajaraman, Srinivasan

    2012-01-01

Pronunciation-time probability density and hazard functions from large speeded word-naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics (interaction-dominant dynamics). Lognormal and inverse power-law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power-law distributions offered better descriptions of the participants' distributions than the ex-Gaussian or ex-Wald, alternatives corresponding to additive, superposed component processes. The evidence for interaction-dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation-time distributions. PMID:22783213

  13. Turbulent transport with intermittency: Expectation of a scalar concentration.

    PubMed

    Rast, Mark Peter; Pinton, Jean-François; Mininni, Pablo D

    2016-04-01

Scalar transport by turbulent flows is best described in terms of Lagrangian parcel motions. Here we measure the Eulerian distance traveled along Lagrangian trajectories in a simple point-vortex flow to determine the probabilistic impulse response function for scalar transport in the absence of molecular diffusion. As expected, the mean squared Eulerian displacement scales ballistically at very short times and diffusively at very long times, with the displacement distribution at any given time approximating that of a random walk. However, significant deviations of the displacement distributions from a Rayleigh distribution are found. The probability of long-distance transport is reduced over inertial-range time scales due to spatial and temporal intermittency. This can be modeled as a series of trapping events with durations uniformly distributed below the Eulerian integral time scale. The probability of long-distance transport is, on the other hand, enhanced beyond that of the random walk both for times shorter than the Lagrangian integral time and for times longer than the Eulerian integral time. The very short-time enhancement reflects the underlying Lagrangian velocity distribution, while that at very long times results from the spatial and temporal variation of the flow at the largest scales. The probabilistic impulse response function, and with it the expectation value of the scalar concentration at any point in space and time, can be modeled using only the evolution of the lowest spatial wave-number modes (the mean and the lowest harmonic) and an eddy-based constrained random walk that captures the essential velocity phase relations associated with advection by vortex motions. Preliminary examination of Lagrangian tracers in three-dimensional homogeneous isotropic turbulence suggests that transport in that setting can be similarly modeled.
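
As a baseline for the deviations discussed above, the Rayleigh displacement statistics of an uncorrelated random walk can be reproduced numerically; the walker count, step count, and seed are arbitrary choices:

```python
# Sketch: for an uncorrelated 2-D random walk, the net displacement magnitude
# at a fixed time is Rayleigh distributed and the mean squared displacement
# grows linearly with the number of steps (diffusive scaling).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
steps = rng.standard_normal((10_000, 200, 2))      # 10,000 walkers, 200 steps, 2-D
disp = np.linalg.norm(steps.sum(axis=1), axis=1)   # net displacement magnitudes

msd = (disp ** 2).mean()                           # expect ~ 2 * 200 = 400
loc, scale = stats.rayleigh.fit(disp, floc=0)
ks = stats.kstest(disp, stats.rayleigh(scale=scale).cdf).statistic
print(f"MSD = {msd:.0f} (diffusive: ~400), KS distance to Rayleigh = {ks:.4f}")
```

Intermittent flows show up precisely as departures of `disp` from this Rayleigh baseline, enhanced or suppressed in the tails depending on the time scale.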

  14. Using special functions to model the propagation of airborne diseases

    NASA Astrophysics Data System (ADS)

    Bolaños, Daniela

    2014-06-01

Some special functions of mathematical physics are used to obtain a mathematical model of the propagation of airborne diseases. In particular, we study the propagation of tuberculosis in closed rooms and model it using the error function and the Bessel function. In the model, infected individuals emit pathogens into the environment, and these infect other individuals who absorb them. The evolution in time of the concentration of pathogens in the environment is computed in terms of error functions. The evolution in time of the number of susceptible individuals is expressed by a differential equation that contains the error function and is solved numerically for different parametric simulations. The evolution in time of the number of infected individuals is plotted for each numerical simulation. On the other hand, the spatial distribution of the pathogen around the source of infection is represented by the Bessel function K0. The spatial and temporal distribution of the number of infected individuals is computed and plotted for several numerical simulations. All computations were made using computer algebra software, specifically Maple. It is expected that the analytical results we obtained will allow the design of treatment rooms and ventilation systems that reduce the risk of the spread of tuberculosis.
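
Two standard building blocks of such models can be evaluated with scipy.special; the slab-diffusion solution and the K0 decay profile below are generic textbook forms, and all parameters are assumptions rather than the paper's model:

```python
# Hedged sketch (not the paper's exact model): an erf-based solution of the
# 1-D diffusion equation for an initial slab of pathogen, and the modified
# Bessel function K0 as a steady 2-D decay profile around a point source.
import numpy as np
from scipy.special import erf, kv

def slab_concentration(x, t, a=1.0, D=0.1, c0=1.0):
    """C(x,t) for initial concentration c0 on |x| < a, diffusion coefficient D."""
    s = np.sqrt(4.0 * D * t)
    return 0.5 * c0 * (erf((a - x) / s) + erf((a + x) / s))

x = np.linspace(-5, 5, 201)
c = slab_concentration(x, t=2.0)
mass = c.sum() * (x[1] - x[0])   # diffusion conserves the initial mass (= 2*a*c0)

r = np.array([0.5, 1.0, 2.0])
k0 = kv(0, r)                    # K0(r/lambda) with lambda = 1: monotone decay in r
print(f"conserved mass ~ {mass:.3f}; K0 profile: {k0.round(3)}")
```

The erf terms capture the temporal spread of the concentration and K0 its radial decay, matching the two roles the abstract assigns to these special functions.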

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

In a previous paper, Smallwood and Paez (1991) showed how to generate realizations of partially coherent stationary normal time histories with a specified cross-spectral density matrix. This procedure is generalized to the case of multiple inputs with a specified cross-spectral density function and a specified marginal probability density function (pdf) for each of the inputs. The specified pdfs are not required to be Gaussian. A zero-memory nonlinear (ZMNL) function is developed for each input to transform a Gaussian or normal time history into a time history with a specified non-Gaussian distribution. The transformation functions have the property that a transformed time history will have nearly the same auto-spectral density as the original time history. A vector of Gaussian time histories is then generated with the specified cross-spectral density matrix. These waveforms are then transformed into the required time-history realizations using the ZMNL function.
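
The core ZMNL idea can be sketched as the monotone map y = F_target^{-1}(Φ(x)); the gamma target below is a hypothetical non-Gaussian marginal chosen for illustration, not one from the paper:

```python
# Sketch of a zero-memory nonlinear (ZMNL) transform: map a Gaussian series
# to a target marginal via y = F_target^{-1}(Phi(x)). Because the map is
# monotone, the rank order of the series is preserved.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.standard_normal(100_000)           # Gaussian input time history

target = stats.gamma(a=2.0, scale=1.5)     # hypothetical non-Gaussian marginal
y = target.ppf(stats.norm.cdf(x))          # zero-memory nonlinear transform

# The transformed series follows the target marginal distribution.
ks = stats.kstest(y, target.cdf).statistic
print(f"KS distance to target marginal: {ks:.4f}")
```

Applied sample by sample to each input of a correlated Gaussian vector process, this map yields the specified marginals while approximately retaining the spectral structure, which is the property the abstract highlights.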

  16. A High-Speed, Real-Time Visualization and State Estimation Platform for Monitoring and Control of Electric Distribution Systems: Implementation and Field Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundstrom, Blake; Gotseff, Peter; Giraldez, Julieta

Continued deployment of renewable and distributed energy resources is fundamentally changing the way that electric distribution systems are controlled and operated; more sophisticated active system control and greater situational awareness are needed. Real-time measurements and distribution system state estimation (DSSE) techniques enable more sophisticated system control and, when combined with visualization applications, greater situational awareness. This paper presents a novel demonstration of a high-speed, real-time DSSE platform and related control and visualization functionalities, implemented using existing open-source software and distribution system monitoring hardware. Live scrolling strip charts of meter data and intuitive annotated map visualizations of the entire state (obtained via DSSE) of a real-world distribution circuit are shown. The DSSE implementation is validated to demonstrate provision of accurate voltage data. This platform allows for enhanced control and situational awareness using only a minimum quantity of distribution system measurement units and modest data and software infrastructure.

  17. Two Person Zero-Sum Semi-Markov Games with Unknown Holding Times Distribution on One Side: A Discounted Payoff Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minjarez-Sosa, J. Adolfo, E-mail: aminjare@gauss.mat.uson.mx; Luque-Vasquez, Fernando

    This paper deals with two person zero-sum semi-Markov games with a possibly unbounded payoff function, under a discounted payoff criterion. Assuming that the distribution of the holding times H is unknown for one of the players, we combine suitable methods of statistical estimation of H with control procedures to construct an asymptotically discount optimal pair of strategies.

  18. Improved work zone design guidelines and enhanced model of travel delays in work zones : Phase I, portability and scalability of interarrival and service time probability distribution functions for different locations in Ohio and the establishment of impr

    DOT National Transportation Integrated Search

    2006-01-01

    The project focuses on two major issues - the improvement of current work zone design practices and an analysis of vehicle interarrival time (IAT) and speed distributions for the development of a digital computer simulation model for queues and t...

  19. Earthquakes: Recurrence and Interoccurrence Times

    NASA Astrophysics Data System (ADS)

    Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.; Yakovlev, G.; Goltz, C.; Newman, W. I.

    2008-04-01

    The purpose of this paper is to discuss the statistical distributions of recurrence times of earthquakes. Recurrence times are the time intervals between successive earthquakes at a specified location on a specified fault. Although a number of statistical distributions have been proposed for recurrence times, we argue in favor of the Weibull distribution. The Weibull distribution is the only distribution that has a scale-invariant hazard function. We consider three sets of characteristic earthquakes on the San Andreas fault: (1) The Parkfield earthquakes, (2) the sequence of earthquakes identified by paleoseismic studies at the Wrightwood site, and (3) an example of a sequence of micro-repeating earthquakes at a site near San Juan Bautista. In each case we make a comparison with the applicable Weibull distribution. The number of earthquakes in each of these sequences is too small to draw definitive conclusions. To overcome this difficulty we consider a sequence of earthquakes obtained from a one million year “Virtual California” simulation of San Andreas earthquakes. Very good agreement with a Weibull distribution is found. We also obtain recurrence statistics for two other model studies. The first is a modified forest-fire model and the second is a slider-block model. In both cases good agreements with Weibull distributions are obtained. Our conclusion is that the Weibull distribution is the preferred distribution for estimating the risk of future earthquakes on the San Andreas fault and elsewhere.
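    The argument above hinges on the Weibull distribution being the only one with a scale-invariant (pure power-law) hazard function. A minimal numerical sketch of that property (function and variable names are ours, not from the paper):

```python
import numpy as np

def weibull_hazard(t, k, lam):
    """Weibull hazard h(t) = f(t)/S(t) = (k/lam) * (t/lam)**(k - 1)."""
    return (k / lam) * (t / lam) ** (k - 1)

# The hazard is a pure power law in t, so it is scale invariant: rescaling
# time by a factor c only rescales the hazard by the constant c**(k - 1).
k, lam = 1.5, 100.0
t = np.array([1.0, 2.0, 5.0])
ratio = weibull_hazard(3.0 * t, k, lam) / weibull_hazard(t, k, lam)
```

The ratio is the same for every t, which is exactly the scale invariance the abstract appeals to.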

  20. On the impact of neutron star binaries' natal-kick distribution on the Galactic r-process enrichment

    NASA Astrophysics Data System (ADS)

    Safarzadeh, Mohammadtaher; Côté, Benoit

    2017-11-01

    We study the impact of the neutron star binaries' (NSBs) natal-kick distribution on the galactic r-process enrichment. We model the growth of a Milky Way type halo based on N-body simulation results and its star formation history based on multi-epoch abundance matching techniques. We consider that the NSBs that merge well beyond the galaxy's effective radius (>2 × Reff) do not contribute to the galactic r-process enrichment. Assuming a power-law delay-time distribution (DTD) function (∝ t^(-1)) with t_min = 30 Myr for binaries' coalescence time-scales and an exponential profile for their natal-kick distribution with an average value of 180 km s^(-1), we show that up to ~40 per cent of all formed NSBs do not contribute to the r-process enrichment by z = 0, either because they merge far from the galaxy at a given redshift (up to ~25 per cent) or have not yet merged by today (~15 per cent). Our result is largely insensitive to the details of the DTD function. A constant coalescence time-scale of 100 Myr approximates the adopted DTD well, although with 30 per cent of the NSBs ending up not contributing to the r-process enrichment. Our results, although rather dependent on the adopted natal-kick distribution, represent the first step towards estimating the impact of natal kicks and DTD functions on the r-process enrichment of galaxies that would need to be incorporated in the hydrodynamical simulations.
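    A DTD proportional to 1/t can be sampled directly by inverse transform. A minimal sketch, assuming a truncated ∝ 1/t law; the upper cut-off here is an illustrative value of our choosing (the abstract specifies only t_min = 30 Myr):

```python
import numpy as np

def sample_delay_times(t_min, t_max, size, rng):
    """Inverse-transform samples of coalescence delay times from a DTD
    proportional to 1/t on [t_min, t_max].  The CDF is
    log(t/t_min) / log(t_max/t_min), so t = t_min * (t_max/t_min)**u
    for u uniform on [0, 1)."""
    u = rng.random(size)
    return t_min * (t_max / t_min) ** u

rng = np.random.default_rng(0)
# t_max = 1.4e4 Myr is an assumed illustrative cut-off, not from the paper.
delays = sample_delay_times(30.0, 1.4e4, 100_000, rng)
```

The log-uniform form makes the theoretical median simply sqrt(t_min * t_max), a quick sanity check on the sampler.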

  1. Two-sided Topp-Leone Weibull distribution

    NASA Astrophysics Data System (ADS)

    Podeang, Krittaya; Bodhisuwan, Winai

    2017-11-01

    In this paper, we introduce a general class of lifetime distributions, called the two-sided Topp-Leone generated family of distributions. A special case of the new family is the two-sided Topp-Leone Weibull distribution. This distribution uses the two-sided Topp-Leone distribution as a generator for the Weibull distribution. The two-sided Topp-Leone Weibull distribution presents several shapes, such as decreasing, unimodal, and bimodal, which make it more flexible than the Weibull distribution. Its quantile function is presented. Parameter estimation by maximum likelihood is discussed. The proposed distribution is applied to a strength data set, a data set of remission times of bladder cancer patients, and a time-to-failure data set for turbochargers. We compare the proposed distribution to the Topp-Leone generated Weibull distribution. In conclusion, the two-sided Topp-Leone Weibull distribution performs similarly to the Topp-Leone generated Weibull distribution on the first and second data sets, but fits the third data set better.

  2. Spectral analysis of pair-correlation bandwidth: application to cell biology images.

    PubMed

    Binder, Benjamin J; Simpson, Matthew J

    2015-02-01

    Images from cell biology experiments often indicate the presence of cell clustering, which can provide insight into the mechanisms driving the collective cell behaviour. Pair-correlation functions provide quantitative information about the presence, or absence, of clustering in a spatial distribution of cells. This is because the pair-correlation function describes the ratio of the abundance of pairs of cells, separated by a particular distance, relative to a randomly distributed reference population. Pair-correlation functions are often presented as a kernel density estimate where the frequency of pairs of objects is grouped using a particular bandwidth (or bin width), Δ>0. The choice of bandwidth has a dramatic impact: choosing Δ too large produces a pair-correlation function that contains insufficient information, whereas choosing Δ too small produces a pair-correlation signal dominated by fluctuations. Presently, there is little guidance available regarding how to make an objective choice of Δ. We present a new technique to choose Δ by analysing the power spectrum of the discrete Fourier transform of the pair-correlation function. Using synthetic simulation data, we confirm that our approach allows us to objectively choose Δ such that the appropriately binned pair-correlation function captures known features in uniform and clustered synthetic images. We also apply our technique to images from two different cell biology assays. The first assay corresponds to an approximately uniform distribution of cells, while the second assay involves a time series of images of a cell population which forms aggregates over time. The appropriately binned pair-correlation function allows us to make quantitative inferences about the average aggregate size, as well as quantifying how the average aggregate size changes with time.
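    The two ingredients of the method — a binned pair-correlation estimate and the power spectrum of its DFT — can be sketched as follows. This is a simplified illustration under our own assumptions (edge effects ignored, unit-square domain), not the authors' implementation:

```python
import numpy as np

def binned_pair_correlation(xy, delta, r_max, domain_area):
    """Histogram of unique pairwise distances with bin width delta, normalized
    by the pair counts expected for a completely random pattern of the same
    density (edge effects are ignored in this sketch)."""
    n = len(xy)
    d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(axis=-1))
    dists = d[np.triu_indices(n, k=1)]          # unique pairs only
    edges = np.arange(0.0, r_max + delta, delta)
    counts, _ = np.histogram(dists, bins=edges)
    density = n / domain_area
    # expected unique-pair count per annulus for a uniform random pattern
    expected = 0.5 * n * density * np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    return 0.5 * (edges[:-1] + edges[1:]), counts / expected

def spectrum(pcf):
    """Power spectrum of the mean-removed pair-correlation signal, the
    quantity the authors analyse to choose the bandwidth objectively."""
    g = pcf - pcf.mean()
    return np.abs(np.fft.rfft(g)) ** 2

rng = np.random.default_rng(1)
xy = rng.random((200, 2))                       # uniform points in unit square
r, g = binned_pair_correlation(xy, 0.05, 0.5, 1.0)
power = spectrum(g)
```

A too-small Δ would show up here as excess high-frequency power in `power`, which is the signature the bandwidth-selection rule keys on.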

  3. Event-Triggered Distributed Approximate Optimal State and Output Control of Affine Nonlinear Interconnected Systems.

    PubMed

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-06-08

    This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input affine nonlinear subsystems using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of cost functions of individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy forward in time, neural networks are employed to reconstruct the unknown optimal value function at each subsystem online. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time for the learning algorithm. The development is based on the observation that, in event-triggered feedback, the sampling instants are dynamic and result in variable interevent times. To relax the requirement of entire state measurements, an extended nonlinear observer is designed at each subsystem to recover the system internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.

  4. Catchment virtual observatory for sharing flow and transport models outputs: using residence time distribution to compare contrasting catchments

    NASA Astrophysics Data System (ADS)

    Thomas, Zahra; Rousseau-Gueutin, Pauline; Kolbe, Tamara; Abbott, Ben; Marcais, Jean; Peiffer, Stefan; Frei, Sven; Bishop, Kevin; Le Henaff, Geneviève; Squividant, Hervé; Pichelin, Pascal; Pinay, Gilles; de Dreuzy, Jean-Raynald

    2017-04-01

    The distribution of groundwater residence time in a catchment provides synoptic information about catchment functioning (e.g. nutrient retention and removal, hydrograph flashiness). In contrast with interpreted model results, which are often not directly comparable between studies, residence time distribution is a general output that could be used to compare catchment behaviors and test hypotheses about landscape controls on catchment functioning. To this end, we created a virtual observatory platform called Catchment Virtual Observatory for Sharing Flow and Transport Model Outputs (COnSOrT). The main goal of COnSOrT is to collect outputs from calibrated groundwater models from a wide range of environments. By comparing a wide variety of catchments from different climatic, topographic and hydrogeological contexts, we expect to enhance understanding of catchment connectivity, resilience to anthropogenic disturbance, and overall functioning. The web-based observatory will also provide software tools to analyze model outputs. The observatory will enable modelers to test their models in a wide range of catchment environments to evaluate the generality of their findings and robustness of their post-processing methods. Researchers with calibrated numerical models can benefit from the observatory by using the post-processing methods to implement a new approach to analyzing their data. Field scientists interested in contributing data could invite modelers associated with the observatory to test their models against observed catchment behavior. COnSOrT will allow meta-analyses with community contributions to generate new understanding and identify promising pathways for moving beyond single-catchment ecohydrology. Keywords: Residence time distribution, Models outputs, Catchment hydrology, Inter-catchment comparison

  5. Variability of daily UV index in Jokioinen, Finland, in 1995-2015

    NASA Astrophysics Data System (ADS)

    Heikkilä, A.; Uusitalo, K.; Kärhä, P.; Vaskuri, A.; Lakkala, K.; Koskela, T.

    2017-02-01

    UV Index is a measure of UV radiation harmful to human skin, developed and used to promote sun awareness and protection. Monitoring programs conducted around the world have produced a number of long-term time series of UV irradiance. One of the longest time series of solar spectral UV irradiance in Europe has been obtained from the continuous measurements of Brewer #107 spectrophotometer in Jokioinen (lat. 60°44'N, lon. 23°30'E), Finland, over the years 1995-2015. We have used descriptive statistics and estimates of cumulative distribution functions, quantiles and probability density functions in the analysis of the time series of daily UV Index maxima. Seasonal differences in the estimated distributions and in the trends of the estimated quantiles are found.
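    The empirical distribution function and quantile estimates used in such an analysis are straightforward to compute. A minimal sketch with toy data (the values below are illustrative, not Jokioinen measurements):

```python
import numpy as np

def ecdf(sample):
    """Empirical cumulative distribution function: sorted values x and the
    step estimate F(x) = rank / n."""
    x = np.sort(np.asarray(sample, dtype=float))
    return x, np.arange(1, len(x) + 1) / len(x)

daily_maxima = np.array([3.1, 4.7, 2.0, 5.5, 4.7])   # toy daily UV Index maxima
x, f = ecdf(daily_maxima)
q90 = np.quantile(daily_maxima, 0.9)                 # a high quantile of the kind trended
```

Trends in such quantiles (rather than in the mean alone) are what reveal the seasonal differences reported in the abstract.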

  6. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrie, Michael; Shadwick, B. A.

    2016-01-04

    Here, we present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for the relativistic Landau damping for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently from the shape of the distribution function, the relativistic treatment features collective behaviors that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.

  7. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrié, Michael, E-mail: mcarrie2@unl.edu; Shadwick, B. A., E-mail: shadwick@mailaps.org

    2016-01-15

    We present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for the relativistic Landau damping for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently from the shape of the distribution function, the relativistic treatment features collective behaviours that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.

  8. Network discovery with DCM

    PubMed Central

    Friston, Karl J.; Li, Baojuan; Daunizeau, Jean; Stephan, Klaas E.

    2011-01-01

    This paper is about inferring or discovering the functional architecture of distributed systems using Dynamic Causal Modelling (DCM). We describe a scheme that recovers the (dynamic) Bayesian dependency graph (connections in a network) using observed network activity. This network discovery uses Bayesian model selection to identify the sparsity structure (absence of edges or connections) in a graph that best explains observed time-series. The implicit adjacency matrix specifies the form of the network (e.g., cyclic or acyclic) and its graph-theoretical attributes (e.g., degree distribution). The scheme is illustrated using functional magnetic resonance imaging (fMRI) time series to discover functional brain networks. Crucially, it can be applied to experimentally evoked responses (activation studies) or endogenous activity in task-free (resting state) fMRI studies. Unlike conventional approaches to network discovery, DCM permits the analysis of directed and cyclic graphs. Furthermore, it eschews (implausible) Markovian assumptions about the serial independence of random fluctuations. The scheme furnishes a network description of distributed activity in the brain that is optimal in the sense of having the greatest conditional probability, relative to other networks. The networks are characterised in terms of their connectivity or adjacency matrices and conditional distributions over the directed (and reciprocal) effective connectivity between connected nodes or regions. We envisage that this approach will provide a useful complement to current analyses of functional connectivity for both activation and resting-state studies. PMID:21182971

  9. Distributed traffic signal control using fuzzy logic

    NASA Technical Reports Server (NTRS)

    Chiu, Stephen

    1992-01-01

    We present a distributed approach to traffic signal control, where the signal timing parameters at a given intersection are adjusted as functions of the local traffic condition and of the signal timing parameters at adjacent intersections. Thus, the signal timing parameters evolve dynamically using only local information to improve traffic flow. This distributed approach provides for a fault-tolerant, highly responsive traffic management system. The signal timing at an intersection is defined by three parameters: cycle time, phase split, and offset. We use fuzzy decision rules to adjust these three parameters based only on local information. The amount of change in the timing parameters during each cycle is limited to a small fraction of the current parameters to ensure smooth transition. We show the effectiveness of this method through simulation of the traffic flow in a network of controlled intersections.

  10. Total Water-Vapor Distribution in the Summer Cloudless Atmosphere over the South of Western Siberia

    NASA Astrophysics Data System (ADS)

    Troshkin, D. N.; Bezuglova, N. N.; Kabanov, M. V.; Pavlov, V. E.; Sokolov, K. I.; Sukovatov, K. Yu.

    2017-12-01

    The spatial distribution of the total water vapor in different climatic zones of the south of Western Siberia in summer of 2008-2011 is studied on the basis of Envisat data. The correlation analysis of the water-vapor time series from the Envisat data W and radiosonde observations w for the territory of Omsk aerological station shows that the absolute values of W and w are linearly correlated with a coefficient of 0.77 (significance level p < 0.05). The distribution functions of the total water vapor are calculated based on the number of its measurements by Envisat for a cloudless sky of three zones with different physical properties of the underlying surface, in particular, steppes to the south of the Vasyugan Swamp and forests to the northeast of the Swamp. The distribution functions are bimodal; each mode follows the lognormal law. The parameters of these functions are given.
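    A bimodal distribution in which each mode follows the lognormal law is simply a two-component lognormal mixture. A minimal sketch (parameter values and function names are ours, not the fitted values from the paper):

```python
import math

def lognormal_pdf(w, mu, sigma):
    """Probability density of a lognormal law at w > 0."""
    return math.exp(-(math.log(w) - mu) ** 2 / (2.0 * sigma ** 2)) \
        / (w * sigma * math.sqrt(2.0 * math.pi))

def bimodal_pdf(w, mode1, mode2, weight=0.5):
    """Two-mode mixture density in which each mode is lognormal, the form
    reported for the total-water-vapor distributions."""
    return weight * lognormal_pdf(w, *mode1) + (1.0 - weight) * lognormal_pdf(w, *mode2)
```

Fitting such a mixture to the Envisat histograms yields the per-mode (mu, sigma) parameters the abstract refers to.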

  11. The influence of non-Gaussian distribution functions on the time-dependent perpendicular transport of energetic particles

    NASA Astrophysics Data System (ADS)

    Lasuik, J.; Shalchi, A.

    2018-06-01

    In the current paper we explore the influence of the assumed particle statistics on the transport of energetic particles across a mean magnetic field. In previous work the assumption of a Gaussian distribution function was standard, although there have been known cases for which the transport is non-Gaussian. In the present work we combine a kappa distribution with the ordinary differential equation provided by the so-called unified non-linear transport theory. We then compute running perpendicular diffusion coefficients for different values of κ and turbulence configurations. We show that changing the parameter κ slightly increases or decreases the perpendicular diffusion coefficient depending on the considered turbulence configuration. Since these changes are small, we conclude that the assumed statistics are less significant in particle transport theory. The results obtained in the current paper support the use of a Gaussian distribution function, as is usually done in particle transport theory.
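    The key property of the kappa distribution is that it has power-law tails for finite κ and reduces to a Gaussian as κ → ∞. A minimal sketch of one common (unnormalized) convention — conventions for the kappa distribution vary, so treat this as illustrative:

```python
import math

def kappa_shape(v, theta, kappa):
    """Unnormalized kappa-distribution shape in one common convention;
    tends to the Gaussian exp(-v**2 / theta**2) as kappa -> infinity."""
    return (1.0 + v ** 2 / (kappa * theta ** 2)) ** (-kappa)
```

For large κ this is numerically indistinguishable from the Gaussian, consistent with the paper's finding that the choice of statistics matters little.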

  12. Using StorAge Selection Functions to Improve Simulation of Groundwater Nitrate Lag Times in the SWAT Modeling Framework.

    NASA Astrophysics Data System (ADS)

    Wilusz, D. C.; Fuka, D.; Cho, C.; Ball, W. P.; Easton, Z. M.; Harman, C. J.

    2017-12-01

    Intensive agriculture and atmospheric deposition have dramatically increased the input of reactive nitrogen into many watersheds worldwide. Reactive nitrogen can leach as nitrate into groundwater, which is stored and eventually released over years to decades into surface waters, potentially degrading water quality. To simulate the fate and transport of groundwater nitrate, many researchers and practitioners use the Soil and Water Assessment Tool (SWAT) or an enhanced version of SWAT that accounts for topographically-driven variable source areas (TopoSWAT). Both SWAT and TopoSWAT effectively assume that nitrate in the groundwater reservoir is well-mixed, which is known to be a poor assumption at many sites. In this study, we describe modifications to TopoSWAT that (1) relax the assumption of groundwater well-mixedness, (2) more flexibly parameterize groundwater transport as a time-varying distribution of travel times using the recently developed theory of rank StorAge Selection (rSAS) functions, and (3) allow for groundwater age to be represented by position on the hillslope or hydrological distance from the stream. The approach conceptualizes the groundwater aquifer as a population of water parcels entering as recharge with a particular nitrate concentration, aging as they move through storage, and eventually exiting as baseflow. The rSAS function selects the distribution of parcel ages that exit as baseflow based on a parameterized probability distribution; this distribution can be adjusted to preferentially select different distributions of young and old parcels in storage so as to reproduce (in principle) any form of transport. The modified TopoSWAT model (TopoSWAT+rSAS) is tested at a small agricultural catchment in the Eastern Shore, MD with an extensive hydrologic and hydrochemical data record for calibration and evaluation. 
The results examine (1) the sensitivity of TopoSWAT+rSAS modeling of nitrate transport to assumptions about the distribution of travel times of the groundwater aquifer, (2) which travel times are most likely at our study site based on available data, and (3) how TopoSWAT+rSAS performs and can be applied to other catchments.
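    The rSAS selection step described above can be sketched in discrete form: the outflow age distribution is the storage age distribution reweighted by the SAS density evaluated at each parcel's cumulative (age-ranked) storage. This is a conceptual sketch under our own discretization, not the TopoSWAT+rSAS code:

```python
import numpy as np

def outflow_age_distribution(storage_age_pdf, omega_prime):
    """One discrete rSAS step: weight each age bin of the storage age
    distribution by the SAS density omega'(P), where P is the bin's
    cumulative rank in age-ordered storage, then renormalize."""
    p = np.cumsum(storage_age_pdf)
    weights = omega_prime(p - 0.5 * storage_age_pdf)   # midpoint rank per bin
    out = storage_age_pdf * weights
    return out / out.sum()

storage = np.array([0.1, 0.2, 0.3, 0.4])               # age bins, young to old
# Uniform SAS function: outflow samples storage without preference (well-mixed).
well_mixed = outflow_age_distribution(storage, lambda p: np.ones_like(p))
# Young-water preference: SAS density declining with rank.
young_biased = outflow_age_distribution(storage, lambda p: 2.0 * (1.0 - p))
```

The uniform case recovers the well-mixed assumption SWAT effectively makes; any other omega' relaxes it, which is precisely modification (2) in the abstract.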

  13. Computed versus measured ion velocity distribution functions in a Hall effect thruster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrigues, L.; CNRS, LAPLACE, F-31062 Toulouse; Mazouffre, S.

    2012-06-01

    We compare time-averaged and time-varying measured and computed ion velocity distribution functions in a Hall effect thruster for typical operating conditions. The ion properties are measured by means of laser induced fluorescence spectroscopy. Simulations of the plasma properties are performed with a two-dimensional hybrid model. In the electron fluid description of the hybrid model, the anomalous transport responsible for the electron diffusion across the magnetic field barrier is deduced from the experimental profile of the time-averaged electric field. The use of a steady state anomalous mobility profile allows the hybrid model to capture some properties like the time-averaged ion meanmore » velocity. Yet, the model fails at reproducing the time evolution of the ion velocity. This fact reveals a complex underlying physics that necessitates to account for the electron dynamics over a short time-scale. This study also shows the necessity for electron temperature measurements. Moreover, the strength of the self-magnetic field due to the rotating Hall current is found negligible.« less

  14. Priority queues with bursty arrivals of incoming tasks

    NASA Astrophysics Data System (ADS)

    Masuda, N.; Kim, J. S.; Kahng, B.

    2009-03-01

    Recently increased accessibility of large-scale digital records enables one to monitor human activities such as the interevent time distributions between two consecutive visits to a web portal by a single user, two consecutive emails sent out by a user, two consecutive library loans made by a single individual, etc. Interestingly, those distributions exhibit a universal behavior, D(τ) ~ τ^(-δ), where τ is the interevent time, and δ ≃ 1 or 3/2. The universal behaviors have been modeled via the waiting-time distribution of a task in a queue operating based on priority; the waiting time follows a power-law distribution P_w(τ) ~ τ^(-α) with either α = 1 or 3/2 depending on the details of the queuing dynamics. In these models, the number of incoming tasks in a unit time interval has been assumed to follow a Poisson-type distribution. For an email system, however, we measured the number of emails delivered to a mailbox in a unit time and found that it follows a power-law distribution with general exponent γ. For this case, we obtain analytically the exponent α, which is not necessarily 1 or 3/2 and takes nonuniversal values depending on γ. We develop the generating function formalism to obtain the exponent α, which is distinct from the continuous time approximation used in the previous studies.
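    Power-law interevent times of the form D(τ) ~ τ^(-δ) can be generated by inverse-transform sampling, which is useful for simulating such bursty arrival processes. A minimal sketch (names are ours):

```python
import numpy as np

def sample_interevent_times(delta, tau_min, size, rng):
    """Inverse-transform samples from D(tau) ~ tau**(-delta) for tau >= tau_min
    (requires delta > 1).  The survival function is (tau/tau_min)**(1 - delta),
    so tau = tau_min * u**(-1 / (delta - 1)) for u uniform on (0, 1)."""
    u = rng.random(size)
    return tau_min * u ** (-1.0 / (delta - 1.0))

rng = np.random.default_rng(42)
taus = sample_interevent_times(1.5, 1.0, 100_000, rng)   # the delta = 3/2 case
```

For δ = 3/2 the theoretical median is tau_min * 0.5^(-2) = 4 tau_min, a quick check that the heavy tail is being reproduced.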

  15. Method and system using power modulation and velocity modulation producing sputtered thin films with sub-angstrom thickness uniformity or custom thickness gradients

    DOEpatents

    Montcalm, Claude [Livermore, CA; Folta, James Allen [Livermore, CA; Walton, Christopher Charles [Berkeley, CA

    2003-12-23

    A method and system for determining a source flux modulation recipe for achieving a selected thickness profile of a film to be deposited (e.g., with highly uniform or highly accurate custom graded thickness) over a flat or curved substrate (such as concave or convex optics) by exposing the substrate to a vapor deposition source operated with time-varying flux distribution as a function of time. Preferably, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. Preferably, the method includes the steps of measuring the source flux distribution (using a test piece held stationary while exposed to the source with the source operated at each of a number of different applied power levels), calculating a set of predicted film thickness profiles, each film thickness profile assuming the measured flux distribution and a different one of a set of source flux modulation recipes, and determining from the predicted film thickness profiles a source flux modulation recipe which is adequate to achieve a predetermined thickness profile. Aspects of the invention include a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal source flux modulation recipe to achieve a desired thickness profile on a substrate. The method enables precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.
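    The final selection step in the patented method — comparing predicted thickness profiles against a target and choosing the best recipe — can be sketched as a simple minimization. This is an illustrative reduction of the idea, not the patented implementation:

```python
import numpy as np

def best_flux_recipe(predicted_profiles, target_profile):
    """Return the index of the source-flux modulation recipe whose predicted
    film-thickness profile deviates least (RMS) from the target profile."""
    errors = [np.sqrt(np.mean((p - target_profile) ** 2))
              for p in predicted_profiles]
    return int(np.argmin(errors))

target = np.ones(3)                                  # e.g. perfectly uniform film
candidates = [np.array([0.8, 1.0, 1.2]),             # predicted profiles, one
              np.ones(3),                            # per candidate recipe
              np.array([1.1, 1.1, 1.1])]
choice = best_flux_recipe(candidates, target)
```

In the patent the candidate profiles are computed from the measured flux distribution at each applied power level; only the argmin-style selection is sketched here.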

  16. Data quantile-quantile plots: quantifying the time evolution of space climatology

    NASA Astrophysics Data System (ADS)

    Tindale, Elizabeth; Chapman, Sandra

    2017-04-01

    The solar wind is inherently variable across a wide range of spatio-temporal scales; embedded in the flow are the signatures of distinct non-linear physical processes from evolving turbulence to the dynamical solar corona. In-situ satellite observations of solar wind magnetic field and velocity are at minute and sub-minute time resolution and now extend over several solar cycles. Each solar cycle is unique, and the space climatology challenge is to quantify how solar wind variability changes within, and across, each distinct solar cycle, and how this in turn drives space weather at Earth. We will demonstrate a novel statistical method, that of data-data quantile-quantile (DQQ) plots, which quantifies how the underlying statistical distribution of a given observable is changing in time. Importantly this method does not require any assumptions concerning the underlying functional form of the distribution and can identify multi-component behaviour that is changing in time. This can be used to determine when a sub-range of a given observable is undergoing a change in statistical distribution, or where the moments of the distribution only are changing and the functional form of the underlying distribution is not changing in time. The method is quite general; for this application we use data from the WIND satellite to compare the solar wind across the minima and maxima of solar cycles 23 and 24 [1], and how these changes are manifest in parameters that quantify coupling to the Earth's magnetosphere. [1] Tindale, E., and S.C. Chapman (2016), Geophys. Res. Lett., 43(11), doi: 10.1002/2016GL068920.
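    The core of a data-data QQ plot is pairing matched quantiles of two samples, with no assumption about the underlying distribution's functional form. A minimal sketch (the function name and quantile grid are our choices):

```python
import numpy as np

def dqq_points(sample_a, sample_b, n_q=99):
    """Data-data quantile-quantile points: the q-th quantile of one sample
    paired with the q-th quantile of the other.  Points on the line y = x
    indicate the samples share an underlying distribution; a changing offset
    or slope localizes which sub-range of the observable has changed."""
    probs = np.linspace(0.01, 0.99, n_q)
    return np.quantile(sample_a, probs), np.quantile(sample_b, probs)

rng = np.random.default_rng(3)
baseline = rng.normal(size=1000)
qa, qb = dqq_points(baseline, baseline + 2.0)   # a pure shift in distribution
```

A pure location shift appears as a constant offset in the DQQ points, whereas a change in functional form bends the curve only over the affected quantile range.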

  17. Cognitive load in distributed and massed practice in virtual reality mastoidectomy simulation.

    PubMed

    Andersen, Steven Arild Wuyts; Mikkelsen, Peter Trier; Konge, Lars; Cayé-Thomasen, Per; Sørensen, Mads Sølvsten

    2016-02-01

    Cognitive load theory states that working memory is limited. This has implications for learning and suggests that reducing cognitive load (CL) could promote learning and skills acquisition. This study aims to explore the effect of repeated practice and simulator-integrated tutoring on CL in virtual reality (VR) mastoidectomy simulation. Prospective trial. Forty novice medical students performed 12 repeated virtual mastoidectomy procedures in the Visible Ear Simulator: 21 completed distributed practice with practice blocks spaced in time and 19 participants completed massed practice (all practices performed in 1 day). Participants were randomized for tutoring with the simulator-integrated tutor function. Cognitive load was estimated by measuring reaction time in a secondary task. Data were analyzed using linear mixed models for repeated measurements. The mean reaction time increased by 37% during the procedure compared with baseline, demonstrating that the procedure placed substantial cognitive demands. Repeated practice significantly lowered CL in the distributed practice group but not in massed practice group. In addition, CL was found to be further increased by 10.3% in the later and more complex stages of the procedure. The simulator-integrated tutor function did not have an impact on CL. Distributed practice decreased CL in repeated VR mastoidectomy training more consistently than was seen in massed practice. This suggests a possible effect of skills and memory consolidation occurring over time. To optimize technical skills learning, training should be organized as time-distributed practice rather than as a massed block of practice, which is common in skills-training courses. N/A. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  18. The distribution of first-passage times and durations in FOREX and future markets

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya; Inoue, Jun-ichi; Scalas, Enrico

    2009-07-01

    Possible distributions are discussed for intertrade durations and first-passage processes in financial markets. The viewpoint of renewal theory is adopted. In order to represent market data with relatively long durations, two types of distributions are used, namely a distribution derived from the Mittag-Leffler survival function and the Weibull distribution. For the Mittag-Leffler type distribution, the average waiting time (residual life time) is strongly dependent on the choice of a cut-off parameter t_max, whereas the results based on the Weibull distribution do not depend on such a cut-off. Therefore, a Weibull distribution is more convenient than a Mittag-Leffler type if one wishes to evaluate relevant statistics such as average waiting time in financial markets with long durations. On the other hand, we find that the Gini index is rather independent of the cut-off parameter. Based on the above considerations, we propose a good candidate for describing the distribution of first-passage time in a market: The Weibull distribution with a power-law tail. This distribution closes the gap between theoretical and empirical results more efficiently than a simple Weibull distribution. It should be stressed that a Weibull distribution with a power-law tail is more flexible than the Mittag-Leffler distribution, which itself can be approximated by a Weibull distribution and a power-law. Indeed, the key point is that in the former case there is freedom of choice for the exponent of the power-law attached to the Weibull distribution, which can exceed 1 in order to reproduce decays faster than possible with a Mittag-Leffler distribution. We also give a useful formula to determine an optimal crossover point minimizing the difference between the empirical average waiting time and the one predicted from renewal theory.
Moreover, we discuss the limitation of our distributions by applying our distribution to the analysis of the BTP future and calculating the average waiting time. We find that our distribution is applicable as long as durations follow a Weibull law for short times and do not have too heavy a tail.
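    The proposed form can be sketched numerically. Below is a minimal illustration (with made-up parameter values, not the paper's fitted ones) of a survival function that follows a Weibull law up to a crossover time and a power-law tail beyond it, with the tail amplitude matched for continuity:

```python
import numpy as np

def weibull_powerlaw_survival(t, tau, beta, alpha, t_c):
    """Survival function: Weibull body for t < t_c, power-law tail beyond.

    The tail amplitude is chosen so the function is continuous at the
    crossover time t_c. All parameter values used here are illustrative.
    """
    t = np.asarray(t, dtype=float)
    weibull = np.exp(-(t / tau) ** beta)
    amp = np.exp(-(t_c / tau) ** beta) * t_c ** alpha  # continuity at t_c
    t_safe = np.where(t > 0, t, 1.0)  # avoid 0**(-alpha) in unused branch
    tail = amp * t_safe ** (-alpha)
    return np.where(t < t_c, weibull, tail)

# Survival starts at 1 and decays monotonically across the crossover.
s = weibull_powerlaw_survival([0.0, 1.0, 10.0, 100.0],
                              tau=2.0, beta=0.8, alpha=1.5, t_c=10.0)
```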

  19. Characterization of calcium and zinc spatial distributions at the fibrocartilage zone of bone-tendon junction by synchrotron radiation-based micro X-ray fluorescence analysis combined with backscattered electron imaging

    NASA Astrophysics Data System (ADS)

    Lu, Hongbin; Chen, Can; Wang, Zhanwen; Qu, Jin; Xu, Daqi; Wu, Tianding; Cao, Yong; Zhou, Jingyong; Zheng, Cheng; Hu, Jianzhong

    2015-09-01

    Tendon attaches to bone through a functionally graded fibrocartilage zone, including uncalcified fibrocartilage (UF), tidemark (TM) and calcified fibrocartilage (CF). This transition zone plays a pivotal role in relaxing load transfer between tendon and bone, and serves as a boundary between otherwise structurally and functionally distinct tissue types. Calcium and zinc are believed to play important roles in the normal growth, mineralization, and repair of the fibrocartilage zone of the bone-tendon junction (BTJ). However, the spatial distributions of calcium and zinc at the fibrocartilage zone of the BTJ and their distribution-function relationship are not fully understood. Thus, synchrotron radiation-based micro X-ray fluorescence analysis (SR-μXRF) in combination with backscattered electron imaging (BEI) was employed to characterize the distributions of calcium and zinc at the fibrocartilage zone of the rabbit patella-patellar tendon complex (PPTC). For the first time, the unique distributions of calcium and zinc at the fibrocartilage zone of the PPTC were clearly mapped by this method. The distributions of calcium and zinc at the fibrocartilage zone of the PPTC were inhomogeneous. A significant accumulation of zinc was exhibited in the transition region between UF and CF. The highest zinc content (3.17 times that of the patellar tendon) was found in the TM of the fibrocartilage zone. The calcium content began to increase near the TM and increased exponentially across the calcified fibrocartilage region towards the patella. The highest calcium content (43.14 times that of the patellar tendon) was found in the transition zone between the calcified fibrocartilage region and the patella, approximately 69 μm from the location with the highest zinc content. This study indicated, for the first time, that there is a differential distribution of calcium and zinc at the fibrocartilage zone of the PPTC. These observations reveal new insights into region-dependent changes across the fibrocartilage zone of the BTJ and will serve as critical benchmark parameters for current efforts in BTJ repair.

  20. 2-dimensional ion velocity distributions measured by laser-induced fluorescence above a radio-frequency biased silicon wafer

    NASA Astrophysics Data System (ADS)

    Moore, Nathaniel B.; Gekelman, Walter; Pribyl, Patrick; Zhang, Yiting; Kushner, Mark J.

    2013-08-01

    The dynamics of ions traversing sheaths in low temperature plasmas are important to the formation of the ion energy distribution incident onto surfaces during microelectronics fabrication. Ion dynamics have been measured using laser-induced fluorescence (LIF) in the sheath above a 30 cm diameter, 2.2 MHz-biased silicon wafer in a commercial inductively coupled plasma processing reactor. The velocity distribution of argon ions was measured at thousands of positions above and radially along the surface of the wafer by utilizing a planar laser sheet from a pulsed, tunable dye laser. Velocities were measured both parallel and perpendicular to the wafer over an energy range of 0.4-600 eV. The resulting fluorescence was recorded using a fast CCD camera, which provided resolution of 0.4 mm in space and 30 ns in time. Data were taken at eight different phases during the 2.2 MHz cycle. The ion velocity distributions (IVDs) in the sheath were found to be spatially non-uniform near the edge of the wafer and phase-dependent as a function of height. Several cm above the wafer the IVD is Maxwellian and independent of phase. Experimental results were compared with simulations. The experimental time-averaged ion energy distribution function as a function of height compares favorably with results from the computer model.

  1. Time-dependent landslide probability mapping

    USGS Publications Warehouse

    Campbell, Russell H.; Bernknopf, Richard L.; ,

    1993-01-01

    Case studies where time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.

  2. Burst wait time simulation of CALIBAN reactor at delayed super-critical state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.; Authier, N.; Richard, B.

    2012-07-01

    In the past, the super prompt critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time dependent evolution of the full neutron count number probability distribution. In this paper we present the point-model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time dependent adjoint Kolmogorov master equations for the number of detections using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and Monte-Carlo calculations based on the algorithm presented in [7]. (authors)

  3. Requirements analysis for a hardware, discrete-event, simulation engine accelerator

    NASA Astrophysics Data System (ADS)

    Taylor, Paul J., Jr.

    1991-12-01

    An analysis of a general Discrete Event Simulation (DES), executing on the distributed architecture of an eight-node Intel iPSC/2 hypercube, was performed. The most time-consuming portions of the general DES algorithm were determined to be the functions associated with message passing of required simulation data between processing nodes of the hypercube architecture. A behavioral description, using the IEEE standard VHSIC Hardware Description and Design Language (VHDL), for a general DES hardware accelerator is presented. The behavioral description specifies the operational requirements for a DES coprocessor to augment the hypercube's execution of DES simulations. The DES coprocessor design implements the functions necessary to perform distributed discrete event simulations using a conservative time synchronization protocol.

  4. Serial and parallel attentive visual searches: evidence from cumulative distribution functions of response times.

    PubMed

    Sung, Kyongje

    2008-12-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.
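    The diagnostic used here, comparing cumulative distribution functions of response times across set sizes, can be sketched as follows. The data below are simulated lognormal response times, purely illustrative, not the study's data:

```python
import numpy as np

def empirical_cdf(rts):
    """Sorted response times and their empirical CDF values."""
    x = np.sort(np.asarray(rts, dtype=float))
    return x, np.arange(1, len(x) + 1) / len(x)

# Simulated RTs (ms) for two set sizes -- illustrative only.
rng = np.random.default_rng(0)
rt_set4 = rng.lognormal(mean=6.0, sigma=0.2, size=500)
rt_set12 = rng.lognormal(mean=6.2, sigma=0.2, size=500)
x4, p4 = empirical_cdf(rt_set4)
x12, p12 = empirical_cdf(rt_set12)
# Serial models predict roughly uniform shifts between such CDFs as set
# size grows; parallel models predict a different pattern of change.
```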

  5. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series

    PubMed Central

    Fransson, Peter

    2016-01-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box–Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed. PMID:27784176

  6. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    PubMed

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
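    A minimal sketch of the combined strategy, a Fisher transformation followed by a Box-Cox transformation of a sliding-window connectivity series. The window length, the positivity shift, and the use of `scipy.stats.boxcox` are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np
from scipy import stats

def fisher_then_boxcox(ts_a, ts_b, window):
    """Sliding-window correlation series, Fisher z-transformed and then
    Box-Cox transformed.

    Box-Cox requires positive input, so the Fisher-z series is shifted
    to be strictly positive first; this shift is one simple convention,
    not necessarily the procedure used in the article.
    """
    r = np.array([np.corrcoef(ts_a[i:i + window], ts_b[i:i + window])[0, 1]
                  for i in range(len(ts_a) - window + 1)])
    z = np.arctanh(r)                # Fisher transformation
    shifted = z - z.min() + 1e-6     # Box-Cox needs positive values
    transformed, lam = stats.boxcox(shifted)
    return transformed, lam
```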

  7. Scaling and universality in heart rate variability distributions

    NASA Technical Reports Server (NTRS)

    Rosenblum, M. G.; Peng, C. K.; Mietus, J. E.; Havlin, S.; Stanley, H. E.; Goldberger, A. L.

    1998-01-01

    We find that a universal homogeneous scaling form describes the distribution of cardiac variations for a group of healthy subjects, which is stable over a wide range of time scales. However, a similar scaling function does not exist for a group with a common cardiopulmonary instability associated with sleep apnea. Subtle differences in the distributions for the day- and night-phase dynamics for healthy subjects are detected.

  8. Scaling and universality in heart rate variability distributions

    NASA Astrophysics Data System (ADS)

    Ivanov, P. Ch; Rosenblum, M. G.; Peng, C.-K.; Mietus, J. E.; Havlin, S.; Stanley, H. E.; Goldberger, A. L.

    We find that a universal homogeneous scaling form describes the distributions of cardiac variations for a group of healthy subjects, which is stable over a wide range of time scales. However, a similar scaling function does not exist for a group with a common cardiopulmonary instability associated with sleep apnea. Subtle differences in the distributions for the day- and night-phase dynamics for healthy subjects are detected.

  9. Results of the Verification of the Statistical Distribution Model of Microseismicity Emission Characteristics

    NASA Astrophysics Data System (ADS)

    Cianciara, Aleksander

    2016-09-01

    The paper presents the results of research aimed at verifying the hypothesis that the Weibull distribution is an appropriate statistical model of microseismicity emission characteristics, namely the energy of phenomena and the inter-event time. It is understood that the emission under consideration is induced by natural rock mass fracturing. Because the recorded emission contains noise, it is subjected to appropriate filtering. The study has been conducted using the method of statistical verification of the null hypothesis that the Weibull distribution fits the empirical cumulative distribution function. As the model describing the cumulative distribution function is given in an analytical form, its verification may be performed using the Kolmogorov-Smirnov goodness-of-fit test. Interpretation by means of probabilistic methods requires specifying a correct model for the statistical distribution of the data, because these methods use not the measurement data directly but their statistical distributions, for example in the method based on hazard analysis, or in the one that uses maximum-value statistics.
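    Because the Weibull cumulative distribution function is available in closed form, the goodness-of-fit check described here can be sketched with a standard Kolmogorov-Smirnov test. The data below are synthetic, standing in for filtered microseismic records; note also that estimating the parameters from the same sample makes the nominal p-value optimistic:

```python
import numpy as np
from scipy import stats

# Synthetic inter-event times standing in for filtered microseismic data.
rng = np.random.default_rng(42)
inter_event_times = stats.weibull_min.rvs(c=0.9, scale=120.0, size=400,
                                          random_state=rng)

# Fit a two-parameter Weibull (location fixed at zero) and test the fit.
shape, loc, scale = stats.weibull_min.fit(inter_event_times, floc=0)
ks_stat, p_value = stats.kstest(inter_event_times, 'weibull_min',
                                args=(shape, loc, scale))
```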

  10. Three-dimensional analytical model for the spatial variation of the foreshock electron distribution function - Systematics and comparisons with ISEE observations

    NASA Technical Reports Server (NTRS)

    Fitzenreiter, R. J.; Scudder, J. D.; Klimas, A. J.

    1990-01-01

    A model which is consistent with the solar wind and shock surface boundary conditions for the foreshock electron distribution in the absence of wave-particle effects is formulated for an arbitrary location behind the magnetic tangent to the earth's bow shock. Variations of the gyrophase-averaged velocity distribution are compared and contrasted with in situ ISEE observations. It is found that magnetic mirroring of solar wind electrons is the most important process by which nonmonotonic reduced electron distributions in the foreshock are produced. Leakage of particles from the magnetosheath is shown to be relatively unimportant in determining reduced distributions that are nonmonotonic. The two-dimensional distribution function off the magnetic field direction is the crucial contribution in producing reduced distributions which have beams. The time scale for modification of the electron velocity distribution in velocity space can be significantly influenced by steady state spatial gradients in the background imposed by the curved shock geometry.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero mean non-Gaussian random time histories with a specified ASD and pdf. First, a Gaussian time history with the specified auto (or power) spectral density (ASD) is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established based on the Cumulative Distribution Function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform to a realization of the desired waveform. Since the transformation preserves the zero-crossings and peaks of the original Gaussian waveform, and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian distributed waveform with a known ASD. The method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
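    The transformation step can be sketched as follows, assuming a one-sided PSD shape and a target inverse CDF (`target_ppf`). The function names and the standardization step are illustrative, not Smallwood's exact formulation:

```python
import numpy as np
from scipy import stats

def non_gaussian_with_psd(psd, n, target_ppf, rng):
    """Sketch of the CDF-mapping method described above.

    1. Synthesize a zero-mean Gaussian time history whose spectrum is
       shaped by `psd` (amplitudes from the PSD, random phases).
    2. Map each sample through target_ppf(gaussian_cdf(x)), a monotonic
       transform that imposes the desired marginal distribution while
       preserving zero crossings and peak ordering.
    `psd` is a one-sided spectral shape of length n // 2 + 1.
    """
    amp = np.sqrt(psd)
    phases = rng.uniform(0, 2 * np.pi, len(amp))
    spectrum = amp * np.exp(1j * phases)
    x = np.fft.irfft(spectrum, n)
    x = (x - x.mean()) / x.std()   # standardized Gaussian history
    u = stats.norm.cdf(x)          # Gaussian CDF gives uniform values
    return target_ppf(u)           # impose the desired marginal
```

    For example, passing `stats.t(df=4).ppf` as `target_ppf` produces a heavy-tailed history with the prescribed spectral shape.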

  12. Lévy flights in the presence of a point sink of finite strength

    NASA Astrophysics Data System (ADS)

    Janakiraman, Deepika

    2017-01-01

    In this paper, the absorption of a particle undergoing Lévy flight in the presence of a point sink of arbitrary strength and position is studied. The motion of such a particle is given by a modified Fokker-Planck equation whose exact solution in the Laplace domain can be described in terms of the Laplace transform of the unperturbed (sink-free) Green's function. This solution for the Green's function is a well-studied, generic result which applies to both fractional and usual Fokker-Planck equations alike. Using this result, the propagator and the absorption-time distribution are obtained for free Lévy flight and Lévy flight in linear and harmonic potentials in the presence of a delta function sink, and their dependence on the sink strength is analyzed. Analytical results are presented for the long-time behavior of the absorption-time distribution in all three above-mentioned potentials. Simulation results are found to agree closely with the analytical results.

  13. Variability of the occurrence frequency of solar flares as a function of peak hard X-ray rate

    NASA Technical Reports Server (NTRS)

    Bai, T.

    1993-01-01

    We study the occurrence frequency of solar flares as a function of the hard X-ray peak count rate, using observations of the Solar Maximum Mission. The size distributions are well represented by power-law distributions with negative indices. As a better alternative to the conventional method, we devise a maximum likelihood method of determining the power-law index of the size distribution. We find that the power-law index of the size distribution changes with time and with the phase of the 154-day periodicity. The size distribution is steeper during the maximum years of solar cycle 21 (1980 and 1981) than during the declining phase (1982-1984). The size distribution, however, is flatter during the maximum phase of the 154-day periodicity than during the minimum phase. The implications of these findings are discussed.
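    A standard continuous maximum likelihood estimator for a power-law size distribution (a Hill-type estimator; the paper's own estimator may differ in detail) can be sketched as:

```python
import numpy as np

def powerlaw_index_mle(sizes, s_min):
    """Maximum likelihood estimate of the index gamma for a size
    distribution p(S) proportional to S**(-gamma) for S >= s_min.

    This is the continuous Hill-type estimator, a standard alternative
    to binned least-squares fits of the size distribution.
    """
    s = np.asarray(sizes, dtype=float)
    s = s[s >= s_min]
    return 1.0 + len(s) / np.sum(np.log(s / s_min))
```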

  14. Investigation of contact pressure and influence function model for soft wheel polishing.

    PubMed

    Rao, Zhimin; Guo, Bing; Zhao, Qingliang

    2015-09-20

    The tool influence function (TIF) is critical for calculating the dwell-time map to improve form accuracy. We present the TIF for the process of computer-controlled polishing with a soft polishing wheel. In this paper, the static TIF was developed based on the Preston equation. The pressure distribution was verified against the measured removal spot section profiles. According to the experimental measurements, the pressure distribution simulated by Hertz contact theory was much larger than the real contact pressure. The pressure distribution simulated with the Winkler elastic foundation model for a soft polishing wheel matched the real contact pressure. A series of experiments was conducted to obtain the removal spot statistical properties for validating the relationship between material removal and processing time, contact pressure, and relative velocity, along with calculating the fitted parameters to establish the TIF. The developed TIF predicted the removal characteristics of the studied soft-wheel polishing process.

  15. A seismological model for earthquakes induced by fluid extraction from a subsurface reservoir

    NASA Astrophysics Data System (ADS)

    Bourne, S. J.; Oates, S. J.; van Elk, J.; Doornhof, D.

    2014-12-01

    A seismological model is developed for earthquakes induced by subsurface reservoir volume changes. The approach is based on the work of Kostrov () and McGarr () linking total strain to the summed seismic moment in an earthquake catalog. We refer to the fraction of the total strain expressed as seismic moment as the strain partitioning function, α. A probability distribution for total seismic moment as a function of time is derived from an evolving earthquake catalog. The moment distribution is taken to be a Pareto Sum Distribution with confidence bounds estimated using approximations given by Zaliapin et al. (). In this way available seismic moment is expressed in terms of reservoir volume change and hence compaction in the case of a depleting reservoir. The Pareto Sum Distribution for moment and the Pareto Distribution underpinning the Gutenberg-Richter Law are sampled using Monte Carlo methods to simulate synthetic earthquake catalogs for subsequent estimation of seismic ground motion hazard. We demonstrate the method by applying it to the Groningen gas field. A compaction model for the field calibrated using various geodetic data allows reservoir strain due to gas extraction to be expressed as a function of both spatial position and time since the start of production. Fitting with a generalized logistic function gives an empirical expression for the dependence of α on reservoir compaction. Probability density maps for earthquake event locations can then be calculated from the compaction maps. Predicted seismic moment is shown to be strongly dependent on planned gas production.
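    The Monte Carlo sampling step can be sketched by drawing seismic moments from the Pareto distribution implied by the Gutenberg-Richter law; the parameter values below are illustrative, not the Groningen calibration:

```python
import numpy as np

def synthetic_catalog_moments(n_events, m0_min, b_value, rng):
    """Draw seismic moments from the Pareto distribution implied by the
    Gutenberg-Richter law (moment exponent beta = 2*b/3), via inverse-CDF
    sampling. No upper bound or tapering is applied in this sketch.
    """
    beta = 2.0 * b_value / 3.0
    u = rng.uniform(size=n_events)
    return m0_min * u ** (-1.0 / beta)

rng = np.random.default_rng(11)
moments = synthetic_catalog_moments(1000, 1e12, b_value=1.0, rng=rng)
total_moment = moments.sum()  # one sample of the summed seismic moment
```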

  16. Information distribution in distributed microprocessor based flight control systems

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1977-01-01

    This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.

  17. Multisite-multivariable sensitivity analysis of distributed watershed models: enhancing the perceptions from computationally frugal methods

    USDA-ARS?s Scientific Manuscript database

    This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...

  18. M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU

    NASA Astrophysics Data System (ADS)

    Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.

    2018-04-01

    Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
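    The assumed functional form, a log-normal in semi-major axis (i.e. a Gaussian in ln a), can be written as a short sketch; the amplitude and shape parameters below are placeholders, not the posterior estimates of the fit:

```python
import numpy as np

def lognormal_surface_density(a, amp, mu, sigma):
    """Planet surface density as a function of semi-major axis a (AU):
    a Gaussian in ln(a), peaking at a = exp(mu). Parameter values are
    placeholders, not the fitted posterior estimates."""
    return amp * np.exp(-(np.log(a) - mu) ** 2 / (2.0 * sigma ** 2))
```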

  19. Dispersion of response times reveals cognitive dynamics.

    PubMed

    Holden, John G; Van Orden, Guy C; Turvey, Michael T

    2009-04-01

    Trial-to-trial variation in word-pronunciation times exhibits 1/f scaling. One explanation is that human performances are consequent on multiplicative interactions among interdependent processes: interaction dominant dynamics. This article describes simulated distributions of pronunciation times in a further test for multiplicative interactions and interdependence. Individual participant distributions of approximately 1,100 word-pronunciation times were successfully mimicked for each participant in combinations of lognormal and power-law behavior. Successful hazard function simulations generalized these results to establish interaction dominant dynamics, in contrast with component dominant dynamics, as a likely mechanism for cognitive activity. (c) 2009 APA, all rights reserved

  20. Dispersion of Response Times Reveals Cognitive Dynamics

    PubMed Central

    Holden, John G.; Van Orden, Guy C.; Turvey, Michael T.

    2013-01-01

    Trial to trial variation in word pronunciation times exhibits 1/f scaling. One explanation is that human performances are consequent on multiplicative interactions among interdependent processes – interaction dominant dynamics. This article describes simulated distributions of pronunciation times in a further test for multiplicative interactions and interdependence. Individual participant distributions of ≈1100 word pronunciation times are successfully mimicked for each participant in combinations of lognormal and power law behavior. Successful hazard function simulations generalize these results to establish interaction dominant dynamics, in contrast with component dominant dynamics, as a likely mechanism for cognitive activity. PMID:19348544
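    A mixture of a lognormal body and a power-law tail, of the kind used to mimic these pronunciation-time distributions, can be simulated as follows; the mixture weight and parameters are illustrative, not the fitted per-participant values:

```python
import numpy as np

def lognormal_powerlaw_rts(n, w, mu, sigma, x_min, alpha, rng):
    """Simulated pronunciation times: with probability w a sample comes
    from a power-law tail (inverse-CDF sampled above x_min), otherwise
    from a lognormal body. All parameter values are illustrative.
    """
    is_tail = rng.uniform(size=n) < w
    body = rng.lognormal(mu, sigma, size=n)
    tail = x_min * rng.uniform(size=n) ** (-1.0 / alpha)
    return np.where(is_tail, tail, body)
```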

  1. Statistical properties and correlation functions for drift waves

    NASA Technical Reports Server (NTRS)

    Horton, W.

    1986-01-01

    The dissipative one-field drift wave equation is solved using the pseudospectral method to generate steady-state fluctuations. The fluctuations are analyzed in terms of space-time correlation functions and modal probability distributions. Nearly Gaussian statistics and exponential decay of the two-time correlation functions occur in the presence of electron dissipation, while in the absence of electron dissipation long-lived vortical structures occur. Formulas from renormalized, Markovianized statistical turbulence theory are given in a local approximation to interpret the dissipative turbulence.

  2. Coherent distributions for the rigid rotator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigorescu, Marius

    2016-06-15

    Coherent solutions of the classical Liouville equation for the rigid rotator are presented as positive phase-space distributions localized on the Lagrangian submanifolds of Hamilton-Jacobi theory. These solutions become Wigner-type quasiprobability distributions by a formal discretization of the left-invariant vector fields from their Fourier transform in angular momentum. The results are consistent with the usual quantization of the anisotropic rotator, but the expected value of the Hamiltonian contains a finite “zero point” energy term. It is shown that during the time when a quasiprobability distribution evolves according to the Liouville equation, the related quantum wave function should satisfy the time-dependent Schrödinger equation.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marekova, Elisaveta

    Series of relatively large earthquakes in different regions of the Earth are studied. The regions chosen have high seismic activity and good contemporary networks for recording seismic events. The main purpose of this investigation is to attempt to describe the seismic process analytically in space and time. We consider the statistical distributions of the distances and times between consecutive earthquakes (so-called pair analysis). Studies conducted on approximating the statistical distributions of the parameters of consecutive seismic events indicate the existence of characteristic functions that describe them best. Such a mathematical description allows the distributions of the examined parameters to be compared to other model distributions.

  4. Evolution of association between renal and liver functions while awaiting heart transplant: An application using a bivariate multiphase nonlinear mixed effects model.

    PubMed

    Rajeswaran, Jeevanantham; Blackstone, Eugene H; Barnard, John

    2018-07-01

    In many longitudinal follow-up studies, we observe more than one longitudinal outcome. Impaired renal and liver functions are indicators of poor clinical outcomes for patients who are on mechanical circulatory support and awaiting heart transplant. Hence, monitoring organ functions while waiting for heart transplant is an integral part of patient management. Longitudinal measurements of bilirubin can be used as a marker for liver function and glomerular filtration rate for renal function. We derive an approximation to evolution of association between these two organ functions using a bivariate nonlinear mixed effects model for continuous longitudinal measurements, where the two submodels are linked by a common distribution of time-dependent latent variables and a common distribution of measurement errors.

  5. Characteristics of Ion Distribution Functions in Dipolarizing Flux Bundles: THEMIS Event Studies

    NASA Astrophysics Data System (ADS)

    Runov, A.; Artemyev, A.; Birn, J.; Pritchett, P. L.; Zhou, X.

    2016-12-01

    Taking advantage of multi-point observations from the repeating configuration of the Time History of Events and Macroscale Interactions during Substorms (THEMIS) fleet, with probe separations of 1 to 2 Earth radii (RE) along X, Y, and Z in the geocentric solar magnetospheric (GSM) system, we study ion distribution functions observed by the probes during three transient dipolarization events. Comparing observations by the multiple probes, we characterize changes in the ion distribution functions with respect to geocentric distance (X), cross-tail probe separation (Y), and levels of |Bx|, which characterize the distance from the neutral sheet. We examined 2-D and 1-D cuts of the 3-D velocity distribution functions in the (Vb, Vb×v) plane. The results indicate that the velocity distribution functions observed inside dipolarizing flux bundles (DFBs) close to the magnetic equator are often perpendicularly anisotropic for velocities Vth ≤ v ≤ 2Vth, where Vth is the ion thermal velocity. Ions of higher energies (v > 2Vth) are isotropic. Hence, the interaction of DFBs with ambient ions may result in perpendicular anisotropy of the injected energetic ions, which is an important factor for the excitation of plasma waves and instabilities and for further particle acceleration in the inner magnetosphere. We also compare the observations with the results of test-particle and particle-in-cell (PIC) simulations.

  6. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems that are placed on test at time zero, function for a period, and fail at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of the various stress variables the item is subjected to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
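
    The smallest extreme-value law mentioned above is available in SciPy as `gumbel_l`, so a minimal maximum-likelihood fit can be sketched as below. This is a simplification of the paper's setting (one failure mode, no censoring, no stress covariates), and the location 5.0 and scale 0.5 are made-up values:

```python
import numpy as np
from scipy import stats

# Simulated log-lifetimes from a smallest extreme-value (left Gumbel)
# distribution; the true location 5.0 and scale 0.5 are illustrative.
sample = stats.gumbel_l.rvs(loc=5.0, scale=0.5, size=2000,
                            random_state=np.random.default_rng(0))

# fit() computes maximum likelihood estimates numerically
loc_hat, scale_hat = stats.gumbel_l.fit(sample)
print(loc_hat, scale_hat)  # estimates should be near 5.0 and 0.5
```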

  7. Constraints on large-x parton distributions from new weak boson production and deep-inelastic scattering data

    DOE PAGES

    Accardi, A.; Brady, L. T.; Melnitchouk, W.; ...

    2016-06-20

    A new set of leading twist parton distribution functions, referred to as "CJ15", is presented, which takes advantage of developments in the theoretical treatment of nuclear corrections as well as new data. The analysis includes for the first time data on the free neutron structure function from Jefferson Lab, and new high-precision charged-lepton and W-boson asymmetry data from Fermilab, which significantly reduce the uncertainty on the d/u ratio at large values of x.

  8. Measuring Skew in Average Surface Roughness as a Function of Surface Preparation

    NASA Technical Reports Server (NTRS)

    Stahl, Mark

    2015-01-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo white light interferometer at regular intervals during the polishing process. Each data set was fit to a normal and Largest Extreme Value (LEV) distribution; then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.
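
    A sketch of the distribution-fitting step described above (normal versus largest extreme value, with a goodness-of-fit check) might look like the following; the 81 simulated measurements and the Gumbel parent are assumptions for illustration, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Stand-in for 81 roughness measurements at one polishing interval; a
# largest extreme-value (right Gumbel) parent is assumed for illustration.
roughness = stats.gumbel_r.rvs(loc=2.0, scale=0.3, size=81, random_state=rng)

print("sample skew:", stats.skew(roughness))

# Fit both candidate families and test goodness of fit (Kolmogorov-Smirnov)
for dist in (stats.norm, stats.gumbel_r):
    params = dist.fit(roughness)
    ks = stats.kstest(roughness, dist.cdf, args=params)
    print(dist.name, "p-value:", ks.pvalue)
```

    Note that a KS test using parameters fitted from the same sample is optimistic; the study's exact goodness-of-fit procedure is not specified here.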

  9. Cardiac Monitor

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Under contract to Johnson Space Center, the University of Minnesota developed the concept of impedance cardiography as an alternative to thermodilution for assessing astronaut heart function in flight. NASA then contracted Space Labs, Inc. to construct miniature space units based on this technology. Several companies then launched their own impedance cardiography products, including Renaissance Technologies, which manufactures the IQ System. The IQ System is 5 to 17 times cheaper than thermodilution, and features the signal processing technology called TFD (Time Frequency Distribution). TFD provides a three-dimensional distribution of the blood circulation force signals, allowing visualization of changes in power, frequency and time.

  10. Fast-ion distributions from third harmonic ICRF heating studied with neutron emission spectroscopy

    NASA Astrophysics Data System (ADS)

    Hellesen, C.; Gatu Johnson, M.; Andersson Sundén, E.; Conroy, S.; Ericsson, G.; Eriksson, J.; Sjöstrand, H.; Weiszflog, M.; Johnson, T.; Gorini, G.; Nocente, M.; Tardocchi, M.; Kiptily, V. G.; Pinches, S. D.; Sharapov, S. E.; EFDA Contributors, JET

    2013-11-01

    The fast-ion distribution from third harmonic ion cyclotron resonance frequency (ICRF) heating on the Joint European Torus is studied using neutron emission spectroscopy with the time-of-flight spectrometer TOFOR. The energy dependence of the fast deuteron distribution function is inferred from the measured spectrum of neutrons born in DD fusion reactions, and the inferred distribution is compared with theoretical models for ICRF heating. Good agreement between modelling and measurements is seen, with clear features in the fast-ion distribution function that are due to the finite Larmor radius of the resonating ions being replicated. Strong synergistic effects between ICRF and neutral beam injection heating were also seen. The total energy content of the fast-ion population derived from TOFOR data was in good agreement with magnetic measurements for values below 350 kJ.

  11. Magnetopause modeling - Flux transfer events and magnetosheath quasi-trapped distributions

    NASA Technical Reports Server (NTRS)

    Speiser, T. W.; Williams, D. J.

    1982-01-01

    Three-dimensional distribution functions for energetic ions are studied numerically in the magnetosphere, through the magnetopause, and in the magnetosheath using a simple one-dimensional quasi-static model and ISEE 1 magnetopause crossing data for November 10, 1977. Quasi-trapped populations in the magnetosheath observed near flux transfer events (FTEs) are investigated, and it is shown that the population in the sheath appears to sandwich the FTE distributions. These quasi-trapped distributions are due to slow, large-pitch-angle, outward-moving particles left behind by the outward rush of the more field-aligned ions at the time the flux was opened. It is found that sheath convective flows can map along the connected flux tube without drastically changing the distribution function, and the results suggest that localized tangential fields above the upper limit may exist.

  12. Probability Density Functions of Observed Rainfall in Montana

    NASA Technical Reports Server (NTRS)

    Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.

    1995-01-01

    The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Image analysis on large samples of radar echoes is possible because of advances in technology. The data provided by such an analysis easily allow development of radar reflectivity factor (and, by extension, rain rate) distributions. Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, one PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89 percent of the observed distributions. Further analysis indicates that the Type 1 curve does approximate the shape of the distributions but quantitatively does not produce a great fit.
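
    The Pearson Type I family coincides with the (shifted and scaled) beta distribution, so the method-of-moments step can be sketched as follows; the Beta(2, 5) sample standing in for rain rates rescaled to [0, 1] is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in for a rain-rate distribution rescaled to its observed range [0, 1];
# a Beta(2, 5) parent is assumed purely for illustration.
x = rng.beta(2.0, 5.0, size=5000)

# Method-of-moments estimates of the Pearson Type I (beta) shape parameters:
# solve mean = a/(a+b) and var = ab/((a+b)^2 (a+b+1)) for a and b.
m, v = x.mean(), x.var()
common = m * (1.0 - m) / v - 1.0
a_hat = m * common
b_hat = (1.0 - m) * common
print(a_hat, b_hat)  # should be near the true shapes 2 and 5
```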

  13. Detection of weak signals in memory thermal baths.

    PubMed

    Jiménez-Aquino, J I; Velasco, R M; Romero-Bastida, M

    2014-11-01

    The nonlinear relaxation time and the statistics of the first passage time distribution in connection with the quasideterministic approach are used to detect weak signals in the decay process of the unstable state of a Brownian particle embedded in memory thermal baths. The study is performed in the overdamped approximation of a generalized Langevin equation characterized by an exponential decay in the friction memory kernel. A detection criterion for each time scale is studied: The first one is referred to as the receiver output, which is given as a function of the nonlinear relaxation time, and the second one is related to the statistics of the first passage time distribution.

  14. Study of transionospheric signal scintillation: Quasi-particle approach

    NASA Astrophysics Data System (ADS)

    Lyle, Ruthie D.

    1998-07-01

    A quasi-particle approach is applied to study amplitude scintillation of transionospheric signals caused by Bottomside Sinusoidal (BSS) irregularities. The quasi-particle method exploits wave-particle duality, viewing the wave as a distribution of quasi-particles. This is accomplished by transforming the autocorrelation of the wave function into a Wigner distribution function, which serves as a distribution of quasi-particles in the (r, k) phase space. The quasi-particle distribution at any instant of time represents the instantaneous state of the wave. Scattering of the signal by the ionospheric irregularities is equivalent to the evolution of the quasi-particle distribution, due to the collision of the quasi-particles with objects arising from the presence of the BSS irregularities. Subsequently, the perturbed quasi-particle distribution facilitates the computation of average space-time propagation properties of the wave. Thus, the scintillation index S4 is determined. Incorporation of essential BSS features in the analysis is accomplished by analytically modeling the power spectrum of the BSS irregularities measured in situ by the low-orbiting Atmosphere Explorer-E (AE-E) satellite. The effect of BSS irregularities on transionospheric signals has been studied. The numerical results agree well with multi-satellite scintillation observations made at Huancayo, Peru, in close time correspondence with BSS irregularities observed by the AE-E satellite over a few nights (December 8-11, 1979). During this period, the severity of the scintillation varied from moderate to intense, S4 = 0.1-0.8.

  15. Bayesian functional integral method for inferring continuous data from discrete measurements.

    PubMed

    Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul

    2012-02-08

    Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". Inferring the value of an inaccessible continuous variable from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points and, in a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models.

  16. A simple scaling law for the equation of state and the radial distribution functions calculated by density-functional theory molecular dynamics

    NASA Astrophysics Data System (ADS)

    Danel, J.-F.; Kazandjian, L.

    2018-06-01

    It is shown that the equation of state (EOS) and the radial distribution functions obtained by density-functional theory molecular dynamics (DFT-MD) obey a simple scaling law. At given temperature, the thermodynamic properties and the radial distribution functions given by a DFT-MD simulation remain unchanged if the mole fractions of nuclei of given charge and the average volume per atom remain unchanged. A practical interest of this scaling law is to obtain an EOS table for a fluid from that already obtained for another fluid if it has the right characteristics. Another practical interest of this result is that an asymmetric mixture made up of light and heavy atoms requiring very different time steps can be replaced by a mixture of atoms of equal mass, which facilitates the exploration of the configuration space in a DFT-MD simulation. The scaling law is illustrated by numerical results.

  17. Statistical characteristics of storm interevent time, depth, and duration for eastern New Mexico, Oklahoma, and Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.; Cleveland, Theodore G.; Fang, Xing; Thompson, David B.

    2006-01-01

    The design of small runoff-control structures, from simple floodwater-detention basins to sophisticated best-management practices, requires the statistical characterization of rainfall as a basis for cost-effective, risk-mitigated, hydrologic engineering design. The U.S. Geological Survey, in cooperation with the Texas Department of Transportation, has developed a framework to estimate storm statistics including storm interevent times, distributions of storm depths, and distributions of storm durations for eastern New Mexico, Oklahoma, and Texas. The analysis is based on hourly rainfall recorded by the National Weather Service. The database contains more than 155 million hourly values from 774 stations in the study area. Seven sets of maps depicting ranges of mean storm interevent time, mean storm depth, and mean storm duration, by county, as well as tables listing each of those statistics, by county, were developed. The mean storm interevent time is used in probabilistic models to assess the frequency distribution of storms. The Poisson distribution is suggested to model the distribution of storm occurrence, and the exponential distribution is suggested to model the distribution of storm interevent times. The four-parameter kappa distribution is judged as an appropriate distribution for modeling the distribution of both storm depth and storm duration. Preference for the kappa distribution is based on interpretation of L-moment diagrams. Parameter estimates for the kappa distributions are provided. Separate dimensionless frequency curves for storm depth and duration are defined for eastern New Mexico, Oklahoma, and Texas. Dimension is restored by multiplying curve ordinates by the mean storm depth or mean storm duration to produce quantile functions of storm depth and duration. Minimum interevent time and location have slight influence on the scale and shape of the dimensionless frequency curves. Ten example problems and solutions to possible applications are provided.
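
    The Poisson/exponential pairing described above can be sketched numerically; the 96-hour mean interevent time is a made-up value, not one of the report's county estimates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Simulated storm interevent times (hours); the 96 h mean is illustrative.
mean_wait = 96.0
waits = rng.exponential(mean_wait, size=3000)
print(waits.mean())  # near 96 (for an exponential, mean = standard deviation)

# Under the exponential-wait model, storm counts in a fixed window are
# Poisson; expected number of storms in a 30-day (720 h) window:
rate = 1.0 / mean_wait
print(stats.poisson.mean(mu=rate * 720.0))  # 7.5 storms
```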

  18. On the synchrotron emission in kinetic simulations of runaway electrons in magnetic confinement fusion plasmas

    NASA Astrophysics Data System (ADS)

    Carbajal, L.; del-Castillo-Negrete, D.

    2017-12-01

    Developing avoidance or mitigation strategies for runaway electrons (REs) in magnetic confinement fusion (MCF) plasmas is of crucial importance for the safe operation of ITER. In order to develop these strategies, an accurate diagnostic capability that allows good estimates of the RE distribution function in these plasmas is needed. Synchrotron radiation (SR) of REs in MCF, besides being one of the main damping mechanisms for REs in the high-energy relativistic regime, is routinely used in current MCF experiments to infer the parameters of RE energy and pitch angle distribution functions. In the present paper we address the long-standing question of the relationship between different RE distribution functions and their corresponding synchrotron emission, simultaneously including full-orbit effects, information on the spectral and angular distribution of the SR of each electron, and basic geometric optics of a camera. We study the spatial distribution of the SR on the poloidal plane, and the statistical properties of the expected value of the synchrotron spectra of REs. We observe a strong dependence of the synchrotron emission measured by the camera on the pitch angle distribution of runaways: crescent shapes of the spatial distribution of the SR as measured by the camera relate to RE distributions with small pitch angles, while ellipse shapes relate to distributions of runaways with larger pitch angles. A weak dependence of the synchrotron emission measured by the camera on the RE energy, the value of the q-profile at the edge, and the chosen range of wavelengths is observed. Furthermore, we find that oversimplifying the angular dependence of the SR changes the shape of the synchrotron spectra, and overestimates its amplitude by approximately 20 times for avalanching runaways and by approximately 60 times for mono-energetic distributions of runaways.

  19. Solution of a modified fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Langlands, T. A. M.

    2006-07-01

    Recently, a modified fractional diffusion equation has been proposed [I.M. Sokolov, J. Klafter, From diffusion to anomalous diffusion: a century after Einstein's Brownian motion, Chaos 15 (2005) 026103; A.V. Chechkin, R. Gorenflo, I.M. Sokolov, V.Yu. Gonchar, Distributed order time fractional diffusion equation, Frac. Calc. Appl. Anal. 6 (3) (2003) 259-279; I.M. Sokolov, A.V. Chechkin, J. Klafter, Distributed-order fractional kinetics, Acta Phys. Pol. B 35 (2004) 1323] for describing processes that become less anomalous as time progresses by the inclusion of a second fractional time derivative acting on the diffusion term. In this letter we give the solution of the modified equation on an infinite domain. In contrast to the solution of the traditional fractional diffusion equation, the solution of the modified equation requires an infinite series of Fox functions instead of a single Fox function.

  20. Associated relaxation time and the correlation function for a tumor cell growth system subjected to color noises

    NASA Astrophysics Data System (ADS)

    Wang, Can-Jun; Wei, Qun; Mei, Dong-Cheng

    2008-03-01

    The associated relaxation time T and the normalized correlation function C(s) for a tumor cell growth system subjected to color noises are investigated. Using the Novikov theorem and the Fox approach, the steady-state probability distribution is obtained. Based on it, the expressions for T and C(s) are derived by means of the projection operator method, in which the effects of the memory kernels of the correlation function are taken into account. Performing the numerical computations, it is found that: (1) as the cross-correlation intensity |λ|, the additive noise intensity α and the multiplicative noise self-correlation time τ increase, the tumor cell numbers can be restrained, while the cross-correlation time τ and the multiplicative noise intensity D can induce an increase in the tumor cell numbers; the additive noise self-correlation time τ, however, does not affect the tumor cell numbers; the relaxation time T exhibits a stochastic resonance phenomenon, and the distribution curves exhibit a single-maximum structure as D increases. (2) The cross-correlation strength λ weakens the related activity between two states of the tumor cell numbers at different times, and enhances the stability of the tumor cell growth system in the steady state; in contrast, τ and τ enhance the related activity between two states at different times; however, τ has no effect on the related activity between two states at different times.

  1. One-loop gravitational wave spectrum in de Sitter spacetime

    NASA Astrophysics Data System (ADS)

    Fröb, Markus B.; Roura, Albert; Verdaguer, Enric

    2012-08-01

    The two-point function for tensor metric perturbations around de Sitter spacetime including one-loop corrections from massless conformally coupled scalar fields is calculated exactly. We work in the Poincaré patch (with spatially flat sections) and employ dimensional regularization for the renormalization process. Unlike previous studies we obtain the result for arbitrary time separations rather than just equal times. Moreover, in contrast to existing results for tensor perturbations, ours is manifestly invariant with respect to the subgroup of de Sitter isometries corresponding to a simultaneous time translation and rescaling of the spatial coordinates. Selecting the right initial state for the interacting theory via an appropriate iε prescription is crucial for this. Finally, we show that although the two-point function is a well-defined spacetime distribution, the equal-time limit of its spatial Fourier transform is divergent. Therefore, contrary to the well-defined distribution for arbitrary time separations, the power spectrum is, strictly speaking, ill-defined when loop corrections are included.

  2. Degradation data analysis based on a generalized Wiener process subject to measurement error

    NASA Astrophysics Data System (ADS)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is accomplished to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive a reasonable result and an enhanced inference precision.
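
    For the simplest special case of the above (linear drift, no measurement error, no unit-to-unit variation), the first hitting time of a Wiener degradation path is inverse Gaussian distributed, which can be checked by simulation; all parameter values below are made up:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Degradation path X(t) = mu*t + sigma*B(t); failure = first time X(t) >= omega.
mu, sigma, omega, dt = 0.5, 0.2, 3.0, 0.005
n_paths, n_steps = 1000, 4000  # horizon of 20 time units

steps = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.cumsum(steps, axis=1)
hit = (paths >= omega).argmax(axis=1) * dt  # first grid time at/above omega

# Theory: FHT ~ inverse Gaussian with mean omega/mu and shape omega^2/sigma^2
shape = omega**2 / sigma**2
print(hit.mean())  # near omega/mu = 6.0
print(stats.invgauss.mean(mu=(omega / mu) / shape, scale=shape))  # = omega/mu
```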

  3. Long memory behavior of returns after intraday financial jumps

    NASA Astrophysics Data System (ADS)

    Behfar, Stefan Kambiz

    2016-11-01

    In this paper, the characterization of intraday financial jumps and the time dynamics of returns after jumps are investigated; it is shown analytically and empirically that intraday jumps are power-law distributed with exponent 1 < μ < 2, and that returns after jumps show long-memory behavior. In the theory of finance, it is important to be able to distinguish between jumps and continuous sample-path price movements, and this can be achieved by introducing a statistical test via calculating sums of products of returns over a small period of time. In the case of a jump, the null hypothesis of the normality test is rejected; this is based on the idea that returns are composed of a mixture of normally distributed and power-law distributed data (∼ 1/r^(1+μ)). The probability of rejection of the null hypothesis is a function of μ, and equals one for 1 < μ < 2 for large intraday sample size M. To test this idea empirically, we downloaded S&P500 index data for the periods 1997-1998 and 2014-2015, and showed that the complementary cumulative distribution function of jump returns is power-law distributed with exponent 1 < μ < 2. There are far more jumps in 1997-1998 than in 2015-2016, and the power-law exponent in 2015-2016 is greater than that in 1997-1998. Assuming that i.i.d. returns generally follow a Poisson distribution, if the jump is a causal factor, high returns after jumps are the effect; we show that returns caused by jumps decay as a power-law distribution. To test this idea empirically, we average over the time dynamics of all days; the superposed time dynamics after a jump follow a power law, which indicates that there is long memory with a power-law distribution of returns after a jump.
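
    A tail exponent of this kind can be estimated from the largest observations; the sketch below uses a simulated Pareto sample with μ = 1.5 (an illustrative value, not a fitted one) and the standard Hill estimator:

```python
import numpy as np

rng = np.random.default_rng(4)
# Simulated jump-return magnitudes with CCDF ~ r**(-mu) for r >= 1
mu = 1.5
r = rng.pareto(mu, size=20000) + 1.0

# Hill estimator of the tail exponent from the k largest observations
k = 2000
tail = np.sort(r)[-k:]                        # the k largest values, ascending
hill = 1.0 / np.mean(np.log(tail / tail[0]))  # tail[0] acts as the threshold
print(hill)  # should be close to mu = 1.5
```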

  4. Statistical modeling of storm-level Kp occurrences

    USGS Publications Warehouse

    Remick, K.J.; Love, J.J.

    2006-01-01

    We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events occur statistically with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding such Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days respectively.
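
    The decluster-then-test logic above can be sketched on synthetic data; the storm rate, cluster spacing, and the 0.5-day declustering window are all made-up values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Storm onsets form a Poisson process (mean wait 42 days); each storm
# contributes a short cluster of threshold exceedances.
onsets = np.cumsum(rng.exponential(42.0, size=400))
exceed = np.sort((onsets[:, None] + np.array([0.0, 0.05, 0.1])).ravel())

# Raw waits are clumped by same-storm exceedances, so not exponential
raw_waits = np.diff(exceed)

# Decluster: merge exceedances within 0.5 days of the previous one
keep = np.insert(raw_waits > 0.5, 0, True)
storm_waits = np.diff(exceed[keep])

# Kolmogorov-Smirnov test against an exponential with the sample mean
for w in (raw_waits, storm_waits):
    print(stats.kstest(w / w.mean(), "expon").pvalue)
```

    The raw waits fail the exponential test badly, while the declustered storm-to-storm waits are consistent with a Poisson arrival process.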

  5. Plasma Indicator Dispersion in Arteries of the Human Leg

    PubMed Central

    Bassingthwaighte, James B.

    2010-01-01

    Indicator-dilution curves were recorded from the femoral and dorsalis pedis arteries of five normal men after injections of indocyanine green into the superior vena cava or thoracic aorta. By considering the femoral curves as inputs to a mathematically linear system and the dorsalis pedis curves as outputs, transfer functions (the distribution of transit times) for the arterial segment between these sites were obtained in terms of a four-parameter model, the lagged normal density curve, over a sixfold range of flow rates. The parameters of the spread (dispersion) of 57 transfer functions were proportional to the mean transit time. The mean difference between transit time and appearance time was 0.30 t̄; the square root of the variances was 0.18 t̄. These linear relationships suggest that flow rate has no significant influence on dispersion and that, since no transition from laminar to turbulent flow was apparent, arterial flow characteristics were not significantly changed over a wide range of flow rates. The secondary implication is that the rate of spatial longitudinal spreading of indicator with distance traveled is primarily a function of the geometry of the arterial system, not of the rate of flow, and, therefore, that the spatial distribution at any instant is a function of this rate and of the distance traveled through the system. PMID:5330717

  6. Using hazard functions to assess changes in processing capacity in an attentional cuing paradigm.

    PubMed

    Wenger, Michael J; Gibson, Bradley S

    2004-08-01

    Processing capacity--defined as the relative ability to perform mental work in a unit of time--is a critical construct in cognitive psychology and is central to theories of visual attention. The unambiguous use of the construct, experimentally and theoretically, has been hindered by both conceptual confusions and the use of measures that are at best only coarsely mapped to the construct. However, more than 25 years ago, J. T. Townsend and F. G. Ashby (1978) suggested that the hazard function on the response time (RT) distribution offered a number of conceptual advantages as a measure of capacity. The present study suggests that a set of statistical techniques, well-known outside the cognitive and perceptual literatures, offers the ability to perform hypothesis tests on RT-distribution hazard functions. These techniques are introduced, and their use is illustrated in application to data from the contingent attentional capture paradigm.
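
    The hazard function h(t) = f(t)/(1 − F(t)) can be estimated directly from an RT sample; the shifted-exponential response times below are simulated so that the true hazard (0 before 0.3 s, then a constant 5 per second) is known, which is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(6)
# Simulated response times (seconds): a shifted-exponential toy model, so the
# true hazard is 0 before t = 0.3 s and constant (rate 5 per second) after it.
rt = 0.3 + rng.exponential(0.2, size=50000)

edges = np.linspace(0.3, 1.0, 36)  # 35 bins of width 0.02 s
counts, _ = np.histogram(rt, bins=edges)
surv = 1.0 - np.cumsum(counts) / len(rt)                # S(t) at right edges
at_risk = len(rt) * np.concatenate(([1.0], surv[:-1]))  # survivors entering bin
hazard = counts / (at_risk * np.diff(edges))            # f/(1-F), per bin
print(hazard[:5])  # roughly flat, near the true rate of 5 per second
```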

  7. Diffusion of innovations dynamics, biological growth and catenary function

    NASA Astrophysics Data System (ADS)

    Guseo, Renato

    2016-12-01

    The catenary function has a well-known role in determining the shape of chains and cables supported at their ends under the force of gravity. This enables design using a specific static equilibrium over space. Its reflected version, the catenary arch, allows the construction of bridges and arches exploiting the dual equilibrium property under uniform compression. In this paper, we emphasize a further connection with well-known aggregate biological growth models over time and the related diffusion of innovation key paradigms (e.g., logistic and Bass distributions over time) that determine self-sustaining evolutionary growth dynamics in naturalistic and socio-economic contexts. Moreover, we prove that the 'local entropy function', related to a logistic distribution, is a catenary and vice versa. This special invariance may be explained, at a deeper level, through the Verlinde's conjecture on the origin of gravity as an effect of the entropic force.

  8. Bessel functions in mass action modeling of memories and remembrances

    NASA Astrophysics Data System (ADS)

    Freeman, Walter J.; Capolupo, Antonio; Kozma, Robert; Olivares del Campo, Andrés; Vitiello, Giuseppe

    2015-10-01

    Data from experimental observations of a class of neurological processes (Freeman K-sets) present functional distributions reproducing Bessel-function behavior. We model such processes with pairs of damped/amplified oscillators, which provide a time-dependent representation of the Bessel equation. The root loci of poles and zeros conform to solutions of the K-sets. Some light is shed on the problem of filling the gap between cellular-level dynamics and brain functional activity. The breakdown of time-reversal symmetry is related to the thermodynamic features of the cortex. This provides a possible mechanism for deducing the lifetime of recorded memories.

  9. The source of electrostatic fluctuations in the solar-wind

    NASA Technical Reports Server (NTRS)

    Lemons, D. S.; Asbridge, J. R.; Bame, S. J.; Feldman, W. C.; Gary, S. P.; Gosling, J. T.

    1979-01-01

    Solar wind electron and ion distribution functions measured simultaneously with or close to times of intense electrostatic fluctuations are subjected to a linear Vlasov stability analysis. Although all distributions tested were found to be stable, the analysis suggests that the ion beam instability is the most likely source of the fluctuations.

  10. Nonparametric analysis of bivariate gap time with competing risks.

    PubMed

    Huang, Chiung-Yu; Wang, Chenguang; Wang, Mei-Cheng

    2016-09-01

    This article considers nonparametric methods for studying recurrent disease and death with competing risks. We first point out that comparisons based on the well-known cumulative incidence function can be confounded by different prevalence rates of the competing events, and that comparisons of the conditional distribution of the survival time given the failure event type are more relevant for investigating the prognosis of different patterns of recurrent disease. We then propose nonparametric estimators for the conditional cumulative incidence function as well as the conditional bivariate cumulative incidence function for the bivariate gap times, that is, the time to disease recurrence and the residual lifetime after recurrence. To quantify the association between the two gap times in the competing risks setting, a modified Kendall's tau statistic is proposed. The proposed estimators for the conditional bivariate cumulative incidence distribution and the association measure account for the induced dependent censoring for the second gap time. Uniform consistency and weak convergence of the proposed estimators are established. Hypothesis testing procedures for two-sample comparisons are discussed. Numerical simulation studies with practical sample sizes are conducted to evaluate the performance of the proposed nonparametric estimators and tests. An application to data from a pancreatic cancer study is presented to illustrate the methods developed in this article. © 2016, The International Biometric Society.

  11. Frequency-Range Distribution of Boulders Around Cone Crater: Relevance to Landing Site Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Clegg-Watkins, R. N.; Jolliff, B. L.; Lawrence, S. J.

    2016-01-01

    Boulders represent a landing hazard that must be addressed in the planning of future landings on the Moon. A boulder under a landing leg can contribute to deck tilt, and boulders can damage spacecraft during landing. Using orbital data to characterize boulder populations at locations where landers have safely touched down (Apollo, Luna, Surveyor, and Chang'e-3 sites) is important for determining landing hazard criteria for future missions. Additionally, assessing the distribution of boulders can address broader science issues, e.g., how far craters distribute boulders and how this distribution varies as a function of crater size and age. The availability of new Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images [1] enables the use of boulder size- and range-frequency distributions for a variety of purposes [2-6]. Boulders degrade over time and primarily occur around young or fresh craters that are large enough to excavate bedrock. Here we use NAC images to analyze boulder distributions around Cone crater (340 m diameter) at the Apollo 14 site. Cone crater (CC) was selected because it is the largest crater where astronaut surface photography is available for a radial traverse to the rim. Cone crater is young (approximately 29 Ma [7]) relative to the time required to break down boulders [3,8], giving us a data point for boulder range-frequency distributions (BRFDs) as a function of crater age.

  12. Ecological variation in South American geophagine cichlids arose during an early burst of adaptive morphological and functional evolution

    PubMed Central

    Arbour, Jessica Hilary; López-Fernández, Hernán

    2013-01-01

    Diversity and disparity are unequally distributed both phylogenetically and geographically. This uneven distribution may be owing to differences in diversification rates between clades resulting from processes such as adaptive radiation. We examined the rate and distribution of evolution in feeding biomechanics in the extremely diverse and continentally distributed South American geophagine cichlids. Evolutionary patterns in multivariate functional morphospace were examined using a phylomorphospace approach, disparity-through-time analyses and by comparing Brownian motion (BM) and adaptive peak evolutionary models using maximum likelihood. The most species-rich and functionally disparate clade (CAS) expanded more efficiently in morphospace and evolved more rapidly compared with both BM expectations and its sister clade (GGD). Members of the CAS clade also exhibited an early burst in functional evolution that corresponds to the development of modern ecological roles and may have been related to the colonization of a novel adaptive peak characterized by fast oral jaw mechanics. Furthermore, reduced ecological opportunity following this early burst may have restricted functional evolution in the GGD clade, which is less species-rich and more ecologically specialized. Patterns of evolution in ecologically important functional traits are consistent with a pattern of adaptive radiation within the most diverse clade of Geophagini. PMID:23740780

  13. Parallel-distributed mobile robot simulator

    NASA Astrophysics Data System (ADS)

    Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo

    1996-06-01

    The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel-distributed movable robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function that we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the movable robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world, and then improves the virtual environment to match the real-world result more closely. In this way the system learns and grows. It is very important that such a simulation be time-realistic. The parallel-distributed movable robot simulator was developed to simulate the space of a movable robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual movable robot and the virtual movable robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.

  14. From Maximum Entropy Models to Non-Stationarity and Irreversibility

    NASA Astrophysics Data System (ADS)

    Cofre, Rodrigo; Cessac, Bruno; Maldonado, Cesar

    The maximum entropy distribution can be obtained from a variational principle. This is important as a matter of principle and for the purpose of finding approximate solutions. One can exploit this fact to obtain relevant information about the underlying stochastic process. We report here on recent progress in three aspects of this approach. (1) Biological systems are expected to show some degree of irreversibility in time. Based on the transfer-matrix technique for finding the spatio-temporal maximum entropy distribution, we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. (2) The maximum entropy solution is characterized by a functional called the Gibbs free energy (the solution of the variational principle). The Legendre transform of this functional is the rate function, which controls the speed of convergence of empirical averages to their ergodic mean. We show how the correct description of this functional is decisive for a more rigorous characterization of first- and higher-order phase transitions. (3) We assess the impact of a weak time-dependent external stimulus on the collective statistics of spiking neuronal networks, and show how to evaluate this impact on any higher-order spatio-temporal correlation. RC supported by ERC Advanced Grant "Bridges"; BC: KEOPS ANR-CONICYT and Renvision; CM: CONICYT-FONDECYT No. 3140572.

  15. Time-dependent density functional theory (TD-DFT) coupled with reference interaction site model self-consistent field explicitly including spatial electron density distribution (RISM-SCF-SEDD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yokogawa, D., E-mail: d.yokogawa@chem.nagoya-u.ac.jp; Institute of Transformative Bio-Molecules

    2016-09-07

    Theoretical approaches to designing bright bio-imaging molecules are progressing rapidly. However, to our knowledge, the number of theoretical studies remains limited because of system size and the required computational accuracy. To overcome these difficulties, we developed a new method based on the reference interaction site model self-consistent field explicitly including spatial electron density distribution (RISM-SCF-SEDD) coupled with time-dependent density functional theory. We applied it to calculations of indole and 5-cyanoindole in the ground and excited states, in the gas and solution phases. The changes in the optimized geometries were clearly explained with resonance structures, and the Stokes shift was correctly reproduced.

  16. Temperature evolution of the local order parameter in relaxor ferroelectrics (1 - x)PMN-xPZT

    NASA Astrophysics Data System (ADS)

    Gridnev, S. A.; Glazunov, A. A.; Tsotsorin, A. N.

    2005-09-01

    The temperature dependences of the local order parameter and the relaxation time distribution function have been determined in (1 - x)PMN-xPZT ceramic samples via dielectric permittivity measurements. Above the Burns temperature the permittivity was found to follow the Curie-Weiss law, and the deviation from it was observed to increase with decreasing temperature. A local order parameter was calculated from the dielectric data using a modified Landau-Devonshire approach, and the results are compared to the distribution function of relaxation times. It was found that a glasslike freezing of reorientable polar clusters occurs in the temperature range of the diffuse relaxor transition. The evolution of the studied system toward a more ordered state arises from the increased PZT content.

  17. SuperGaussian distribution functions in inhomogeneous plasmas

    NASA Astrophysics Data System (ADS)

    Matte, Jean-Pierre

    2008-11-01

    In plasmas heated by a narrow laser beam, the shape of the distribution function is influenced by both the absorption, which tends to give a superGaussian (DLM) distribution function [1], and the effects of heat flow, which tend to make the distribution more Maxwellian when the hot region is considerably wider than the laser beam [2]. Thus, it is only at early times that the deformation is as strong as predicted by our uniform-intensity formula [1]. A large number of electron kinetic simulations of a finite-width laser beam heating a uniform-density plasma were performed with the electron kinetic code FPI [1] to study the competition between these two mechanisms. In some cases, the deformation is approximately given by this formula if we average the laser intensity over the entire plasma. This may explain why distributions were more Maxwellian than expected in some experiments [3]. [1] J.-P. Matte et al., Plasma Phys. Contr. Fusion 30, 1665 (1988) [2] S. Brunner and E. Valeo, Phys. Plasmas 9, 923 (2002) [3] S.H. Glenzer et al., Phys. Rev. Lett. 82, 97 (1999).

  18. Ion and electron Kappa distribution functions in the plasma sheet.

    NASA Astrophysics Data System (ADS)

    Moya, P. S.; Stepanova, M. V.; Espinoza, C.; Antonova, E. E.; Valdivia, J. A.

    2017-12-01

    We present a study of ion and electron flux spectra in the Earth's plasma sheet using kappa distribution functions. Satellite data from the THEMIS mission were collected for thousands of crossings through the plasma sheet, between 7 and 35 Re and during the years 2008-2009. The events were separated according to the geomagnetic activity at the time. Our results show the distribution of the kappa index and characteristic energies across the plasma sheet and its evolution with distance to Earth for quiet times and for the substorm expansion and recovery phases. For the ions, it is observed that the kappa values tend to decrease outwards and that this effect is more significant in the dusk sector, where the smallest values are found for distances beyond 15 Re. The main effect of the substorms appears as an enhancement of this behavior. The electrons show a much more homogeneous distribution in quiet times, with a mild tendency toward larger kappa values at larger distances. During substorms, the kappa values tend to equalize and appear very homogeneous during expansion. However, they exhibit a significant increase in the dusk sector during the substorm recovery phase. Finally, we observe that the characteristic energy of the particles during substorms increases and concentrates at distances of less than 15 Re.
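For reference, the functional form behind such fits can be sketched as follows. This is a generic 1D kappa velocity distribution with illustrative choices (thermal speed theta = 1, numeric normalization on a grid), not the THEMIS fitting pipeline: small kappa gives enhanced suprathermal tails, while kappa -> infinity recovers a Maxwellian.

```python
import numpy as np

v = np.linspace(-10.0, 10.0, 4001)   # velocity grid in units of theta
dv = v[1] - v[0]

def kappa_pdf(v, kappa):
    """1D kappa velocity distribution (theta = 1), normalized numerically
    on the grid to avoid quoting the Gamma-function prefactor."""
    f = (1.0 + v**2 / kappa) ** (-(kappa + 1.0))
    return f / np.sum(f * dv)

maxwell = np.exp(-v**2)
maxwell /= np.sum(maxwell * dv)

f3 = kappa_pdf(v, 3.0)      # small kappa: enhanced suprathermal tails
f500 = kappa_pdf(v, 500.0)  # large kappa: close to the Maxwellian limit
```

Taking logarithms shows why: -(kappa + 1) ln(1 + v^2/kappa) -> -v^2 as kappa grows, which is the Maxwellian exponent.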

  19. Functionalized gold nanoparticles: a detailed in vivo multimodal microscopic brain distribution study

    NASA Astrophysics Data System (ADS)

    Sousa, Fernanda; Mandal, Subhra; Garrovo, Chiara; Astolfo, Alberto; Bonifacio, Alois; Latawiec, Diane; Menk, Ralf Hendrik; Arfelli, Fulvia; Huewel, Sabine; Legname, Giuseppe; Galla, Hans-Joachim; Krol, Silke

    2010-12-01

    In the present study, the in vivo distribution of polyelectrolyte multilayer coated gold nanoparticles is shown, starting from the living animal down to cellular level. The coating was designed with functional moieties to serve as a potential nano drug for prion disease. With near infrared time-domain imaging we followed the biodistribution in mice up to 7 days after intravenous injection of the nanoparticles. The peak concentration in the head of mice was detected between 19 and 24 h. The precise particle distribution in the brain was studied ex vivo by X-ray microtomography, confocal laser and fluorescence microscopy. We found that the particles mainly accumulate in the hippocampus, thalamus, hypothalamus, and the cerebral cortex.

  20. A surface renewal model for unsteady-state mass transfer using the generalized Danckwerts age distribution function.

    PubMed

    Horvath, Isabelle R; Chatterjee, Siddharth G

    2018-05-01

    The recently derived steady-state generalized Danckwerts age distribution is extended to unsteady-state conditions. For three wind speeds used by researchers studying air-water heat exchange in the Heidelberg Aeolotron, calculations reveal that the distribution has a sharp peak during the initial moments, but flattens out and acquires a bell-shaped character with process time, with the time taken to attain a steady-state profile being a strong and inverse function of wind speed. With increasing wind speed, the age distribution narrows significantly, its skewness decreases and its peak becomes larger. The mean eddy renewal time increases linearly with process time initially but approaches a final steady-state value asymptotically, which decreases dramatically with increased wind speed. Using the distribution to analyse the transient absorption of a gas into a large body of liquid, assuming negligible gas-side mass-transfer resistance, estimates are made of the gas-absorption and dissolved-gas transfer coefficients for oxygen absorption in water at 25°C for the three different wind speeds. Under unsteady-state conditions, these two coefficients show an inverse behaviour, indicating a heightened accumulation of dissolved gas in the surface elements, especially during the initial moments of absorption. However, the two mass-transfer coefficients start merging together as the steady state is approached. Theoretical predictions of the steady-state mass-transfer coefficient or transfer velocity are in fair agreement (average absolute error of prediction = 18.1%) with some experimental measurements of the same for the nitrous oxide-water system at 20°C that were made in the Heidelberg Aeolotron.
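For orientation, the classical Danckwerts surface-renewal results that the generalized age distribution extends can be stated compactly (s is the constant surface-renewal rate, D the diffusivity):

```latex
\phi(t) = s\,e^{-st}, \qquad
\bar{t} = \int_{0}^{\infty} t\,\phi(t)\,dt = \frac{1}{s}, \qquad
k_{L} = \sqrt{D\,s},
```

i.e., an exponential age distribution, a mean eddy renewal time of 1/s, and a steady-state mass-transfer coefficient k_L; the cited work replaces phi(t) by a generalized, time-dependent distribution.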

  1. Maximum likelihood estimates, from censored data, for mixed-Weibull distributions

    NASA Astrophysics Data System (ADS)

    Jiang, Siyuan; Kececioglu, Dimitri

    1992-06-01

    A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) via the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of mixed-Weibull distributions have multiple local maxima; the algorithm should therefore be started from several initial guesses of the parameter set.
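A minimal sketch of this approach, under simplifying assumptions (two components, complete non-censored data, a hypothetical simulated sample; the cited work additionally handles censored and postmortem data): the E-step computes posterior responsibilities, the M-step performs a weighted Weibull MLE numerically, and the log-likelihood trace is non-decreasing, as the abstract notes.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

# Hypothetical complete failure-time sample from a 2-subpopulation mixture.
x = np.concatenate([weibull_min.rvs(0.8, scale=1.0, size=300, random_state=rng),
                    weibull_min.rvs(3.0, scale=5.0, size=300, random_state=rng)])

def em_mixed_weibull(x, n_iter=30):
    """EM for a two-component Weibull mixture (shape k, scale lam, weight pi)."""
    params = [(1.0, np.median(x) / 2), (1.0, np.median(x) * 2)]  # crude init
    pi = 0.5
    ll_trace = []
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each observation
        logp1 = np.log(pi) + weibull_min.logpdf(x, params[0][0], scale=params[0][1])
        logp2 = np.log(1 - pi) + weibull_min.logpdf(x, params[1][0], scale=params[1][1])
        m = np.maximum(logp1, logp2)
        logmix = m + np.log(np.exp(logp1 - m) + np.exp(logp2 - m))
        ll_trace.append(float(logmix.sum()))
        r1 = np.exp(logp1 - logmix)
        # M-step: weighted Weibull MLE per component, started at current params
        pi = float(r1.mean())
        new_params = []
        for w, (k, lam) in zip((r1, 1 - r1), params):
            def nll(theta, w=w):
                kk, ss = np.exp(theta)   # log-parametrization keeps k, lam > 0
                return -float((w * weibull_min.logpdf(x, kk, scale=ss)).sum())
            res = minimize(nll, np.log([k, lam]), method="Nelder-Mead")
            new_params.append(tuple(np.exp(res.x)))
        params = new_params
    return pi, params, ll_trace

pi, params, ll = em_mixed_weibull(x)
```

Because Nelder-Mead's initial simplex contains the current parameters, each M-step cannot make the weighted likelihood worse, so the generalized-EM monotonicity property holds, mirroring the "cannot decrease" observation in the abstract.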

  2. Limits of the memory coefficient in measuring correlated bursts

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Hiraoka, Takayuki

    2018-03-01

    Temporal inhomogeneities in event sequences of natural and social phenomena have been characterized in terms of interevent times and correlations between interevent times. The inhomogeneities of interevent times have been extensively studied, while the correlations between interevent times, often called correlated bursts, are far from being fully understood. For measuring correlated bursts, two relevant approaches have been suggested, i.e., the memory coefficient and the burst size distribution. Here a burst size denotes the number of events in a bursty train detected for a given time window. Empirical analyses have revealed that a larger memory coefficient tends to be associated with a heavier tail of the burst size distribution. In particular, empirical findings in human activities appear inconsistent, in that the memory coefficient is close to 0 while burst size distributions follow a power law. To make sense of these observations, we assume conditional independence between consecutive interevent times and derive the analytical form of the memory coefficient as a function of the parameters describing the interevent time and burst size distributions. Our analytical result can explain the general tendency of a larger memory coefficient being associated with a heavier tail of the burst size distribution. We also find that the apparently inconsistent observations in human activities are compatible with each other, indicating that the memory coefficient has limits as a measure of correlated bursts.
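In its usual definition (due to Goh and Barabási), the memory coefficient is the Pearson correlation between consecutive interevent times. A small sketch with synthetic data illustrates the abstract's point that M ≈ 0 is compatible with a heavy-tailed interevent-time distribution; the Pareto and moving-sum constructions here are illustrative, not the paper's models.

```python
import numpy as np

def memory_coefficient(tau):
    """Memory coefficient M: Pearson correlation between consecutive
    interevent times tau_i and tau_{i+1}."""
    a, b = tau[:-1], tau[1:]
    return float(((a - a.mean()) * (b - b.mean())).mean() / (a.std() * b.std()))

rng = np.random.default_rng(1)
# i.i.d. interevent times: consecutive times are independent, so M ~ 0
# even though the marginal distribution is heavy-tailed (Pareto here).
tau_iid = rng.pareto(2.5, size=200_000) + 1.0
# Positively correlated interevent times built as a moving sum of i.i.d.
# exponentials: cov = Var(z) = 1, var = 2, so M -> 0.5 for large samples.
z = rng.exponential(size=200_001)
tau_corr = z[:-1] + z[1:]
```

`memory_coefficient(tau_iid)` is close to 0 while `memory_coefficient(tau_corr)` is close to 0.5, showing that the tail of the interevent-time distribution and the memory coefficient are separate properties.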

  3. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    NASA Astrophysics Data System (ADS)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

    This article considers that the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. In our studies, we also assume that the backorder rate is dependent on the length of lead time through the amount of shortages and let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions, and then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution free procedure to solve the problem. Furthermore, we develop an algorithm procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are also given to illustrate the results.

  4. Middle-high latitude N2O distributions related to the arctic vortex breakup

    NASA Astrophysics Data System (ADS)

    Zhou, L. B.; Zou, H.; Gao, Y. Q.

    2006-03-01

    The relationship of N2O distributions with the Arctic vortex breakup is analyzed for the first time using a probability distribution function (PDF) analysis. The N2O concentration shows different distributions between the early and late vortex breakup years. In the early breakup years, the N2O concentration shows low values and large dispersions after the vortex breakup, which is related to the inhomogeneity of the vertical advection in the middle- and high-latitude lower stratosphere; the horizontal diffusion coefficient (Kyy) shows a correspondingly larger value. In the late breakup years, the N2O concentration shows higher values and more uniform distributions than in the early years after the vortex breakup, with a smaller vertical advection and Kyy after the vortex breakup. It is found that the N2O distributions are largely affected by the Arctic vortex breakup time, but the dynamically defined vortex breakup time is not the only factor.

  5. Dynamic Singularity Spectrum Distribution of Sea Clutter

    NASA Astrophysics Data System (ADS)

    Xiong, Gang; Yu, Wenxian; Zhang, Shuning

    2015-12-01

    Fractal and multifractal theory have provided new approaches for radar signal processing and target detection against an ocean background. However, related research has mainly focused on the fractal dimension or the multifractal spectrum (MFS) of sea clutter. In this paper, a new dynamic singularity analysis method for sea clutter using the MFS distribution is developed, based on moving detrending analysis (DMA-MFSD). Theoretically, we introduce time information by using the cyclic autocorrelation of sea clutter. For the transient correlation series, the instantaneous singularity spectrum is calculated with the multifractal detrending moving analysis (MF-DMA) algorithm, and the dynamic singularity spectrum distribution of sea clutter is acquired. In addition, we analyze the time-varying singularity exponent ranges and the maximum-position function in the DMA-MFSD of sea clutter. For real sea clutter data recorded in sea state III, we analyze the dynamic singularity spectrum distribution and conclude that radar sea clutter is non-stationary with time-varying scaling characteristics, exhibiting a time-varying singularity spectrum distribution under the proposed DMA-MFSD method. The DMA-MFSD method will also provide a reference for nonlinear dynamics and multifractal signal processing.

  6. Recurrence and interoccurrence behavior of self-organized complex phenomena

    NASA Astrophysics Data System (ADS)

    Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.

    2007-08-01

    The sandpile, forest-fire and slider-block models are said to exhibit self-organized criticality. Associated natural phenomena include landslides, wildfires, and earthquakes. In all cases the frequency-size distributions are well approximated by power laws (fractals). Another important aspect of both the models and the natural phenomena is the statistics of interval times. These statistics are particularly important for earthquakes. For earthquakes it is important to make a distinction between interoccurrence and recurrence times: interoccurrence times are the interval times between earthquakes on all faults in a region, whereas recurrence times are the interval times between earthquakes on a single fault or fault segment. In many, but not all, cases interoccurrence time statistics are exponential (Poissonian) and the events occur randomly. However, the distribution of recurrence times is often Weibull to a good approximation. In this paper we study the interval statistics of slip events using a slider-block model. The behavior of this model is sensitive to the stiffness α of the system, α = kC/kL, where kC is the spring constant of the connector springs and kL is the spring constant of the loader-plate springs. For a soft system (small α) there are no system-wide events and the interoccurrence time statistics of the larger events are Poissonian. For a stiff system (large α), system-wide events dominate the energy dissipation and the statistics of the recurrence times between these system-wide events satisfy the Weibull distribution to a good approximation. We argue that this applicability of the Weibull distribution is due to the power-law (scale-invariant) behavior of the hazard function, i.e., the probability that the next event will occur at a time t0 after the last event has a power-law dependence on t0. The Weibull distribution is the only distribution that has a scale-invariant hazard function. We further show that the onset of system-wide events is a well-defined critical point. We find that the number of system-wide events NSWE satisfies the scaling relation NSWE ∝ (α - αC)^δ, where αC is the critical value of the stiffness. The system-wide events represent a new phase for the slider-block system.
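The scale-invariance argument can be stated compactly: for a Weibull distribution with shape β and scale η,

```latex
h(t) \;=\; \frac{f(t)}{1 - F(t)}
     \;=\; \frac{(\beta/\eta)\,(t/\eta)^{\beta-1}\exp\!\left[-(t/\eta)^{\beta}\right]}
                {\exp\!\left[-(t/\eta)^{\beta}\right]}
     \;=\; \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1},
```

a pure power law in t, so rescaling t only rescales h(t) without changing its functional form.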

  7. A cross-correlation-based estimate of the galaxy luminosity function

    NASA Astrophysics Data System (ADS)

    van Daalen, Marcel P.; White, Martin

    2018-06-01

    We extend existing methods for using cross-correlations to derive redshift distributions for photometric galaxies, without using photometric redshifts. The model presented in this paper simultaneously yields highly accurate and unbiased redshift distributions and, for the first time, redshift-dependent luminosity functions, using only clustering information and the apparent magnitudes of the galaxies as input. In contrast to many existing techniques for recovering unbiased redshift distributions, the output of our method is not degenerate with the galaxy bias b(z), which is achieved by modelling the shape of the luminosity bias. We successfully apply our method to a mock galaxy survey and discuss improvements to be made before applying our model to real data.

  8. A kinetic study of solar wind electrons in the transition region from collision dominated to collisionless flow

    NASA Technical Reports Server (NTRS)

    Lie-Svendsen, O.; Leer, E.

    1995-01-01

    We have studied the evolution of the velocity distribution function of a test population of electrons in the solar corona and inner solar wind region, using a recently developed kinetic model. The model solves the time dependent, linear transport equation, with a Fokker-Planck collision operator to describe Coulomb collisions between the 'test population' and a thermal background of charged particles, using a finite differencing scheme. The model provides information on how non-Maxwellian features develop in the distribution function in the transition region from collision dominated to collisionless flow. By taking moments of the distribution the evolution of higher order moments, such as the heat flow, can be studied.

  9. Structure and Dynamics of Hydroxyl-Functionalized Protic Ammonium Carboxylate Ionic Liquids.

    PubMed

    Thummuru, Dhileep Nagi Reddy; Mallik, Bhabani S

    2017-10-26

    We performed classical molecular dynamics simulations to investigate the structure and dynamics of the protic ionic liquids 2-hydroxyethylammonium acetate, ethylammonium hydroxyacetate, and 2-hydroxyethylammonium hydroxyacetate at ambient conditions. Structural properties such as density, radial distribution functions, spatial distribution functions, and structure factors have been calculated. Dynamic properties such as mean square displacements, as well as residence and hydrogen-bond dynamics, have also been calculated. Hydrogen-bond lifetimes and residence times change with the addition of hydroxyl groups. We observe that when a hydroxyl group is present on the cation, the dynamics become much slower and the cation forms a strong hydrogen bond with the carboxylate oxygen atoms of the anion. The hydroxyl-functionalized ILs show more dynamic diversity than structurally similar ILs.

  10. ON CONTINUOUS-REVIEW (S-1,S) INVENTORY POLICIES WITH STATE-DEPENDENT LEADTIMES,

    DTIC Science & Technology

    (*INVENTORY CONTROL, *REPLACEMENT THEORY), MATHEMATICAL MODELS, LEAD TIME, MANAGEMENT ENGINEERING, DISTRIBUTION FUNCTIONS, PROBABILITY, QUEUEING THEORY, COSTS, OPTIMIZATION, STATISTICAL PROCESSES, DIFFERENCE EQUATIONS

  11. A method for computing ion energy distributions for multifrequency capacitive discharges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Alan C. F.; Lieberman, M. A.; Verboncoeur, J. P.

    2007-03-01

    The ion energy distribution (IED) at a surface is an important parameter for processing in multiple radio frequency driven capacitive discharges. An analytical model is developed for the IED in a low pressure discharge based on a linear transfer function that relates the time-varying sheath voltage to the time-varying ion energy response at the surface. This model is in good agreement with particle-in-cell simulations over a wide range of single, dual, and triple frequency driven capacitive discharge excitations.

  12. Intercommunications in Real Time, Redundant, Distributed Computer System

    NASA Technical Reports Server (NTRS)

    Zanger, H.

    1980-01-01

    An investigation into the applicability of fiber-optic communication techniques to real-time avionic control systems, in particular the total automatic flight control system used for VSTOL aircraft, is presented. The system consists of spatially distributed microprocessors. The overall control function is partitioned to yield a unidirectional data flow between the processing elements (PEs). System reliability is enhanced by the use of triple redundancy. Some general overall system specifications are listed here to provide the necessary background for the requirements of the communications system.

  13. Global exponential stability of positive periodic solution of the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays.

    PubMed

    Zhao, Kaihong

    2018-12-01

    In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of a positive periodic solution is proved by employing the fixed point theorem on cones. By constructing an appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.

  14. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  15. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  16. Asymptotic theory of time varying networks with burstiness and heterogeneous activation patterns

    NASA Astrophysics Data System (ADS)

    Burioni, Raffaella; Ubaldi, Enrico; Vezzani, Alessandro

    2017-05-01

    The recent availability of large-scale, time-resolved and high-quality digital datasets has allowed for a deeper understanding of the structure and properties of many real-world networks. The empirical evidence of a temporal dimension prompted the switch of paradigm from a static representation of networks to a time-varying one. In this work we briefly review the framework of time-varying networks in real-world social systems, focusing especially on the activity-driven paradigm. We develop a framework that encodes three generative mechanisms that seem to play a central role in the evolution of social networks: an individual's propensity to engage in social interactions, its strategy for allocating these interactions among its alters, and the burstiness of interactions amongst social actors. The functional forms and probability distributions encoding these mechanisms are typically data driven. A natural question is whether different classes of strategies and burstiness distributions, with different local-scale behavior but analogous asymptotics, can lead to the same long-time, large-scale structure of the evolving networks. We consider the problem in full generality, by investigating and solving the system dynamics in the asymptotic limit for general classes of tie-allocation mechanisms and waiting-time probability distributions. We show that the asymptotic network evolution is driven by a few characteristics of these functional forms, which can be extracted from direct measurements on large datasets.
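As a toy illustration of the activity-driven paradigm with bursty dynamics, the sketch below draws heavy-tailed (Pareto) inter-event times and uses a simple reinforcement rule for tie allocation. It is a minimal sketch under assumed functional forms, not the authors' model; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto_waiting_time(alpha=1.5, tmin=1.0, size=None):
    # Power-law (bursty) inter-event times: P(t) ~ t^{-(alpha+1)}, t >= tmin
    return tmin * (1.0 - rng.random(size)) ** (-1.0 / alpha)

def simulate(n_nodes=200, t_max=500.0, p_repeat=0.8):
    """Each node activates after a bursty waiting time; when active it
    reinforces an old tie with probability p_repeat, else explores."""
    next_event = pareto_waiting_time(size=n_nodes)     # first activation times
    contacts = [set() for _ in range(n_nodes)]
    n_events = 0
    while True:
        i = int(np.argmin(next_event))                 # next node to activate
        t = next_event[i]
        if t > t_max:
            break
        if contacts[i] and rng.random() < p_repeat:
            j = int(rng.choice(list(contacts[i])))     # reinforce an old tie
        else:
            j = int(rng.integers(n_nodes))             # explore a new alter
        if j != i:
            contacts[i].add(j)
            contacts[j].add(i)
        next_event[i] = t + pareto_waiting_time()      # schedule next activation
        n_events += 1
    return contacts, n_events

contacts, n_events = simulate()
degrees = np.array([len(c) for c in contacts])
print(n_events, degrees.mean())
```

Varying `p_repeat` and the Pareto exponent is the kind of experiment the asymptotic theory addresses: different local rules with analogous tails can produce the same large-scale degree structure.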

  17. Single-mode fiber systems for deep space communication network

    NASA Technical Reports Server (NTRS)

    Lutes, G.

    1982-01-01

    The present investigation is concerned with the development of single-mode optical fiber distribution systems. It is pointed out that single-mode fibers represent potentially a superior medium for the distribution of frequency and timing reference signals and wideband (400 MHz) IF signals. In this connection, single-mode fibers have the potential to improve the capability and precision of NASA's Deep Space Network (DSN). Attention is given to problems related to precise time synchronization throughout the DSN, questions regarding the selection of a transmission medium, and the function of the distribution systems, taking into account specific improvements possible by an employment of single-mode fibers.

  18. Absorption and distribution of orally administered jojoba wax in mice.

    PubMed

    Yaron, A; Samoiloff, V; Benzioni, A

    1982-03-01

    The liquid wax obtained from the seeds of the arid-land shrub jojoba (Simmondsia chinensis) is finding increasing use in skin treatment preparations. The fate of this wax upon reaching the digestive tract was studied. 14C-Labeled wax was administered intragastrically to mice, and the distribution of the label in the body was determined as a function of time. Most of the wax was excreted, but a small amount was absorbed, as was indicated by the distribution of label in the internal organs and the epididymal fat. The label was incorporated into the body lipids and was found to diminish with time.

  19. A renewal jump-diffusion process with threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Li, Bo; Wu, Rong; Song, Min

    2009-06-01

    In this paper, we consider a jump-diffusion risk process with the threshold dividend strategy. The distributions of both the inter-arrival times and the claims are assumed to be in the class of phase-type distributions. The expected discounted dividend function and the Laplace transform of the ruin time are discussed. Motivated by Asmussen [S. Asmussen, Stationary distributions for fluid flow models with or without Brownian noise, Stochastic Models 11 (1) (1995) 21-49], instead of studying the original process we study the constructed fluid flow process, and closed-form formulas are obtained in terms of matrix expressions. Finally, numerical results are provided to illustrate the computation.
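Phase-type distributions, assumed above for the inter-arrival times and claims, can be sampled by simulating the underlying absorbing Markov chain. The sketch below does this for an Erlang(2) example, a simple member of the class; the representation (alpha, S) and all parameter values are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_phase_type(alpha, S, size=1000):
    """Sample a phase-type distribution by simulating its absorbing chain.
    alpha: initial probabilities over transient states; S: sub-generator."""
    n = len(alpha)
    exit_rates = -S.sum(axis=1)            # absorption rate from each state
    samples = np.empty(size)
    for k in range(size):
        state = rng.choice(n, p=alpha)
        t = 0.0
        while state is not None:
            rate = -S[state, state]
            t += rng.exponential(1.0 / rate)
            # probabilities of jumping to another transient state vs absorbing
            probs = np.append(
                np.where(np.arange(n) == state, 0.0, S[state]) / rate,
                exit_rates[state] / rate)
            nxt = rng.choice(n + 1, p=probs)
            state = None if nxt == n else nxt
        samples[k] = t
    return samples

rate = 2.0
alpha = np.array([1.0, 0.0])
S = np.array([[-rate, rate],
              [0.0, -rate]])              # Erlang(2, rate): mean = 2/rate
x = sample_phase_type(alpha, S, size=5000)
print(x.mean())                            # should be close to 2/rate = 1.0
```

The same sampler works for any (alpha, S) pair, which is what makes the phase-type class convenient for the fluid-flow construction.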

  20. Measuring skew in average surface roughness as a function of surface preparation

    NASA Astrophysics Data System (ADS)

    Stahl, Mark T.

    2015-08-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized the statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo® white light interferometer at regular intervals during the polishing process. Each data set was fit to normal and Largest Extreme Value (LEV) distributions and then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.
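The fitting-and-testing step described above can be sketched with scipy.stats, where the Largest Extreme Value distribution is available as `gumbel_r`. This is an illustrative sketch, not the study's actual analysis; the data below are synthetic stand-ins for the 81 interferometer measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic roughness sample standing in for 81 interferometer measurements
roughness = stats.gumbel_r.rvs(loc=2.0, scale=0.3, size=81, random_state=rng)

# Fit both candidate distributions
mu, sigma = stats.norm.fit(roughness)
loc, scale = stats.gumbel_r.fit(roughness)

# Kolmogorov-Smirnov goodness-of-fit against each fitted distribution
ks_norm = stats.kstest(roughness, 'norm', args=(mu, sigma))
ks_lev = stats.kstest(roughness, 'gumbel_r', args=(loc, scale))

# Sample skewness, the quantity the study tracks versus polishing time
print(stats.skew(roughness), ks_norm.pvalue, ks_lev.pvalue)
```

Repeating this at each polishing interval gives the skew-versus-time trend the abstract describes.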

  1. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude [Livermore, CA; Folta, James Allen [Livermore, CA; Tan, Swie-In [San Jose, CA; Reiss, Ira [New City, NY

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  2. Portraits of Principal Practice: Time Allocation and School Principal Work

    ERIC Educational Resources Information Center

    Sebastian, James; Camburn, Eric M.; Spillane, James P.

    2018-01-01

    Purpose: The purpose of this study was to examine how school principals in urban settings distributed their time working on critical school functions. We also examined who principals worked with and how their time allocation patterns varied by school contextual characteristics. Research Method/Approach: The study was conducted in an urban school…

  3. Raindrop intervalometer

    NASA Astrophysics Data System (ADS)

    van de Giesen, Nicolaas; Hut, Rolf; ten Veldhuis, Marie-claire

    2017-04-01

    If one can assume that drop size distributions can be effectively described by a generalized gamma function [1], one can estimate this function on the basis of the distribution of time intervals between drops hitting a certain area. The arrival of a single drop is relatively easy to measure with simple consumer devices such as cameras or piezoelectric elements. Here we present an open-hardware design for the electronics and statistical processing of an intervalometer that measures time intervals between drop arrivals. The specific hardware in this case is a piezoelectric element in an appropriate housing, combined with an instrumentation op-amp and an Arduino processor. Although it would not be too difficult to simply register the arrival times of all drops, it is more practical to only report the main statistics. For this purpose, all intervals below a certain threshold during a reporting interval are summed and counted. We also sum the scaled squares, cubes, and fourth powers of the intervals. On the basis of the first four moments, one can estimate the corresponding generalized gamma function and obtain some sense of the accuracy of the underlying assumptions. Special attention is needed to determine the lower threshold of the drop sizes that can be measured. This minimum size often varies over the area being monitored, such as is the case for piezoelectric elements. We describe a simple method to determine these (distributed) minimal drop sizes and present a bootstrap method to make the necessary corrections. Reference [1] Uijlenhoet, R., and J. N. M. Stricker. "A consistent rainfall parameterization based on the exponential raindrop size distribution." Journal of Hydrology 218, no. 3 (1999): 101-127.
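The moment-accumulation idea above can be sketched as follows. For brevity this sketch fits an ordinary two-parameter gamma by the method of moments from the reported count and sums of powers, rather than the four-moment generalized gamma fit of the paper; the interval statistics are simulated, not measured.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical "true" inter-drop interval statistics (seconds)
true_shape, true_scale = 2.0, 0.05
intervals = rng.gamma(true_shape, true_scale, size=20000)

# On-device accumulation: only a count and running sums of powers are kept,
# mimicking what the Arduino would report instead of raw arrival times
n = len(intervals)
s1 = intervals.sum()
s2 = (intervals ** 2).sum()

# Recover mean and variance, then method-of-moments gamma parameters
mean = s1 / n
var = s2 / n - mean ** 2
shape_hat = mean ** 2 / var
scale_hat = var / mean
print(shape_hat, scale_hat)
```

Extending the sums to third and fourth powers, as the abstract describes, gives the extra moments needed for the generalized gamma.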

  4. Convergence of Asymptotic Systems of Non-autonomous Neural Network Models with Infinite Distributed Delays

    NASA Astrophysics Data System (ADS)

    Oliveira, José J.

    2017-10-01

    In this paper, we investigate the global convergence of solutions of non-autonomous Hopfield neural network models with discrete time-varying delays, infinite distributed delays, and possible unbounded coefficient functions. Instead of using Lyapunov functionals, we explore intrinsic features between the non-autonomous systems and their asymptotic systems to ensure the boundedness and global convergence of the solutions of the studied models. Our results are new and complement known results in the literature. The theoretical analysis is illustrated with some examples and numerical simulations.

  5. Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions

    NASA Astrophysics Data System (ADS)

    Soltani, S. S.; Cvetkovic, V.; Destouni, G.

    2017-12-01

    The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion on the two levels of morphological and macro-dispersion. The study gives new insights into shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as the "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in the KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows a significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in the "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between the "dynamic" and "kinematic pathway" models than zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography, and therefore application and further development of the simple "kinematic pathway" approach is promising for their modeling.

  6. Prediction of the size distributions of methanol-ethanol clusters detected in VUV laser/time-of-flight mass spectrometry.

    PubMed

    Liu, Yi; Consta, Styliani; Shi, Yujun; Lipson, R H; Goddard, William A

    2009-06-25

    The size distributions and geometries of vapor clusters equilibrated with methanol-ethanol (Me-Et) liquid mixtures were recently studied by vacuum ultraviolet (VUV) laser time-of-flight (TOF) mass spectrometry and density functional theory (DFT) calculations (Liu, Y.; Consta, S.; Ogeer, F.; Shi, Y. J.; Lipson, R. H. Can. J. Chem. 2007, 85, 843-852). On the basis of the mass spectra recorded, it was concluded that the formation of neutral tetramers is particularly prominent. Here we develop grand canonical Monte Carlo (GCMC) and molecular dynamics (MD) frameworks to compute cluster size distributions in vapor mixtures that allow a direct comparison with experimental mass spectra. Using the all-atom optimized potential for liquid simulations (OPLS-AA) force field, we systematically examined the neutral cluster size distributions as functions of pressure and temperature. These neutral cluster distributions were then used to derive ionized cluster distributions to compare directly with the experiments. The simulations suggest that supersaturation at 12 to 16 times the equilibrium vapor pressure at 298 K or supercooling at temperature 240 to 260 K at the equilibrium vapor pressure can lead to the relatively abundant tetramer population observed in the experiments. Our simulations capture the most distinct features observed in the experimental TOF mass spectra: Et(3)H(+) at m/z = 139 in the vapor corresponding to 10:90% Me-Et liquid mixture and Me(3)H(+) at m/z = 97 in the vapors corresponding to 50:50% and 90:10% Me-Et liquid mixtures. The hybrid GCMC scheme developed in this work extends the capability of studying the size distributions of neat clusters to mixed species and provides a useful tool for studying environmentally important systems such as atmospheric aerosols.

  7. Stochastic Frontier Model Approach for Measuring Stock Market Efficiency with Different Distributions

    PubMed Central

    Hasan, Md. Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md. Azizul

    2012-01-01

    The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments, thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, that is, the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier, in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for technical inefficiency effects. In the time-varying environment, the value of technical efficiency was high for the investment group and low for the bank group, as compared with the other groups in the DSE market for both distributions, whereas in the time-invariant situation it was high for the investment group but low for the ceramic group. PMID:22629352

  9. A dynamic model of the marriage market-part 1: matching algorithm based on age preference and availability.

    PubMed

    Matthews, A P; Garenne, M L

    2013-09-01

    The matching algorithm in a dynamic marriage market model is described in this first of two companion papers. Iterative Proportional Fitting is used to find a marriage function (an age distribution of new marriages for both sexes), in a stable reference population, that is consistent with the one-sex age distributions of new marriages, and includes age preference. The one-sex age distributions (which are the marginals of the two-sex distribution) are based on the Picrate model, and age preference on a normal distribution, both of which may be adjusted by choice of parameter values. For a population that is perturbed from the reference state, the total number of new marriages is found as the harmonic mean of target totals for men and women obtained by applying reference population marriage rates to the perturbed population. The marriage function uses the age preference function, assumed to be the same for the reference and the perturbed populations, to distribute the total number of new marriages. The marriage function also has an availability factor that varies as the population changes with time, where availability depends on the supply of unmarried men and women. To simplify exposition, only first marriage is treated, and the algorithm is illustrated by application to Zambia. In the second paper, remarriage and dissolution are included.
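The Iterative Proportional Fitting step and the harmonic-mean total can be sketched as below. The age range, the normal age-preference kernel, and the marginals are illustrative placeholders, not the Picrate-based values used in the paper.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, n_iter=500):
    """Scale a positive seed matrix until its row/column sums match targets."""
    m = seed.astype(float).copy()
    for _ in range(n_iter):
        m *= (row_targets / m.sum(axis=1))[:, None]   # match male marginal
        m *= (col_targets / m.sum(axis=0))[None, :]   # match female marginal
    return m

ages = np.arange(15, 50)
# Normal age-preference kernel (men marrying slightly younger women), plus a
# small floor so every pairing stays feasible and IPF converges quickly
pref = np.exp(-0.5 * ((ages[:, None] - ages[None, :] - 3.0) / 4.0) ** 2) + 1e-6

# Illustrative one-sex marginals (totals must agree for exact convergence)
row_t = np.full(len(ages), 10.0)      # new marriages by male age
col_t = np.full(len(ages), 10.0)      # new marriages by female age

marriages = ipf(pref, row_t, col_t)
print(np.allclose(marriages.sum(axis=0), col_t))

# Harmonic mean of male and female target totals, as in the paper, gives the
# total number of new marriages to distribute; values are illustrative
t_m, t_f = 360.0, 340.0
total_marriages = 2.0 / (1.0 / t_m + 1.0 / t_f)
print(total_marriages)
```

The fitted two-sex table then only needs rescaling by the harmonic-mean total and the availability factor to yield the marriage function for a perturbed population.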

  10. Role of the Euclidean signature in lattice calculations of quasidistributions and other nonlocal matrix elements

    NASA Astrophysics Data System (ADS)

    Briceño, Raúl A.; Hansen, Maxwell T.; Monahan, Christopher J.

    2017-07-01

    Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Finally, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.

  11. Multielectron effects in the photoelectron momentum distribution of noble-gas atoms driven by visible-to-infrared-frequency laser pulses: A time-dependent density-functional-theory approach

    NASA Astrophysics Data System (ADS)

    Murakami, Mitsuko; Zhang, G. P.; Chu, Shih-I.

    2017-05-01

    We present the photoelectron momentum distributions (PMDs) of helium, neon, and argon atoms driven by a linearly polarized, visible (527-nm) or near-infrared (800-nm) laser pulse (20 optical cycles in duration) based on the time-dependent density-functional theory (TDDFT) under the local-density approximation with a self-interaction correction. A set of time-dependent Kohn-Sham equations for all electrons in an atom is numerically solved using the generalized pseudospectral method. An effect of the electron-electron interaction driven by a visible laser field is not recognizable in the helium and neon PMDs except for a reduction of the overall photoelectron yield, but there is a clear difference between the PMDs of an argon atom calculated with the frozen-core approximation and TDDFT, indicating an interference of its M-shell wave functions during the ionization. Furthermore, we find that the PMDs of degenerate p states are well separated in intensity when driven by a near-infrared laser field, so that the single-active-electron approximation can be adopted safely.

  12. Role of the Euclidean signature in lattice calculations of quasidistributions and other nonlocal matrix elements

    DOE PAGES

    Briceno, Raul A.; Hansen, Maxwell T.; Monahan, Christopher J.

    2017-07-11

    Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Lastly, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.

  14. Diffusion of active chiral particles

    NASA Astrophysics Data System (ADS)

    Sevilla, Francisco J.

    2016-12-01

    The diffusion of chiral active Brownian particles in three-dimensional space is studied analytically, by consideration of the corresponding Fokker-Planck equation for the probability density of finding a particle at position x and moving along the direction v̂ at time t, and numerically, by the use of Langevin dynamics simulations. The analysis is focused on the marginal probability density of finding a particle at a given location and at a given time (independently of its direction of motion), which is found from an infinite hierarchy of differential-recurrence relations for the coefficients that appear in the multipole expansion of the probability distribution, which contains the whole kinematic information. This approach allows the explicit calculation of the time dependence of the mean-squared displacement and of the kurtosis of the marginal probability distribution, quantities from which the effective diffusion coefficient and the "shape" of the position distribution are examined. Oscillations between two characteristic values were found in the time evolution of the kurtosis, namely, between the value that corresponds to a Gaussian and the one that corresponds to a distribution of spherical-shell shape. In the case of an ensemble of particles, each one rotating around a uniformly distributed random axis, evidence is found of the so-called "anomalous, yet Brownian, diffusion" effect, in which particles follow a non-Gaussian distribution of positions even though the mean-squared displacement is a linear function of time.
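A minimal Langevin sketch of this setup, reduced to two dimensions for brevity (the paper treats 3-D), is shown below: constant-speed particles whose direction of motion both drifts deterministically (chirality) and diffuses. At long times the mean-squared displacement grows linearly, yielding an effective diffusion coefficient. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# 2-D chiral active Brownian particles: speed v0, angular drift omega
# (chirality), rotational diffusion d_r; illustrative values only
n, steps, dt = 2000, 4000, 0.01
v0, omega, d_r = 1.0, 1.0, 0.5

theta = rng.uniform(0, 2 * np.pi, n)      # initial directions of motion
pos = np.zeros((n, 2))
for _ in range(steps):
    pos[:, 0] += v0 * np.cos(theta) * dt
    pos[:, 1] += v0 * np.sin(theta) * dt
    # deterministic rotation (chirality) plus rotational noise
    theta += omega * dt + np.sqrt(2 * d_r * dt) * rng.standard_normal(n)

msd = (pos ** 2).sum(axis=1).mean()       # ensemble mean-squared displacement
t_total = steps * dt
d_eff = msd / (4 * t_total)               # 2-D: MSD ~ 4 * D_eff * t at long times
print(d_eff)
```

Tracking the fourth moment of the positions alongside the MSD would give the kurtosis oscillations the abstract describes.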

  15. Preliminary Study of 2-D Time Domain Electromagnetic (TDEM) Modeling to Analyze Subsurface Resistivity Distribution and its Application to the Geothermal Systems

    NASA Astrophysics Data System (ADS)

    Aji Hapsoro, Cahyo; Purqon, Acep; Srigutomo, Wahyu

    2017-07-01

    2-D time domain electromagnetic (TDEM) modeling has been successfully conducted to illustrate the distribution of the electric field beneath the Earth's surface. The electric field, compared with the magnetic field, is used to analyze resistivity, and resistivity is one of the physical properties most important for determining potential reservoir areas of geothermal systems, a source of renewable energy. In this modeling we used the time domain electromagnetic method because it can solve EM field interaction problems with complex geometry and can analyze transient problems. TDEM methods are used to model the electric and magnetic fields as functions of time, combined with distance and depth. The result of this modeling is the electric field intensity, which is capable of describing the structure of the Earth's subsurface. The result can be applied to describe the Earth's subsurface resistivity values in order to determine potential reservoirs of geothermal systems.

  16. Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression.

    PubMed

    Krzyzanski, Wojciech; Hu, Shuhua; Dunlavey, Michael

    2018-04-01

    The distributed delay model has been introduced that replaces the transit compartments in the classic model of chemotherapy-induced myelosuppression with a convolution integral. The maturation of granulocyte precursors in the bone marrow is described by the gamma probability density function with the shape parameter (ν). If ν is a positive integer, the distributed delay model coincides with the classic model with ν transit compartments. The purpose of this work was to evaluate the performance of the distributed delay model, with particular focus on deterministic model identifiability in the presence of the shape parameter. The classic model served as a reference for comparison. Previously published white blood cell (WBC) count data in rats receiving bolus doses of 5-fluorouracil were fitted by both models. The negative two log-likelihood objective function (-2LL) and running times were used as major markers of performance. Local sensitivity analysis was done to evaluate the impact of ν on the pharmacodynamic response (WBC count). The ν estimate was 1.46 (16.1 CV%), compared to ν = 3 for the classic model. The difference of 6.78 in -2LL between the classic model and the distributed delay model implied that the latter performed significantly better than the former according to the log-likelihood ratio test (P = 0.009), although the overall improvement was modest. The running times were 1 s and 66.2 min, respectively. The long running time of the distributed delay model was attributed to computationally intensive evaluation of the convolution integral. The sensitivity analysis revealed that ν strongly influences the WBC response by controlling cell proliferation and the elimination of WBCs from the circulation. In conclusion, the distributed delay model was deterministically identifiable from typical cytotoxic data. Its performance was modestly better than that of the classic model, with a significantly longer running time.
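The stated equivalence between the gamma-delay model with integer ν and the classic ν-compartment (Erlang) model can be checked numerically: with ν compartments of rate ν/MTT each, the maturation-delay density is the Erlang density, which is exactly the gamma density with integer shape. The values of ν and the mean transit time below are illustrative, not the fitted estimates.

```python
import numpy as np
from math import factorial
from scipy import stats

nu, mtt = 3, 24.0                     # illustrative shape and mean transit time (h)
rate = nu / mtt                       # per-compartment transition rate
t = np.linspace(0.01, 120.0, 500)

# Gamma maturation-delay density with integer shape nu
gamma_pdf = stats.gamma.pdf(t, a=nu, scale=1.0 / rate)

# Erlang density built directly from the transit-compartment picture
erlang_pdf = rate**nu * t**(nu - 1) * np.exp(-rate * t) / factorial(nu - 1)

print(np.allclose(gamma_pdf, erlang_pdf))
```

For non-integer ν the gamma density still exists, which is exactly what lets the distributed delay model interpolate between compartment counts at the cost of evaluating the convolution integral.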

  17. Qualitative numerical studies of the modification of the pitch angle distribution of test particles by Alfvénic wave activity

    NASA Astrophysics Data System (ADS)

    Keilbach, D.; Drews, C.; Berger, L.; Marsch, E.; Wimmer-Schweingruber, R. F.

    2017-12-01

    Using a test particle approach, we have investigated how an oxygen pickup ion torus velocity distribution is modified by continuous and intermittent Alfvénic waves on timescales where the gyro trajectory of each particle can be traced. We have therefore exposed the test particles to monofrequent waves, which extended through the whole simulation in time and space. The general behavior of the pitch angle distribution is found to be stationary and a nonlinear function of the wave frequency, the amplitude, and the initial angle between the wave elongation and the field-perpendicular particle velocity vector. The figure shows the time-averaged pitch angle distributions as a function of the Doppler-shifted wave frequency (where the Doppler shift was calculated with respect to the particles' initial velocity) for three different wave amplitudes (labeled in each panel). The background field is chosen to be 5 nT, and the 500 test particles were initially distributed on a torus with 120° pitch angle at a solar wind velocity of 450 km/s. Each y-slice of the histogram (which has been normalized to its respective maximum) represents an individual run of the simulation. The frequency-dependent behavior of the test particles is found to be classifiable into the regimes of very low/high frequencies and frequencies close to first-order resonance. We have found that only in the latter regime do the particles interact strongly with the wave; there, the time-averaged histograms show a branch structure, which was identified as a trace of particles co-moving with the wave phase. Both the magnitude of the pitch angle change of these particles and the frequency margin in which the branch structure is found increase with the wave amplitude. We have also investigated the interaction with monofrequent intermittent waves.
    Exposed to such waves, a torus distribution is scattered in pitch angle space, and the pitch angle distribution is broadened systematically over time, similar to pitch angle diffusion. The framework of our simulations is a first step toward understanding wave-particle interactions at the most basic level and is readily expandable to, e.g., the inclusion of multiple wave frequencies, intermittent wave activity, gradients in the background magnetic field, or collisions with solar wind particles.
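
Tracing individual gyro trajectories of this kind is commonly done with a Boris-type pusher. The sketch below (our own minimal version, not the authors' code; the charge-to-mass ratio is an assumed round number for O+) advances a particle in prescribed E and B fields; in a static uniform field without a wave, speed and pitch angle are conserved:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def boris_push(v, E, B, qm, dt):
    """One Boris step for velocity v (m/s) in fields E (V/m) and B (T); qm = q/m (C/kg)."""
    vminus = [v[i] + 0.5 * qm * E[i] * dt for i in range(3)]
    t = [0.5 * qm * B[i] * dt for i in range(3)]
    t2 = sum(x * x for x in t)
    s = [2.0 * x / (1.0 + t2) for x in t]
    vprime = [vminus[i] + c for i, c in enumerate(cross(vminus, t))]
    vplus = [vminus[i] + c for i, c in enumerate(cross(vprime, s))]
    return [vplus[i] + 0.5 * qm * E[i] * dt for i in range(3)]

def pitch_angle_deg(v, B):
    """Angle between velocity and magnetic field, in degrees."""
    dot = sum(v[i] * B[i] for i in range(3))
    vn = math.sqrt(sum(x * x for x in v))
    bn = math.sqrt(sum(x * x for x in B))
    return math.degrees(math.acos(dot / (vn * bn)))
```

A wave would enter through time-dependent E and B perturbations added at each step.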

  18. A Hybrid Method for Accelerated Simulation of Coulomb Collisions in a Plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caflisch, R; Wang, C; Dimarco, G

    2007-10-09

    If the collisional time scale for Coulomb collisions is comparable to the characteristic time scales for a plasma, then simulation of Coulomb collisions may be important for computation of kinetic plasma dynamics. This can be a computational bottleneck because of the large number of simulated particles and collisions (or phase-space resolution requirements in continuum algorithms), as well as the wide range of collision rates over the velocity distribution function. This paper considers Monte Carlo simulation of Coulomb collisions using the binary collision models of Takizuka & Abe and Nanbu. It presents a hybrid method for accelerating the computation of Coulomb collisions. The hybrid method represents the velocity distribution function as a combination of a thermal component (a Maxwellian distribution) and a kinetic component (a set of discrete particles). Collisions between particles from the thermal component preserve the Maxwellian; collisions between particles from the kinetic component are performed using the method of Takizuka & Abe or Nanbu. Collisions between the kinetic and thermal components are performed by sampling a particle from the thermal component and selecting a particle from the kinetic component. Particles are also transferred between the two components according to thermalization and dethermalization probabilities, which are functions of phase space.
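
Binary collision kernels of this kind share an invariant that is easy to check: an elastic collision rotates the relative velocity without changing its magnitude, so momentum and kinetic energy are conserved exactly. A hedged sketch for the equal-mass case, with scattering geometry in the style of Takizuka & Abe (our own simplified implementation, not the paper's code):

```python
import math

def scatter_pair(v1, v2, theta, phi):
    """Elastic scattering of an equal-mass pair: rotate the relative velocity g
    by polar angle theta and azimuth phi; the center-of-mass velocity is unchanged."""
    g = [v1[i] - v2[i] for i in range(3)]
    gmag = math.sqrt(sum(x * x for x in g))
    gperp = math.hypot(g[0], g[1])
    st, ct = math.sin(theta), math.cos(theta)
    cp, sp = math.cos(phi), math.sin(phi)
    if gperp < 1e-30 * gmag:  # g along z: any perpendicular basis works
        dg = [gmag * st * cp, gmag * st * sp, -g[2] * (1.0 - ct)]
    else:
        dg = [(g[0] * g[2] / gperp) * st * cp - (gmag * g[1] / gperp) * st * sp - g[0] * (1.0 - ct),
              (g[1] * g[2] / gperp) * st * cp + (gmag * g[0] / gperp) * st * sp - g[1] * (1.0 - ct),
              -gperp * st * cp - g[2] * (1.0 - ct)]
    # equal masses: each partner absorbs half of the change in g
    v1n = [v1[i] + 0.5 * dg[i] for i in range(3)]
    v2n = [v2[i] - 0.5 * dg[i] for i in range(3)]
    return v1n, v2n
```

In a production scheme, theta would be sampled from the Coulomb scattering statistics and phi uniformly; here they are passed in directly so conservation can be checked deterministically.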

  19. Towards a model of pion generalized parton distributions from Dyson-Schwinger equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moutarde, H.

    2015-04-10

    We compute the pion quark Generalized Parton Distribution H^q and Double Distributions F^q and G^q in a coupled Bethe-Salpeter and Dyson-Schwinger approach. We use simple algebraic expressions inspired by the numerical resolution of Dyson-Schwinger and Bethe-Salpeter equations. We explicitly check the support and polynomiality properties, and the behavior under charge conjugation or time invariance, of our model. We derive analytic expressions for the pion Double Distributions and Generalized Parton Distribution at vanishing pion momentum transfer at a low scale. Our model compares very well to experimental pion form factor and parton distribution function data.

  20. Vertical changes in the probability distribution of downward irradiance within the near-surface ocean under sunny conditions

    NASA Astrophysics Data System (ADS)

    Gernez, Pierre; Stramski, Dariusz; Darecki, Miroslaw

    2011-07-01

    Time series measurements of fluctuations in underwater downward irradiance, Ed, within the green spectral band (532 nm) show that the probability distribution of instantaneous irradiance varies greatly as a function of depth within the near-surface ocean under sunny conditions. Because of intense light flashes caused by surface wave focusing, the near-surface probability distributions are highly skewed to the right and are heavy tailed. The coefficients of skewness and excess kurtosis at depths smaller than 1 m can exceed 3 and 20, respectively. We tested several probability models, such as lognormal, Gumbel, Fréchet, log-logistic, and Pareto, which are potentially suited to describe the highly skewed heavy-tailed distributions. We found that the models cannot approximate with consistently good accuracy the high irradiance values within the right tail of the experimental distribution where the probability of these values is less than 10%. This portion of the distribution corresponds approximately to light flashes with Ed > 1.5⟨Ed⟩, where ⟨Ed⟩ is the time-averaged downward irradiance. However, the remaining part of the probability distribution covering all irradiance values smaller than the 90th percentile can be described with a reasonable accuracy (i.e., within 20%) with a lognormal model for all 86 measurements from the top 10 m of the ocean included in this analysis. As the intensity of irradiance fluctuations decreases with depth, the probability distribution tends toward a function symmetrical around the mean like the normal distribution. For the examined data set, the skewness and excess kurtosis assumed values very close to zero at a depth of about 10 m.
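
That a lognormal can reach skewness above 3 and excess kurtosis above 20 follows from its closed-form shape moments; a small sketch (the sigma values below are illustrative, not fitted to the irradiance data):

```python
import math

def lognormal_skewness(sigma):
    """Skewness of a lognormal distribution with log-scale parameter sigma."""
    w = math.exp(sigma ** 2)
    return (w + 2.0) * math.sqrt(w - 1.0)

def lognormal_excess_kurtosis(sigma):
    """Excess kurtosis of a lognormal distribution with log-scale parameter sigma."""
    w = math.exp(sigma ** 2)
    return w ** 4 + 2.0 * w ** 3 + 3.0 * w ** 2 - 6.0
```

A broad log-scale (sigma around 0.8) already exceeds the near-surface values quoted above, while a narrow one gives near-zero shape moments, mirroring the trend toward normality with depth.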

  1. Latitudinal Dependence of the Energy Input into the Mesosphere by High Energy Electrons

    NASA Technical Reports Server (NTRS)

    Wagner, C. U.; Nikutowski, B.; Ranta, H.

    1984-01-01

    Night-time ionospheric absorption measurements make it possible to study the precipitation of high-energy electrons into the mesosphere during and after magnetospheric storms. The uniform Finnish riometer network was used together with measurements from Kühlungsborn and Collm (GDR) to investigate the night-time absorption as a function of latitude (L = 6.5 to 2.5) and storm time for seven storms. The common trends visible in all these events are summarized in a schematic average picture showing the distribution of increased ionospheric absorption as a function of latitude (L value) and storm time.

  2. Directional solidification of a planar interface in the presence of a time-dependent electric current

    NASA Technical Reports Server (NTRS)

    Brush, L. N.; Coriell, S. R.; Mcfadden, G. B.

    1990-01-01

    Directional solidification of pure materials and binary alloys with a planar crystal-melt interface in the presence of a time-dependent electric current is considered. For a variety of time-dependent currents, the temperature fields and the interface velocity as functions of time are presented for indium antimonide and bismuth and for the binary alloys germanium-gallium and tin-bismuth. For the alloys, the solid composition is calculated as a function of position. Quantitative predictions are made of the effect of an electrical pulse on the solute distribution in the solidified material.

  3. Asymptotic Distributions of Coalescence Times and Ancestral Lineage Numbers for Populations with Temporally Varying Size

    PubMed Central

    Chen, Hua; Chen, Kun

    2013-01-01

    The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n − An(t) follows a Poisson distribution, and as m → n, n(n−1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference. PMID:23666939
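
As a concrete constant-size reference point for these distributions, the expected TMRCA follows from summing the expected inter-coalescence times E[Tm]. A sketch using the standard coalescent with constant haploid size N (a simplification of the varying-size N(t) setting treated in the article):

```python
from fractions import Fraction

def expected_tmrca(n, N):
    """E[TMRCA] for n lineages in a constant haploid population of size N:
    sum over k = 2..n of E[T_k] = 2N / (k (k - 1)) generations (exact arithmetic)."""
    return sum(Fraction(2 * N, k * (k - 1)) for k in range(2, n + 1))

# telescoping the sum gives the closed form 2N (1 - 1/n)
```
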

  4. Asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size.

    PubMed

    Chen, Hua; Chen, Kun

    2013-07-01

    The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by Tm the mth coalescent time, when m + 1 lineages coalesce into m lineages, and An(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, An(t), and the coalescence times, Tm, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n - An(t) follows a Poisson distribution, and as m → n, n(n−1)Tm/2N(0) follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference.

  5. Roots Revealed - Neutron imaging insight of spatial distribution, morphology, growth and function

    NASA Astrophysics Data System (ADS)

    Warren, J.; Bilheux, H.; Kang, M.; Voisin, S.; Cheng, C.; Horita, J.; Perfect, E.

    2013-05-01

    Root production, distribution and turnover are not easily measured, yet their dynamics are an essential part of understanding and modeling ecosystem response to changing environmental conditions. Root age, order, morphology and mycorrhizal associations all regulate root uptake of water and nutrients, which, along with root distribution, determines plant response to, and impact on, its local environment. Our objectives were to demonstrate the ability to non-invasively monitor fine root distribution, root growth and root functionality in Zea mays L. (maize) and Panicum virgatum L. (switchgrass) seedlings using neutron imaging. Plants were propagated in aluminum chambers containing sand, then placed into a high flux cold neutron beam line. Dynamics of root distribution and growth were assessed by collecting consecutive CCD radiographs through time. Root functionality was assessed by tracking individual root uptake of water (H2O) or deuterium oxide (D2O) through time. Since neutrons are strongly scattered by H atoms, but not D atoms, biological materials such as plants are prime candidates for neutron imaging. 2D and 3D neutron radiography readily illuminated root structure, root growth, and relative plant and soil water content. Fungal hyphae associated with the roots were also visible and appeared as dark masses, since their diameter was likely several orders of magnitude less than the ~100 μm resolution of the detector. The 2D pulse-chase irrigation experiments with H2O and D2O successfully allowed observation of uptake and mass flow of water within the root system. Water flux within individual roots responded differentially to foliar illumination based on internal water potential gradients, illustrating the ability to track root functionality based on root size, order and distribution within the soil. Figure: (L) neutron image of switchgrass growing in sandy soil with 100 μm diameter roots; (R) 3D reconstruction of a maize seedling following neutron tomography.

  6. A model for AGN variability on multiple time-scales

    NASA Astrophysics Data System (ADS)

    Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.

    2018-05-01

    We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
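
A first-order structure function of the kind compiled here can be computed directly from a sampled light curve; a minimal sketch (the pairing and binning scheme is ours, not the paper's):

```python
def structure_function(mags, times, tau, tol):
    """First-order structure function: mean |Δmag| over all epoch pairs whose
    time separation lies within tol of the lag tau."""
    diffs = [abs(mags[j] - mags[i])
             for i in range(len(times)) for j in range(i + 1, len(times))
             if abs((times[j] - times[i]) - tau) <= tol]
    return sum(diffs) / len(diffs) if diffs else float("nan")
```

Evaluating this over a grid of lags, from days to the longest baselines available, produces the kind of SF plot used to combine the variability measurements.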

  7. Maximum entropy approach to H -theory: Statistical mechanics of hierarchical systems

    NASA Astrophysics Data System (ADS)

    Vasconcelos, Giovani L.; Salazar, Domingos S. P.; Macêdo, A. M. S.

    2018-02-01

    A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem—representing the region where the measurements are made—in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017), 10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.
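
The innermost averaging step has a well-known one-level analogue (superstatistics): averaging the Boltzmann factor over a gamma-distributed inverse temperature yields a simple power law rather than a Fox H function. A sketch of that simpler case (our illustration of the compounding step, not the paper's full hierarchy):

```python
import math

def averaged_boltzmann(E, alpha, theta, nbeta=200000):
    """Numerically average exp(-beta * E) over a Gamma(alpha, scale=theta)
    distribution of the inverse temperature beta (midpoint rule)."""
    beta_max = theta * (alpha + 40.0)  # cutoff far in the gamma tail
    db = beta_max / nbeta
    norm = math.gamma(alpha) * theta ** alpha
    total = 0.0
    for i in range(1, nbeta + 1):
        b = (i - 0.5) * db
        total += (b ** (alpha - 1) * math.exp(-b / theta) / norm) * math.exp(-b * E) * db
    return total

# closed form of the same average: (1 + theta * E) ** (-alpha)
```
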

  8. Maximum entropy approach to H-theory: Statistical mechanics of hierarchical systems.

    PubMed

    Vasconcelos, Giovani L; Salazar, Domingos S P; Macêdo, A M S

    2018-02-01

    A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem-representing the region where the measurements are made-in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017)10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.

  9. Velocity distribution of electrons in time-varying low-temperature plasmas: progress in theoretical procedures over the past 70 years

    NASA Astrophysics Data System (ADS)

    Makabe, Toshiaki

    2018-03-01

    A time-varying low-temperature plasma sustained by electrical power at various frequencies has played a key role in the historical development of new technologies, such as gas lasers, ozonizers, micro display panels, dry processing of materials, and medical care, since World War II. Electrons in a time-modulated low-temperature plasma have a characteristic velocity spectrum, i.e. a velocity distribution dependent on the microscopic quantum characteristics of the feed gas molecule and on the external field strength and frequency. To solve and evaluate the time-varying velocity distribution, there are mainly two types of theoretical methods based on the classical, linear Boltzmann equation: the expansion method using orthogonal functions, and the procedure of non-expansional temporal evolution. Both methods have developed discontinuously and progressively in synchronization with those technological developments. In this review, we explore the historical development of the theoretical procedures to evaluate the electron velocity distribution in a time-varying low-temperature plasma over the past 70 years.
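
The field- and gas-dependence of the velocity spectrum is often illustrated by limiting forms of the energy distribution. A toy contrast (our illustration, not from the review): an unnormalized electron energy distribution of the form sqrt(eps)·exp(−(eps/eps0)^p) is Maxwellian-like for p = 1 and Druyvesteyn-like for p = 2, the latter depleted at high energies.

```python
import math

def eedf(eps, eps0, p):
    """Unnormalized electron energy distribution sqrt(eps) * exp(-(eps/eps0)**p);
    p = 1 is Maxwellian-like, p = 2 is Druyvesteyn-like."""
    return math.sqrt(eps) * math.exp(-(eps / eps0) ** p)
```
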

  10. Exploring the Dynamics of Transit Times and Subsurface Mixing in a Small Agricultural Catchment

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Heidbüchel, Ingo; Musolff, Andreas; Reinstorf, Frido; Fleckenstein, Jan H.

    2018-03-01

    The analysis of transit/residence time distributions (TTDs and RTDs) provides important insights into the dynamics of stream-water ages and subsurface mixing. These insights have significant implications for water quality. For a small agricultural catchment in central Germany, we use a 3D fully coupled surface-subsurface hydrological model to simulate water flow and perform particle tracking to determine flow paths and transit times. The TTDs of discharge, RTDs of storage and fractional StorAge Selection (fSAS) functions are computed and analyzed on a daily basis for a period of 10 years. Results show strong seasonal fluctuations of the median transit time of discharge and the median residence time, with the former being strongly related to the catchment wetness. Computed fSAS functions suggest systematic shifts of the discharge selection preference over four main periods: In the wet period, the youngest water in storage is preferentially selected, and this preference shifts gradually toward older ages of stored water when the catchment transitions into the drying, dry and wetting periods. These changes are driven by distinct shifts in the dominance of deeper flow paths and fast shallow flow paths. Changes in the shape of the fSAS functions can be captured by changes in the two parameters of the approximating Beta distributions, allowing the generation of continuous fSAS functions representing the general catchment behavior. These results improve our understanding of the seasonal dynamics of TTDs and fSAS functions for a complex real-world catchment and are important for interpreting solute export to the stream in a spatially implicit manner.
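
A Beta-shaped fSAS function of the kind described here can be sketched as the Beta CDF over the normalized age-ranked storage P (0 = youngest, 1 = oldest); a shape parameter a below 1 encodes the wet-period preference for young water. The parameter values below are illustrative, not the catchment's fitted ones:

```python
import math

def beta_cdf(p, a, b, n=20000):
    """Cumulative Beta(a, b) distribution at p, via midpoint integration of the pdf."""
    bfun = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    dx = p / n
    return sum(((i - 0.5) * dx) ** (a - 1) * (1.0 - (i - 0.5) * dx) ** (b - 1)
               for i in range(1, n + 1)) * dx / bfun

# fSAS value above the 1:1 line at small P means young water is over-selected
```
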

  11. Single-diffractive production of dijets within the kt-factorization approach

    NASA Astrophysics Data System (ADS)

    Łuszczak, Marta; Maciuła, Rafał; Szczurek, Antoni; Babiarz, Izabela

    2017-09-01

    We discuss single-diffractive production of dijets. The cross section is calculated within the resolved Pomeron picture, for the first time in the kt-factorization approach, neglecting the transverse momentum of the Pomeron. We use Kimber-Martin-Ryskin unintegrated parton (gluon, quark, antiquark) distributions in both the proton and in the Pomeron or subleading Reggeon. The unintegrated parton distributions are calculated based on conventional mmht2014nlo parton distribution functions in the proton and the H1 Collaboration diffractive parton distribution functions used previously in the analysis of the diffractive structure function and dijets at HERA. For comparison, we present results of calculations performed within the collinear-factorization approach. Our results resemble those obtained in the next-to-leading-order approach. The calculation is (must be) supplemented by the so-called gap survival factor, which may, in general, depend on kinematical variables. We try to describe the existing data from the Tevatron and make detailed predictions for possible LHC measurements. Several differential distributions are calculated. The ĒT, η̄ and x̄P distributions are compared with the Tevatron data. A reasonable agreement is obtained for the first two distributions. The last one requires introducing a gap survival factor that depends on kinematical variables. We discuss how this phenomenological dependence on one kinematical variable may influence the dependence on other variables such as ĒT and η̄. Several distributions for the LHC are shown.

  12. The use of discontinuities and functional groups to assess relative resilience in complex systems

    USGS Publications Warehouse

    Allen, Craig R.; Gunderson, Lance; Johnson, A.R.

    2005-01-01

    It is evident when the resilience of a system has been exceeded and the system qualitatively changed. However, it is not clear how to measure resilience in a system prior to the demonstration that the capacity for resilient response has been exceeded. We argue that self-organizing human and natural systems are structured by a relatively small set of processes operating across scales in time and space. These structuring processes should generate a discontinuous distribution of structures and frequencies, where discontinuities mark the transition from one scale to another. Resilience is not driven by the identity of elements of a system, but rather by the functions those elements provide, and their distribution within and across scales. A self-organizing system that is resilient should maintain patterns of function within and across scales despite the turnover of specific elements (for example, species, cities). However, the loss of functions, or a decrease in functional representation at certain scales will decrease system resilience. It follows that some distributions of function should be more resilient than others. We propose that the determination of discontinuities, and the quantification of function both within and across scales, produce relative measures of resilience in ecological and other systems. We describe a set of methods to assess the relative resilience of a system based upon the determination of discontinuities and the quantification of the distribution of functions in relation to those discontinuities. © 2005 Springer Science+Business Media, Inc.
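
One simple way to operationalize "determination of discontinuities" is to flag unusually large gaps in a rank-ordered attribute (e.g. log body mass); the threshold rule below is a hypothetical stand-in for the formal gap tests used in this literature:

```python
def discontinuities(values, factor=2.0):
    """Indices of gaps in the sorted values that exceed factor times the mean gap;
    each flagged gap marks a candidate break between scale-bound groups."""
    vals = sorted(values)
    gaps = [b - a for a, b in zip(vals, vals[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return [i for i, g in enumerate(gaps) if g > factor * mean_gap]
```
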

  13. [Rank distributions in community ecology from the statistical viewpoint].

    PubMed

    Maksimov, V N

    2004-01-01

    Traditional statistical methods for defining empirical abundance distribution functions (population, biomass, production, etc.) of species in a community are applicable to processing the multivariate data contained in these quantitative community indices. In particular, evaluating the moments of the distribution suffices to condense the data contained in a list of species and their abundances. At the same time, the species should be ranked in the list in ascending rather than descending order of abundance, and the distribution models should be analyzed on the basis of data on abundant species only.

  14. Dynamics of a stochastic cell-to-cell HIV-1 model with distributed delay

    NASA Astrophysics Data System (ADS)

    Ji, Chunyan; Liu, Qun; Jiang, Daqing

    2018-02-01

    In this paper, we consider a stochastic cell-to-cell HIV-1 model with distributed delay. Firstly, we show that there is a global positive solution of this model before exploring its long-time behavior. Then sufficient conditions for extinction of the disease are established. Moreover, we obtain sufficient conditions for the existence of an ergodic stationary distribution of the model by constructing a suitable stochastic Lyapunov function. The stationary distribution implies that the disease is persistent in the mean. Finally, we provide some numerical examples to illustrate theoretical results.

  15. Critical thresholds for eventual extinction in randomly disturbed population growth models.

    PubMed

    Peckham, Scott D; Waymire, Edward C; De Leenheer, Patrick

    2018-02-16

    This paper considers several single species growth models featuring a carrying capacity, which are subject to random disturbances that lead to instantaneous population reduction at the disturbance times. This is motivated in part by growing concerns about the impacts of climate change. Our main goal is to understand whether or not the species can persist in the long run. We consider the discrete-time stochastic process obtained by sampling the system immediately after the disturbances, and find various thresholds for several modes of convergence of this discrete process, including thresholds for the absence or existence of a positively supported invariant distribution. These thresholds are given explicitly in terms of the intensity and frequency of the disturbances on the one hand, and the population's growth characteristics on the other. We also perform a similar threshold analysis for the original continuous-time stochastic process, and obtain a formula that allows us to express the invariant distribution for this continuous-time process in terms of the invariant distribution of the discrete-time process, and vice versa. Examples illustrate that these distributions can differ, and this sends a cautionary message to practitioners who wish to parameterize these and related models using field data. Our analysis relies heavily on a particular feature shared by all the deterministic growth models considered here, namely that their solutions exhibit an exponentially weighted averaging property between a function of the initial condition, and the same function applied to the carrying capacity. This property is due to the fact that these systems can be transformed into affine systems.
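
For the logistic model, the exponentially weighted averaging property mentioned above takes the form 1/x(t) = e^(−rt)/x0 + (1 − e^(−rt))/K, i.e. the function f(x) = 1/x of the state is an exponentially weighted average of f(x0) and f(K). A sketch verifying this against direct numerical integration (our own check, not the paper's code):

```python
import math

def logistic_rk4(x0, r, K, t, steps=10000):
    """Integrate dx/dt = r x (1 - x/K) with classical RK4."""
    f = lambda x: r * x * (1.0 - x / K)
    h = t / steps
    x = x0
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + h * k1 / 2.0)
        k3 = f(x + h * k2 / 2.0)
        k4 = f(x + h * k3)
        x += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return x

def logistic_closed_form(x0, r, K, t):
    """Exponentially weighted average of 1/x0 and 1/K, then inverted."""
    w = math.exp(-r * t)
    return 1.0 / (w / x0 + (1.0 - w) / K)
```

A random disturbance at time t simply resets x0 in this formula, which is what makes the sampled discrete-time process tractable.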

  16. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

    Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
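
While the exact counting distribution involves the incomplete gamma function, the mean count for a fixed sampling start s is just the integrated intensity. A sketch checking the numeric integral against the closed form (the symbols I0, tau, s and T are our notation, not the paper's):

```python
import math

def mean_counts(I0, tau, s, T, n=100000):
    """Mean Poisson counts in [s, s + T] for intensity I0 * exp(-t / tau), midpoint rule."""
    dt = T / n
    return sum(I0 * math.exp(-(s + (i - 0.5) * dt) / tau) for i in range(1, n + 1)) * dt

def mean_counts_closed(I0, tau, s, T):
    """Closed form of the same integral: I0 tau e^(-s/tau) (1 - e^(-T/tau))."""
    return I0 * tau * math.exp(-s / tau) * (1.0 - math.exp(-T / tau))
```

The paper's result additionally averages over a uniformly distributed s; that step is omitted here.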

  17. Interval Estimation of Seismic Hazard Parameters

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2017-03-01

    The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function in the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when the nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, with respect to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real-dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
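
Under the Poisson occurrence model with a Gutenberg-Richter magnitude distribution, the two hazard functions discussed here have simple point-estimate forms (the interval estimation is the paper's contribution and is not reproduced here); a sketch with illustrative parameter values:

```python
import math

def exceedance_probability(m, t, rate, b, m0):
    """P(at least one event with magnitude >= m in time t), with Poisson occurrences
    and Gutenberg-Richter rate lambda(m) = rate * 10**(-b * (m - m0))."""
    lam = rate * 10.0 ** (-b * (m - m0))
    return 1.0 - math.exp(-lam * t)

def mean_return_period(m, rate, b, m0):
    """Mean return period of events with magnitude >= m."""
    return 1.0 / (rate * 10.0 ** (-b * (m - m0)))
```
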

  18. Correlation functions in first-order phase transitions

    NASA Astrophysics Data System (ADS)

    Garrido, V.; Crespo, D.

    1997-09-01

    Most of the physical properties of systems underlying first-order phase transitions can be obtained from the spatial correlation functions. In this paper, we obtain expressions that allow us to calculate all the correlation functions from the droplet size distribution. Nucleation and growth kinetics is considered, and exact solutions are obtained for the case of isotropic growth by using self-similarity properties. The calculation is performed by using the particle size distribution obtained by a recently developed model (populational Kolmogorov-Johnson-Mehl-Avrami model). Since this model is less restrictive than that used in previously existing theories, the result is that the correlation functions can be obtained for any dependence of the kinetic parameters. The validity of the method is tested by comparison with the exact correlation functions, which had been obtained in the available cases by the time-cone method. Finally, the correlation functions corresponding to the microstructure developed in partitioning transformations are obtained.

  19. Estimating the Spatial Distribution of Groundwater Age Using Synoptic Surveys of Environmental Tracers in Streams

    NASA Astrophysics Data System (ADS)

    Gardner, W. P.

    2017-12-01

    A model which simulates tracer concentration in surface water as a function of the age distribution of groundwater discharge is used to characterize groundwater flow systems at a variety of spatial scales. We develop the theory behind the model and demonstrate its application in several groundwater systems of local to regional scale. A 1-D stream transport model, which includes advection, dispersion, gas exchange, first-order decay and groundwater inflow, is coupled to a lumped parameter model that calculates the concentration of environmental tracers in discharging groundwater as a function of the groundwater residence time distribution. The lumped parameters, which describe the residence time distribution, are allowed to vary spatially, and multiple environmental tracers can be simulated. This model allows us to calculate the longitudinal profile of tracer concentration in streams as a function of the spatially variable groundwater age distribution. By fitting model results to observations of stream chemistry and discharge, we can then estimate the spatial distribution of groundwater age. The volume of groundwater discharge to streams can be estimated using a subset of environmental tracers, applied tracers, synoptic stream gauging or other methods, and the age of groundwater then estimated using the previously calculated groundwater discharge and observed environmental tracer concentrations. Synoptic surveys of SF6, CFCs, 3H and 222Rn, along with measured stream discharge, are used to estimate the groundwater inflow distribution and mean age for regional scale surveys of the Berland River in west-central Alberta. We find that groundwater entering the Berland has observable age, and that the age estimated using our stream survey is of similar order to limited samples from groundwater wells in the region. Our results show that the stream can be used as an easily accessible location to constrain the regional scale spatial distribution of groundwater age.
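
    The lumped-parameter step described above can be sketched as a convolution of the tracer input history with a residence time distribution; the exponential distribution, time step and constant input below are illustrative assumptions, not the paper's actual tracers or parameters:

```python
import numpy as np

def groundwater_tracer_conc(c_in, tau_mean, dt, decay=0.0):
    """Concentration in discharging groundwater as a convolution of the
    input history c_in with an exponential residence time distribution of
    mean tau_mean (the lumped 'exponential model'), with optional
    first-order decay of the tracer.

    c_in[k] is the input concentration k*dt years before present.
    """
    n = len(c_in)
    ages = np.arange(n) * dt
    g = np.exp(-ages / tau_mean) / tau_mean          # residence time pdf
    w = g * np.exp(-decay * ages) * dt               # weight per age bin
    return float(np.sum(np.asarray(c_in) * w))

# constant, non-decaying input: output equals input (up to discretization)
c = groundwater_tracer_conc(c_in=[1.0] * 2000, tau_mean=20.0, dt=0.5)
```

    Fitting `tau_mean` (allowed to vary along the stream) so that modeled in-stream tracer profiles match observations is the inverse step the abstract describes.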

  20. Coherent optical determination of the leaf angle distribution of corn

    NASA Technical Reports Server (NTRS)

    Ulaby, F. T. (Principal Investigator); Pihlman, M.

    1981-01-01

    A coherent optical technique for the diffraction analysis of an image is presented. Developments in radar remote sensing show a need to understand plant geometry and its relationship to plant moisture, soil moisture, and the radar backscattering coefficient. A corn plant changes its leaf angle distribution, as a function of time, from a uniform distribution to one that is strongly vertical. It is shown that plant and soil moisture may have an effect on plant geometry.

  1. Calculations of heavy ion charge state distributions for nonequilibrium conditions

    NASA Technical Reports Server (NTRS)

    Luhn, A.; Hovestadt, D.

    1985-01-01

    Numerical calculations of the charge state distributions of test ions in a hot plasma under nonequilibrium conditions are presented. The mean ionic charges of heavy ions for finite residence times in an instantaneously heated plasma and for a non-Maxwellian electron distribution function are derived. The results are compared with measurements of the charge states of solar energetic particles, and it is found that neither of the two simple cases considered can explain the observations.

  2. Whole-brain activity maps reveal stereotyped, distributed networks for visuomotor behavior

    PubMed Central

    Portugues, Ruben; Feierstein, Claudia E.; Engert, Florian; Orger, Michael B.

    2014-01-01

    Most behaviors, even simple innate reflexes, are mediated by circuits of neurons spanning areas throughout the brain. However, in most cases, the distribution and dynamics of firing patterns of these neurons during behavior are not known. We imaged activity, with cellular resolution, throughout the whole brains of zebrafish performing the optokinetic response. We found a sparse, broadly distributed network that has an elaborate, but ordered, pattern, with a bilaterally symmetrical organization. Activity patterns fell into distinct clusters reflecting sensory and motor processing. By correlating neuronal responses with an array of sensory and motor variables, we find that the network can be clearly divided into distinct functional modules. Comparing aligned data from multiple fish, we find that the spatiotemporal activity dynamics and functional organization are highly stereotyped across individuals. These experiments reveal, for the first time in a vertebrate, the comprehensive functional architecture of the neural circuits underlying a sensorimotor behavior. PMID:24656252

  3. Structure and dynamics of the UO(2)(2+) ion in aqueous solution: an ab initio QMCF MD study.

    PubMed

    Frick, Robert J; Hofer, Thomas S; Pribil, Andreas B; Randolf, Bernhard R; Rode, Bernd M

    2009-11-12

    A comprehensive theoretical investigation of the structure and dynamics of the UO(2)(2+) ion in aqueous solution using double-zeta HF level quantum mechanical charge field molecular dynamics is presented. The quantum mechanical region includes two full layers of hydration and is embedded in a large box of explicitly treated water to achieve a realistic environment. A number of different functions, including segmental, radial, and angular distribution functions, are employed together with tilt- and Theta-angle distribution functions to describe the complex structural properties of this ion. These data were compared to recent experimental data obtained from LAXS and EXAFS and to the results of various theoretical calculations. Some properties were explained with the aid of charge distribution plots for the solute. The solvent dynamics around the ion were investigated using distance plots and mean ligand residence times, and the results were compared to experimental and theoretical data for related ions.

  4. An exactly solvable model of polymerization

    NASA Astrophysics Data System (ADS)

    Lushnikov, A. A.

    2017-08-01

    This paper considers the evolution of a polydisperse polymerizing system comprising g1-, g2-, …-mers carrying ϕ1, ϕ2, … functional groups that react with one another and bind the g-mers together. In addition, the g-mers are assumed to be added at random, one at a time, with a known rate depending on their mass g and functionality ϕ. Assuming that the rate of binding of two g-mers is proportional to the product of the numbers of nonreacted functional groups, the kinetic equation for the distribution of clusters (g-mers) over their mass and functionalities is formulated and then solved by applying the generating function method. In contrast to existing approaches, this kinetic equation operates with efficiencies proportional to the product of the numbers of active functional groups in the clusters rather than to the product of their masses. The evolution process is shown to reveal a phase transition: the emergence of a giant linked cluster (the gel) whose mass is comparable to the total mass of the whole polymerizing system. The time dependence of the moments of the distribution of linked components over their masses and functionalities is investigated. The polymerization process terminates by forming a residual spectrum of sol particles in addition to the gel.

  5. Kappa Distribution in a Homogeneous Medium: Adiabatic Limit of a Super-diffusive Process?

    NASA Astrophysics Data System (ADS)

    Roth, I.

    2015-12-01

    The classical statistical theory predicts that an ergodic, weakly interacting system like charged particles in the presence of electromagnetic fields, performing Brownian motions (characterized by small range deviations in phase space and short-term microscopic memory), converges to the Gibbs-Boltzmann statistics. Observation of distributions with kappa (power-law) tails in homogeneous systems contradicts this prediction and necessitates a renewed analysis of the basic axioms of the diffusion process: the characteristics of the transition probability density function (pdf) for a single interaction, with the possibility of a non-Markovian process and non-local interaction. The non-local, Lévy walk deviation is related to the non-extensive statistical framework. Particles bouncing along (solar) magnetic field lines with evolving pitch angles, phases and velocities, as they interact resonantly with waves, undergo energy changes at undetermined time intervals, satisfying these postulates. The dynamic evolution of a general continuous time random walk is determined by the pdfs of jumps and waiting times, resulting in a fractional Fokker-Planck equation with non-integer derivatives whose solution is given by a Fox H-function. The resulting procedure involves fractional calculus, well known although not frequently used in physics, while the local, Markovian process recasts the evolution into the standard Fokker-Planck equation. Solution of the fractional Fokker-Planck equation with the help of the Mellin transform and evaluation of its residues at the poles of its Gamma functions results in a slowly converging sum with power laws. It is suggested that these tails form the kappa function. Gradual vs. impulsive solar electron distributions serve as prototypes of this description.
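
    The heavy-tailed waiting times postulated for such a continuous time random walk can be illustrated with inverse-CDF sampling; the tail exponent and cutoff below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_waiting_times(n, alpha=1.5, t0=1.0):
    """Draw heavy-tailed waiting times with survival P(t > x) = (t0/x)**alpha
    (Pareto), the ingredient that turns an ordinary random walk into a
    non-Markovian CTRW governed by a fractional Fokker-Planck equation."""
    u = rng.random(n)
    return t0 * u ** (-1.0 / alpha)       # inverse-CDF sampling

waits = sample_waiting_times(100_000)
frac_gt_10 = np.mean(waits > 10.0)        # theory: 10**-1.5, about 0.032
```

    Because `alpha < 2` the mean waiting time diverges for `alpha <= 1` and the variance for `alpha <= 2`, which is what breaks the Gibbs-Boltzmann convergence argument discussed in the abstract.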

  6. Dusty Pair Plasma—Wave Propagation and Diffusive Transition of Oscillations

    NASA Astrophysics Data System (ADS)

    Atamaniuk, Barbara; Turski, Andrzej J.

    2011-11-01

    The crucial point of the paper is the relation between the equilibrium distributions of plasma species and the type of propagation or diffusive transition of the plasma response to a disturbance. The paper contains a unified treatment of disturbance propagation (transport) in linearized Vlasov electron-positron and fullerene pair plasmas containing charged dust impurities, based on space-time convolution integral equations. Electron-positron-dust/ion (e-p-d/i) plasmas are rather widespread in nature. Space-time responses of multi-component linearized Vlasov plasmas are invoked on the basis of multiple integral equations. An initial-value problem for the Vlasov-Poisson/Ampère equations is reduced to one multiple integral equation, and the solution is expressed in terms of a forcing function and its space-time convolution with the resolvent kernel. The forcing function is responsible for the initial disturbance and the resolvent is responsible for the equilibrium velocity distributions of plasma species. By use of the resolvent equations, time-reversibility, space-reflexivity and other symmetries are revealed. The symmetries carry physical properties of Vlasov pair plasmas, e.g., conservation laws. By properly choosing equilibrium distributions for dusty pair plasmas, we can reduce the resolvent equation to (i) undamped dispersive wave equations, or (ii) diffusive transport equations of oscillations.

  7. RECONCILIATION OF WAITING TIME STATISTICS OF SOLAR FLARES OBSERVED IN HARD X-RAYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aschwanden, Markus J.; McTiernan, James M., E-mail: aschwanden@lmsal.co, E-mail: jimm@ssl.berkeley.ed

    2010-07-10

    We study the waiting time distributions of solar flares observed in hard X-rays with ISEE-3/ICE, HXRBS/SMM, WATCH/GRANAT, BATSE/CGRO, and RHESSI. Although discordant results and interpretations have been published earlier, based on relatively small ranges (<2 decades) of waiting times, we find that all observed distributions, spanning over 6 decades of waiting times (Δt ≈ 10⁻³-10³ hr), can be reconciled with a single distribution function, N(Δt) ∝ λ₀(1 + λ₀Δt)⁻², which has a power-law slope of p ≈ 2.0 at large waiting times (Δt ≈ 1-1000 hr) and flattens out at short waiting times Δt ≲ 1/λ₀.
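
    A short sketch of the reconciling distribution quoted above, checking its normalization numerically and sampling from it by inverting its CDF; λ₀ = 1 is an arbitrary illustrative rate:

```python
import numpy as np

def waiting_time_pdf(dt, lam0):
    """N(dt) = lam0 * (1 + lam0*dt)**-2: normalized, power-law slope p = 2
    at large dt, flattening out below dt ~ 1/lam0."""
    return lam0 * (1.0 + lam0 * dt) ** -2.0

def sample_dt(n, lam0, rng):
    # invert the CDF F(dt) = 1 - 1/(1 + lam0*dt)
    u = rng.random(n)
    return u / ((1.0 - u) * lam0)

# numerical normalization check on a log-spaced grid (trapezoid rule)
dt = np.logspace(-4.0, 6.0, 200_001)
pdf = waiting_time_pdf(dt, lam0=1.0)
area = float(np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(dt)))

# the median waiting time of this distribution is exactly 1/lam0
med = float(np.median(sample_dt(100_000, 1.0, np.random.default_rng(1))))
```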

  8. Aging ballistic Lévy walks

    NASA Astrophysics Data System (ADS)

    Magdziarz, Marcin; Zorawik, Tomasz

    2017-02-01

    Aging can be observed for numerous physical systems. In such systems statistical properties [like the probability distribution, mean square displacement (MSD), and first-passage time] depend on the time span t_a between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite their similarities these models react very differently to the delay t_a. Aging weakly affects the shape of the probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover, for the wait-first jump model we observe a different behavior of the MSD when t_a ≪ t and when t_a ≫ t.

  9. The Time-Dependent Wavelet Spectrum of HH 1 and 2

    NASA Astrophysics Data System (ADS)

    Raga, A. C.; Reipurth, B.; Esquivel, A.; González-Gómez, D.; Riera, A.

    2018-04-01

    We have calculated the wavelet spectra of four epochs (spanning ≈20 yr) of Hα and [S II] HST images of HH 1 and 2. From these spectra we calculated the distribution functions of the (angular) radii of the emission structures. We found that the size distributions have maxima (corresponding to the characteristic sizes of the observed structures) with radii that are logarithmically spaced, with factors of ≈2-3 between successive peaks. The positions of these peaks generally showed small shifts towards larger sizes as a function of time. This result indicates that the structures of HH 1 and 2 have a general expansion (seen at all scales), and/or are the result of a sequence of merging events resulting in the formation of knots with larger characteristic sizes.

  10. Transmission of electric fields due to distributed cloud charges in the atmosphere-ionosphere system

    NASA Astrophysics Data System (ADS)

    Paul, Suman; De, S. S.; Haldar, D. K.; Guha, G.

    2017-10-01

    The transmission of electric fields in the lower atmosphere by thunderclouds with a suitable charge distribution profile has been modeled. The electromagnetic responses of the atmosphere are described through Maxwell's equations together with a time-varying source charge distribution. The conductivities are taken to be exponentially graded functions of altitude. The radial and vertical electric field components are derived for the isotropic, anisotropic and thundercloud regions. The analytical solutions for the total Maxwell current which flows from the cloud into the ionosphere under DC and quasi-static conditions are obtained for the isotropic region. We found that the effect of charge distribution in thunderclouds produced by lightning discharges diminishes rapidly with increasing altitude. Also, it is found that the time for the Maxwell current to reach its maximum is longer at higher altitudes.

  11. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics and is well suited to GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling named GOTHIC, which adopts both the tree method and the hierarchical time step. The code adopts some adaptive optimizations by monitoring the execution time of each function on the fly, and minimizes the time-to-solution by balancing the measured time of multiple functions. Results of performance measurements with realistic particle distributions performed on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, which are representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup of around 3-5 times compared to the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single precision peak performance of the GPU.
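
    Hierarchical (block) time stepping can be sketched by quantizing each particle's required step to a power-of-two fraction of the maximum step, so that groups of particles advance on shared levels; this is a generic illustration, not GOTHIC's actual implementation:

```python
import numpy as np

def assign_block_step(dt_required, dt_max):
    """Round each particle's required time step down to dt_max / 2**n, the
    power-of-two hierarchy used by block (hierarchical) time stepping.
    Particles on coarse levels are integrated only on the steps where their
    level is due, which is the source of the speedup over a shared step."""
    n = np.maximum(0, np.ceil(np.log2(dt_max / dt_required))).astype(int)
    return dt_max / 2 ** n

dt_req = np.array([0.9, 0.3, 0.12, 0.04])   # per-particle accuracy limits
steps = assign_block_step(dt_req, dt_max=1.0)
```

    With a shared time step, every particle would be forced onto the smallest required step; here each one keeps the largest power-of-two step not exceeding its own limit.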

  12. Time Dependent Density Functional Theory Calculations of Large Compact PAH Cations: Implications for the Diffuse Interstellar Bands

    NASA Technical Reports Server (NTRS)

    Weisman, Jennifer L.; Lee, Timothy J.; Salama, Farid; Head-Gordon, Martin; Kwak, Dochan (Technical Monitor)

    2002-01-01

    We investigate the electronic absorption spectra of several maximally pericondensed polycyclic aromatic hydrocarbon radical cations with time dependent density functional theory calculations. We find interesting trends in the vertical excitation energies and oscillator strengths for this series containing pyrene through circumcoronene, the largest species containing more than 50 carbon atoms. We discuss the implications of these new results for the size and structure distribution of the diffuse interstellar band carriers.

  13. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    NASA Astrophysics Data System (ADS)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ~175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB's high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantic) and inside (RANGE-JOIN implementation), as well as in its strategy of building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.

  14. Evaluation of Denoising Strategies to Address Motion-Correlated Artifacts in Resting-State Functional Magnetic Resonance Imaging Data from the Human Connectome Project

    PubMed Central

    Kandala, Sridhar; Nolan, Dan; Laumann, Timothy O.; Power, Jonathan D.; Adeyemo, Babatunde; Harms, Michael P.; Petersen, Steven E.; Barch, Deanna M.

    2016-01-01

    Like all resting-state functional connectivity data, the data from the Human Connectome Project (HCP) are adversely affected by structured noise artifacts arising from head motion and physiological processes. Functional connectivity estimates (Pearson's correlation coefficients) were inflated for high-motion time points and for high-motion participants. This inflation occurred across the brain, suggesting the presence of globally distributed artifacts. The degree of inflation was further increased for connections between nearby regions compared with distant regions, suggesting the presence of distance-dependent spatially specific artifacts. We evaluated several denoising methods: censoring high-motion time points, motion regression, the FMRIB independent component analysis-based X-noiseifier (FIX), and mean grayordinate time series regression (MGTR; as a proxy for global signal regression). The results suggest that FIX denoising reduced both types of artifacts, but left substantial global artifacts behind. MGTR significantly reduced global artifacts, but left substantial spatially specific artifacts behind. Censoring high-motion time points resulted in a small reduction of distance-dependent and global artifacts, eliminating neither type. All denoising strategies left differences between high- and low-motion participants, but only MGTR substantially reduced those differences. Ultimately, functional connectivity estimates from HCP data showed spatially specific and globally distributed artifacts, and the most effective approach to address both types of motion-correlated artifacts was a combination of FIX and MGTR. PMID:27571276
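
    Mean time series regression of the kind evaluated above (MGTR, a proxy for global signal regression) can be sketched as an ordinary least-squares fit per vertex, removing the globally shared component; the synthetic data below are an illustrative assumption:

```python
import numpy as np

def mean_timeseries_regression(data):
    """Regress the mean time series (plus an intercept) out of every
    vertex/voxel column of a (n_timepoints, n_vertices) array, returning
    residuals with the globally shared signal removed."""
    g = data.mean(axis=1)
    X = np.column_stack([np.ones_like(g), g])       # intercept + global mean
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return data - X @ beta

rng = np.random.default_rng(2)
clean = rng.standard_normal((200, 50))
global_artifact = rng.standard_normal((200, 1))     # shared across all vertices
noisy = clean + 3.0 * global_artifact
resid = mean_timeseries_regression(noisy)
```

    After the regression, the mean residual time series is zero by construction, which is exactly why MGTR removes global artifacts but cannot touch spatially specific ones.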

  15. Design of high temperature ceramic components against fast fracture and time-dependent failure using cares/life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.

    1995-08-01

    A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as a function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the lifetime of a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.
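
    The relations underlying such an SPT diagram can be sketched with the standard two-parameter Weibull model for fast fracture and the power-law SCG lifetime formula; the numerical parameters below are illustrative assumptions, not CARES/LIFE output:

```python
import math

def weibull_failure_prob(sigma, sigma0, m):
    """Two-parameter Weibull probability of fast fracture at stress sigma,
    with characteristic strength sigma0 and Weibull modulus m."""
    return 1.0 - math.exp(-((sigma / sigma0) ** m))

def time_to_failure(sigma, sigma_i, N, B):
    """Slow-crack-growth lifetime under constant stress sigma for a flaw of
    inert strength sigma_i, using the standard power-law SCG relation
    t_f = B * sigma_i**(N - 2) / sigma**N (N, B are SCG material
    parameters of the kind estimated from dynamic fatigue data)."""
    return B * sigma_i ** (N - 2) / sigma ** N

pf = weibull_failure_prob(sigma=300.0, sigma0=400.0, m=10.0)
tf = time_to_failure(sigma=100.0, sigma_i=300.0, N=20.0, B=1.0e-3)
```

    Sweeping the applied stress and failure probability level through these two relations traces out the stress-failure probability-time to failure surface that the SPT diagram summarizes.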

  16. Unified gas-kinetic scheme with multigrid convergence for rarefied flow study

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2017-09-01

    The unified gas kinetic scheme (UGKS) is based on direct modeling of gas dynamics on the mesh size and time step scales. With the modeling of particle transport and collision in a time-dependent flux function in a finite volume framework, the UGKS can connect the flow physics smoothly from kinetic particle transport to hydrodynamic wave propagation. In comparison with the direct simulation Monte Carlo (DSMC) method, the current equation-based UGKS can implement implicit techniques in the updates of macroscopic conservative variables and microscopic distribution functions. The implicit UGKS significantly increases the convergence speed for steady flow computations, especially in the highly rarefied and near continuum regimes. In order to further improve the computational efficiency, for the first time, a geometric multigrid technique is introduced into the implicit UGKS, where the prediction step for the equilibrium state and the evolution step for the distribution function are both treated with multigrid acceleration. More specifically, a full approximate nonlinear system is employed in the prediction step for fast evaluation of the equilibrium state, and a correction linear equation is solved in the evolution step for the update of the gas distribution function. As a result, the convergence speed is greatly improved in all flow regimes, from the rarefied to the continuum. The multigrid implicit UGKS (MIUGKS) is used in non-equilibrium flow studies, including microflows, such as lid-driven cavity flow and the flow passing through a finite-length flat plate, and high-speed flows, such as supersonic flow over a square cylinder. The MIUGKS shows a 5-9 times efficiency increase over the previous implicit scheme. For low speed microflow, the efficiency of MIUGKS is several orders of magnitude higher than that of the DSMC. Even for hypersonic flow at Mach number 5 and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the DSMC method in obtaining a convergent steady state solution.

  17. Stochastic modeling of a serial killer

    PubMed Central

    Simkin, M.V.; Roychowdhury, V.P.

    2014-01-01

    We analyze the time pattern of the activity of a serial killer, who during twelve years had murdered 53 people. The plot of the cumulative number of murders as a function of time is of “Devil’s staircase” type. The distribution of the intervals between murders (step length) follows a power law with the exponent of 1.4. We propose a model according to which the serial killer commits murders when neuronal excitation in his brain exceeds certain threshold. We model this neural activity as a branching process, which in turn is approximated by a random walk. As the distribution of the random walk return times is a power law with the exponent 1.5, the distribution of the inter-murder intervals is thus explained. We illustrate analytical results by numerical simulation. Time pattern activity data from two other serial killers further substantiate our analysis. PMID:24721476

  18. Stochastic modeling of a serial killer.

    PubMed

    Simkin, M V; Roychowdhury, V P

    2014-08-21

    We analyze the time pattern of the activity of a serial killer, who during 12 years had murdered 53 people. The plot of the cumulative number of murders as a function of time is of "Devil's staircase" type. The distribution of the intervals between murders (step length) follows a power law with the exponent of 1.4. We propose a model according to which the serial killer commits murders when neuronal excitation in his brain exceeds certain threshold. We model this neural activity as a branching process, which in turn is approximated by a random walk. As the distribution of the random walk return times is a power law with the exponent 1.5, the distribution of the inter-murder intervals is thus explained. We illustrate analytical results by numerical simulation. Time pattern activity data from two other serial killers further substantiate our analysis.
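
    The random-walk mechanism invoked above can be checked directly: first return times of a symmetric random walk follow a power law with exponent 3/2. A small simulation sketch, with walk length and sample size chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def first_return_times(n_walks, max_steps=10_000):
    """First return-to-origin times of a symmetric +/-1 random walk; their
    distribution has a power-law tail with exponent 3/2, the mechanism the
    abstract uses to explain the ~1.4 exponent of inter-event intervals."""
    times = []
    for _ in range(n_walks):
        steps = rng.choice([-1, 1], size=max_steps)
        pos = np.cumsum(steps)
        hit = np.nonzero(pos == 0)[0]
        if hit.size:                        # walks not returning are censored
            times.append(int(hit[0]) + 1)   # array index -> elapsed steps
    return np.array(times)

t = first_return_times(2000)
frac_2 = np.mean(t == 2)                    # theory: P(T = 2) = 1/2
```

    Returns can only occur after an even number of steps, and half of all walks return at the first opportunity (T = 2); the rest spread out into the slowly decaying 3/2 tail.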

  19. Exact short-time height distribution for the flat Kardar-Parisi-Zhang interface

    NASA Astrophysics Data System (ADS)

    Smith, Naftali R.; Meerson, Baruch

    2018-05-01

    We determine the exact short-time distribution -ln P_f(H, t) = S_f(H)/√t of the one-point height H = h(x = 0, t) of an evolving 1+1 dimensional Kardar-Parisi-Zhang (KPZ) interface for a flat initial condition. This is achieved by combining (i) the optimal fluctuation method, (ii) a time-reversal symmetry of the KPZ equation in 1+1 dimension, and (iii) the recently determined exact short-time height distribution -ln P_st(H, t) = S_st(H)/√t of the stationary interface. In studying the large-deviation function S_st(H) of the latter, one encounters two branches: an analytic and a nonanalytic one. The analytic branch is nonphysical beyond a critical value of H where a second-order dynamical phase transition occurs. Here we show that, remarkably, it is the analytic branch of S_st(H) which determines the large-deviation function S_f(H) of the flat interface via a simple mapping S_f(H) = 2^(-3/2) S_st(4H).

  20. Anomalous transport in fluid field with random waiting time depending on the preceding jump length

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Li, Guo-Hua

    2016-11-01

    Anomalous (or non-Fickian) transport behaviors of particles have been widely observed in complex porous media. To capture the energy-dependent characteristics of the non-Fickian transport of a particle in flow fields, in the present paper a generalized continuous time random walk model whose waiting time probability distribution depends on the preceding jump length is introduced, and the corresponding master equation in Fourier-Laplace space for the distribution of particles is derived. As examples, two generalized advection-dispersion equations, for the Gaussian distribution and the Lévy flight, with the probability density function of the waiting time quadratically dependent on the preceding jump length, are obtained by applying the derived master equation. Project supported by the Foundation for Young Key Teachers of Chengdu University of Technology, China (Grant No. KYGG201414) and the Opening Foundation of Geomathematics Key Laboratory of Sichuan Province, China (Grant No. scsxdz2013009).

  1. Getting the tail to wag the dog: Incorporating groundwater transport into catchment solute transport models using rank StorAge Selection (rSAS) functions

    NASA Astrophysics Data System (ADS)

    Harman, C. J.

    2015-12-01

    Surface water hydrologic models are increasingly used to analyze the transport of solutes, such as nitrate, through the landscape. However, many of these models cannot adequately capture the effect of groundwater flow paths, which can have long travel times and accumulate legacy contaminants, releasing them to streams over decades. If these long lag times are not accounted for, the short-term efficacy of management activities to reduce nitrogen loads may be overestimated. Models that adopt a simple 'well-mixed' assumption, leading to an exponential transit time distribution at steady state, cannot adequately capture the broadly skewed nature of groundwater transit times in typical watersheds. Here I will demonstrate how StorAge Selection functions can be used to capture the long lag times of groundwater in a subwatershed-based hydrologic framework typical of models like SWAT, HSPF, HBV, PRMS and others. These functions can be selected and calibrated to reproduce historical data where available, but can also be fitted to the results of a steady-state groundwater transport model like MODFLOW/MODPATH, allowing those results to directly inform the parameterization of an unsteady surface water model. The long tails of the transit time distribution predicted by the groundwater model can then be completely captured by the surface water model. Examples of this application in the Chesapeake Bay watersheds and elsewhere will be given.

  2. Conserved actions, maximum entropy and dark matter haloes

    NASA Astrophysics Data System (ADS)

    Pontzen, Andrew; Governato, Fabio

    2013-03-01

    We use maximum entropy arguments to derive the phase-space distribution of a virialized dark matter halo. Our distribution function gives an improved representation of the end product of violent relaxation. This is achieved by incorporating physically motivated dynamical constraints (specifically on orbital actions) which prevent arbitrary redistribution of energy. We compare the predictions with three high-resolution dark matter simulations of widely varying mass. The numerical distribution function is accurately predicted by our argument, producing an excellent match for the vast majority of particles. The remaining particles constitute the central cusp of the halo (≲4 per cent of the dark matter). They can be accounted for within the presented framework once the short dynamical time-scales of the centre are taken into account.

  3. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Methods of computational physics in the problem of mathematical interpretation of laser investigations

    NASA Astrophysics Data System (ADS)

    Brodyn, M. S.; Starkov, V. N.

    2007-07-01

    It is shown that in laser experiments performed with an 'imperfect' setup, where instrumental distortions are considerable, sufficiently accurate results can be obtained by modern methods of computational physics. It is found for the first time that a new instrumental function, the 'cap' function, a close relative of the Gaussian curve, is particularly well suited to laser experiments. A new mathematical model of the measurement path and a carefully performed computational experiment show that a light beam transmitted through a mesoporous film actually has a narrower intensity distribution than the detected beam, and that the amplitude of the real intensity distribution is twice that of the measured intensity distribution.

  4. HYPERDIRE-HYPERgeometric functions DIfferential REduction: Mathematica-based packages for the differential reduction of generalized hypergeometric functions: Lauricella function FC of three variables

    NASA Astrophysics Data System (ADS)

    Bytev, Vladimir V.; Kniehl, Bernd A.

    2016-09-01

    We present a further extension of the HYPERDIRE project, which is devoted to the creation of a set of Mathematica-based program packages for manipulations with Horn-type hypergeometric functions on the basis of differential equations. Specifically, we present the implementation of the differential reduction for the Lauricella function FC of three variables.
    Catalogue identifier: AEPP_v4_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEPP_v4_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 243461
    No. of bytes in distributed program, including test data, etc.: 61610782
    Distribution format: tar.gz
    Programming language: Mathematica.
    Computer: All computers running Mathematica.
    Operating system: Operating systems running Mathematica.
    Classification: 4.4.
    Does the new version supersede the previous version?: No, it significantly extends the previous version.
    Nature of problem: Reduction of the hypergeometric function FC of three variables to a set of basis functions.
    Solution method: Differential reduction.
    Reasons for new version: The extension package allows the user to handle the Lauricella function FC of three variables.
    Summary of revisions: The previous version remains unchanged.
    Running time: Depends on the complexity of the problem.

  5. Distribution of the anticancer drugs doxorubicin, mitoxantrone and topotecan in tumors and normal tissues.

    PubMed

    Patel, Krupa J; Trédan, Olivier; Tannock, Ian F

    2013-07-01

    Pharmacokinetic analyses estimate the mean concentration of drug within a given tissue as a function of time, but do not give information about the spatial distribution of drugs within that tissue. Here, we compare the time-dependent spatial distribution of three anticancer drugs within tumors, heart, kidney, liver and brain. Mice bearing various xenografts were treated with doxorubicin, mitoxantrone or topotecan. At various times after injection, tumors and samples of heart, kidney, liver and brain were excised. Within solid tumors, the distribution of doxorubicin, mitoxantrone and topotecan was limited to perivascular regions at 10 min after administration and the distance from blood vessels at which drug intensity fell to half was ~25-75 μm. Although drug distribution improved after 3 and 24 h, there remained a significant decrease in drug fluorescence with increasing distance from tumor blood vessels. Drug distribution was relatively uniform in the heart, kidney and liver with substantially greater perivascular drug uptake than in tumors. There was significantly higher total drug fluorescence in the liver than in tumors after 10 min, 3 and 24 h. Little to no drug fluorescence was observed in the brain. There are marked differences in the spatial distributions of three anticancer drugs within tumor tissue and normal tissues over time, with greater exposure to most normal tissues and limited drug distribution to many cells in tumors. Studies of the spatial distribution of drugs are required to complement pharmacokinetic data in order to better understand and predict drug effects and toxicities.

  6. A surface renewal model for unsteady-state mass transfer using the generalized Danckwerts age distribution function

    PubMed Central

    Horvath, Isabelle R.

    2018-01-01

    The recently derived steady-state generalized Danckwerts age distribution is extended to unsteady-state conditions. For three different wind speeds used by researchers on air–water heat exchange on the Heidelberg Aeolotron, calculations reveal that the distribution has a sharp peak during the initial moments, but flattens out and acquires a bell-shaped character with process time, with the time taken to attain a steady-state profile being a strong and inverse function of wind speed. With increasing wind speed, the age distribution narrows significantly, its skewness decreases and its peak becomes larger. The mean eddy renewal time increases linearly with process time initially but approaches a final steady-state value asymptotically, which decreases dramatically with increased wind speed. Using the distribution to analyse the transient absorption of a gas into a large body of liquid, assuming negligible gas-side mass-transfer resistance, estimates are made of the gas-absorption and dissolved-gas transfer coefficients for oxygen absorption in water at 25°C for the three different wind speeds. Under unsteady-state conditions, these two coefficients show an inverse behaviour, indicating a heightened accumulation of dissolved gas in the surface elements, especially during the initial moments of absorption. However, the two mass-transfer coefficients start merging together as the steady state is approached. Theoretical predictions of the steady-state mass-transfer coefficient or transfer velocity are in fair agreement (average absolute error of prediction = 18.1%) with some experimental measurements of the same for the nitrous oxide–water system at 20°C that were made in the Heidelberg Aeolotron. PMID:29892429
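    For reference, the classic Danckwerts limit that the generalized age distribution reduces to at steady state can be computed in a few lines. The renewal rate below is an assumed, wind-speed-dependent value; the diffusivity is a standard literature figure for O2 in water at 25°C.

```python
import math

# Classic Danckwerts surface-renewal model: surface-element ages are
# exponentially distributed, phi(t) = s * exp(-s * t), and the liquid-side
# mass-transfer coefficient is k_L = sqrt(D * s).

D_O2 = 2.1e-9        # diffusivity of O2 in water at 25 C, m^2/s (literature value)
s = 2.0              # surface renewal rate, 1/s (assumed; increases with wind speed)

k_L = math.sqrt(D_O2 * s)       # transfer velocity, m/s
mean_eddy_age = 1.0 / s         # mean surface-element age, s
```

    Because k_L grows as the square root of the renewal rate, a higher wind speed (larger s, shorter mean eddy renewal time) directly increases the predicted transfer velocity, consistent with the trend described in the abstract.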

  7. Functional Literacy and Continuing Education by Television

    ERIC Educational Resources Information Center

    Paiva e Souza, Alfredina de

    1970-01-01

    As a result of a pilot project (in Rio de Janeiro) of functional literacy for adolescents and adults by television, 90 percent of the students in experimental 'tele-classes' became literate with 36 broadcasts of 20 minutes each, distributed over three months, three times each week, supported by 50 minutes of discussion and other activities…

  8. Decoupling function and anatomy in atlases of functional connectivity patterns: language mapping in tumor patients.

    PubMed

    Langs, Georg; Sweet, Andrew; Lashkari, Danial; Tie, Yanmei; Rigolo, Laura; Golby, Alexandra J; Golland, Polina

    2014-12-01

    In this paper we construct an atlas that summarizes functional connectivity characteristics of a cognitive process from a population of individuals. The atlas encodes functional connectivity structure in a low-dimensional embedding space that is derived from a diffusion process on a graph that represents correlations of fMRI time courses. The functional atlas is decoupled from the anatomical space, and thus can represent functional networks with variable spatial distribution in a population. In practice the atlas is represented by a common prior distribution for the embedded fMRI signals of all subjects. We derive an algorithm for fitting this generative model to the observed data in a population. Our results in a language fMRI study demonstrate that the method identifies coherent and functionally equivalent regions across subjects. The method also successfully maps functional networks from a healthy population used as a training set to individuals whose language networks are affected by tumors. Copyright © 2014. Published by Elsevier Inc.

  9. Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density

    DOE PAGES

    Smallwood, David O.

    1997-01-01

    The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
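
    The ZMNL step mentioned above is, in essence, a percentile transform: pass a (spectrally shaped) Gaussian sequence through y = F_target^{-1}(Phi(x)). A hedged sketch for an exponential target marginal (the exponential choice and the parameters are illustrative, not the paper's example):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def zmnl_exponential(x, scale=1.0):
    """Zero-memory nonlinearity: map standard-normal samples to an
    exponential marginal via y = F_target^{-1}(Phi(x))."""
    u = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in x])  # Phi(x)
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    return -scale * np.log(1.0 - u)       # inverse CDF of the exponential

x = rng.standard_normal(200_000)          # Gaussian input
y = zmnl_exponential(x, scale=2.0)        # non-Gaussian output, same ranks
```

    Because the nonlinearity is monotone, rank order is preserved, but the power spectrum is distorted slightly; that is why methods of this family typically iterate between spectral shaping and the ZMNL step.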

  10. An exactly solvable coarse-grained model for species diversity

    NASA Astrophysics Data System (ADS)

    Suweis, Samir; Rinaldo, Andrea; Maritan, Amos

    2012-07-01

    We present novel analytical results concerning ecosystem species diversity that stem from a proposed coarse-grained neutral model based on birth-death processes. The relevance of the problem lies in the urgency for understanding and synthesizing both theoretical results from ecological neutral theory and empirical evidence on species diversity preservation. The neutral model of biodiversity deals with ecosystems at the same trophic level, where per capita vital rates are assumed to be species independent. Closed-form analytical solutions for the neutral theory are obtained within a coarse-grained model, where the only input is the species persistence time distribution. Our results pertain to: the probability distribution function of the number of species in the ecosystem, both in transient and in stationary states; the n-point connected time correlation function; and the survival probability, defined as the distribution of time spans to local extinction for a species randomly sampled from the community. Analytical predictions are also tested on empirical data from an estuarine fish ecosystem. We find that emerging properties of the ecosystem are very robust and do not depend on specific details of the model, with implications for biodiversity and conservation biology.

  11. Equipartition terms in transition path ensemble: Insights from molecular dynamics simulations of alanine dipeptide.

    PubMed

    Li, Wenjin

    2018-02-28

    The transition path ensemble consists of reactive trajectories and possesses all the information necessary for understanding the mechanism and dynamics of important condensed phase processes. However, quantitative description of the properties of the transition path ensemble is far from being established. Here, with numerical calculations on a model system, the equipartition terms defined in thermal equilibrium were estimated for the first time in the transition path ensemble. It was not surprising to observe that the energy was not equally distributed among all the coordinates. However, the energies distributed on a pair of conjugated coordinates remained equal. Higher energies were observed to be distributed on several coordinates, which are highly coupled to the reaction coordinate, while the rest were almost equally distributed. In addition, the ensemble-averaged energy on each coordinate as a function of time was also quantified. These quantitative analyses of energy distributions provide new insights into the transition path ensemble.

  12. Angular distribution of ions and extreme ultraviolet emission in laser-produced tin droplet plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hong; Duan, Lian; Lan, Hui

    Angular-resolved ion time-of-flight spectra as well as extreme ultraviolet radiation in laser-produced tin droplet plasma are investigated experimentally and theoretically. Tin droplets with a diameter of 150 μm are irradiated by a pulsed Nd:YAG laser. The ion time-of-flight spectra measured from the plasma formed by laser irradiation of the tin droplets are interpreted in terms of a theoretical elliptical Druyvesteyn distribution to deduce ion density distributions including kinetic temperatures of the plasma. The opacity of the plasma for extreme ultraviolet radiation is calculated based on the deduced ion densities and temperatures, and the angular distribution of extreme ultraviolet radiation is expressed as a function of the opacity using the Beer–Lambert law. Our results show that the calculated angular distribution of extreme ultraviolet radiation is in satisfactory agreement with the experimental data.

  13. A Distributed Simulation Facility to Support Human Factors Research in Advanced Air Transportation Technology

    NASA Technical Reports Server (NTRS)

    Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.

    1998-01-01

    A distributed real-time simulation of the civil air traffic environment developed to support human factors research in advanced air transportation technology is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce workload for human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to support a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.

  14. Angular distribution of ions and extreme ultraviolet emission in laser-produced tin droplet plasma

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Wang, Xinbing; Duan, Lian; Lan, Hui; Chen, Ziqi; Zuo, Duluo; Lu, Peixiang

    2015-05-01

    Angular-resolved ion time-of-flight spectra as well as extreme ultraviolet radiation in laser-produced tin droplet plasma are investigated experimentally and theoretically. Tin droplets with a diameter of 150 μm are irradiated by a pulsed Nd:YAG laser. The ion time-of-flight spectra measured from the plasma formed by laser irradiation of the tin droplets are interpreted in terms of a theoretical elliptical Druyvesteyn distribution to deduce ion density distributions including kinetic temperatures of the plasma. The opacity of the plasma for extreme ultraviolet radiation is calculated based on the deduced ion densities and temperatures, and the angular distribution of extreme ultraviolet radiation is expressed as a function of the opacity using the Beer-Lambert law. Our results show that the calculated angular distribution of extreme ultraviolet radiation is in satisfactory agreement with the experimental data.
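
    The Beer–Lambert step can be sketched as follows. The ellipsoidal plume geometry and the opacity value below are illustrative assumptions for the sake of the example, not the paper's deduced values:

```python
import numpy as np

# Toy Beer-Lambert estimate of the EUV angular distribution: emission from
# the plasma core is attenuated along a chord whose length depends on the
# viewing angle theta (measured from the target normal).

kappa = 80.0             # effective opacity, 1/cm (assumed)
a, b = 0.02, 0.01        # semi-axes of an ellipsoidal plasma, cm (assumed)

theta = np.linspace(0.0, np.pi / 2, 91)
# chord length through the ellipse seen at angle theta
ell = a * b / np.sqrt((b * np.cos(theta)) ** 2 + (a * np.sin(theta)) ** 2)
I = np.exp(-kappa * ell)          # transmitted fraction, Beer-Lambert law
```

    Directions with shorter chords through the plasma transmit a larger fraction of the radiation, so self-absorption alone already imprints an angular dependence on the observed EUV emission.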

  15. Inversion Analysis of Postseismic Deformation in Poroelastic Material Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Kawamoto, S.; Ito, T.; Hirahara, K.

    2005-12-01

    Following a large earthquake, postseismic deformations in the focal source region have been observed by several geodetic measurements. To explain the postseismic deformations, researchers have proposed physical mechanisms known as afterslip, viscoelastic relaxation and poroelastic rebound. There are many studies of postseismic deformation, but few address poroelastic rebound. We therefore calculated the postseismic deformations caused by afterslip and poroelastic rebound using the modified FEM code 'CAMBIOT3D', originally developed by the Geotechnical Laboratory of Gunma University, Japan (2003). The postseismic deformations caused by both afterslip and poroelastic rebound are characteristically different from those caused by afterslip alone. This suggests that the slip distributions on the fault estimated from geodetic measurements also change. We therefore developed an inversion method that accounts for both afterslip and poroelastic rebound using FEM, to estimate the difference in slip distributions on the fault quantitatively. The inversion analysis takes the following steps. First, we calculate the coseismic and postseismic response functions on each fault segment induced by a unit slip, where the postseismic response functions represent the poroelastic rebound. Next, we form the observation equations at each time step using the response functions and estimate the spatiotemporal distribution of slip on the fault. In solving this inverse problem, we assume that the slip distributions on the fault are smooth in space and time except for rapid (coseismic) change. Because hyperparameters that control the smoothness of the spatial and temporal distributions of slip are needed, we determine the best hyperparameters using ABIC. In this presentation, we introduce an example of analysis results obtained with this method.

  16. The DIAS/CEOS Water Portal, distributed system using brokering architecture

    NASA Astrophysics Data System (ADS)

    Miura, Satoko; Sekioka, Shinichi; Kuroiwa, Kaori; Kudo, Yoshiyuki

    2015-04-01

    The DIAS/CEOS Water Portal is one of the DIAS (Data Integration and Analysis System, http://www.editoria.u-tokyo.ac.jp/projects/dias/?locale=en_US) systems for data distribution, serving users including, but not limited to, scientists, decision makers and officers such as river administrators. This portal has two main functions: one is to search and access data, and the other is to register and share use cases which draw on datasets provided via this portal. This presentation focuses on the first function, searching and accessing data. The portal system is distributed in the sense that, while the portal itself is located in Tokyo, the data reside in archive centers which are globally distributed. For example, some in-situ data are archived at the National Center for Atmospheric Research (NCAR) Earth Observing Laboratory in Boulder, Colorado, USA. The NWP station time series and global gridded model output data are archived at the Max Planck Institute for Meteorology (MPIM) in cooperation with the World Data Center for Climate in Hamburg, Germany. Part of the satellite data is archived in DIAS storage at the University of Tokyo, Japan. The portal itself does not store data. Instead, according to requests made by users on the web page, it retrieves data from the distributed data centers on the fly and lets users download the data and view rendered images/plots. Although some data centers have unique metadata formats and/or data search protocols, the portal's brokering function enables users to search across various data centers at one time, like one-stop shopping. The portal is also connected to other data brokering systems, including the GEOSS DAB (Discovery and Access Broker). As a result, users can search over thousands of datasets and millions of files at one time. The system mainly relies on the open-source software GI-cat (http://essi-lab.eu/do/view/GIcat), the OpenSearch protocol and the OPeNDAP protocol to enable the above functions. Details on how it works will be introduced during the presentation. Users can access the DIAS/CEOS Water Portal system at http://waterportal.ceos.org/.

  17. Multimission Telemetry Visualization (MTV) system: A mission applications project from JPL's Multimedia Communications Laboratory

    NASA Technical Reports Server (NTRS)

    Koeberlein, Ernest, III; Pender, Shaw Exum

    1994-01-01

    This paper describes the Multimission Telemetry Visualization (MTV) data acquisition/distribution system. MTV was developed by JPL's Multimedia Communications Laboratory (MCL) and designed to process and display digital, real-time, science and engineering data from JPL's Mission Control Center. The MTV system can be accessed using UNIX workstations and PCs over common datacom and telecom networks from worldwide locations. It is designed to lower data distribution costs while increasing data analysis functionality by integrating low-cost, off-the-shelf desktop hardware and software. MTV is expected to significantly lower the cost of real-time data display, processing, and distribution, and to allow for greater spacecraft safety and mission data access.

  18. The Density Functional Theory of Flies: Predicting distributions of interacting active organisms

    NASA Astrophysics Data System (ADS)

    Kinkhabwala, Yunus; Valderrama, Juan; Cohen, Itai; Arias, Tomas

    On October 2nd, 2016, 52 people were crushed in a stampede when a crowd panicked at a religious gathering in Ethiopia. The ability to predict the state of a crowd and whether it is susceptible to such transitions could help prevent such catastrophes. While current techniques such as agent-based models can predict transitions in the emergent behaviors of crowds, the assumptions used to describe the agents are often ad hoc and the simulations are computationally expensive, making their application to real-time crowd prediction challenging. Here, we pursue an orthogonal approach and ask whether a reduced set of variables, such as the local densities, is sufficient to describe the state of a crowd. Inspired by the theoretical framework of Density Functional Theory, we have developed a system that uses only measurements of local densities to extract two independent crowd behavior functions: (1) preferences for locations and (2) interactions between individuals. With these two functions, we have accurately predicted how a model system of walking Drosophila melanogaster distributes itself in an arbitrary 2D environment. In addition, this density-based approach measures properties of the crowd from observations of the crowd itself, without any knowledge of the detailed interactions, and thus it can make predictions about the resulting distributions of these flies in arbitrary environments, in real time. This research was supported in part by ARO W911NF-16-1-0433.

  19. Vlasov Treatment of Coherent Synchrotron Radiation from Arbitrary Planar Orbits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warnock, R

    2004-09-22

    We study the influence of coherent synchrotron radiation (CSR) on particle bunches traveling on arbitrary planar orbits between parallel conducting plates. The plates represent shielding due to the vacuum chamber. The vertical distribution of charge is an arbitrary fixed function. Our goal is to follow the time evolution of the phase space distribution by solving the Vlasov-Maxwell equations in the time domain. This provides simulations with lower numerical noise than the macroparticle method, and allows one to study such issues as emittance degradation and microbunching due to CSR in bunch compressors. The fields excited by the bunch are computed in the laboratory frame from a new formula that leads to much simpler computations than the usual retarded potentials or Lienard-Wiechert potentials. The nonlinear Vlasov equation, formulated in the interaction picture, is integrated in the beam frame by approximating the Perron-Frobenius operator. The distribution function is represented by B-splines, in a scheme preserving positivity and normalization of the distribution. For application to a chicane bunch compressor we take steps to deal with energy chirp, an initial near-perfect correlation of energy with position in the bunch.

  20. Partonic quasidistributions of the proton and pion from transverse-momentum distributions

    NASA Astrophysics Data System (ADS)

    Broniowski, Wojciech; Arriola, Enrique Ruiz

    2018-02-01

    The parton quasidistribution functions (QDFs) of Ji have been found by Radyushkin to be directly related to the transverse momentum distributions (TMDs), to the pseudodistributions, and to the Ioffe-time distributions (ITDs). This makes the QDF results at finite longitudinal momentum of the hadron interesting in their own right. Moreover, the QDF-TMD relation provides a gateway to the pertinent QCD evolution, with respect to the resolution scale Q, for the QDFs. Using the Kwieciński evolution equations and well established parametrizations at a low initial scale, we analyze the QCD evolution of quark and gluon QDF components of the proton and the pion. We discuss the resulting breaking of the longitudinal-transverse factorization and show that it has little impact on QDFs at the relatively low scales presently accessible on the lattice, but the effect is visible in reduced ITDs at sufficiently large values of the Ioffe time. Sum rules involving derivatives of ITDs and moments of the parton distribution functions (PDFs) are applied to the European Twisted Mass Collaboration lattice data. This allows for a lattice determination of the transverse-momentum width of the TMDs from QDF studies.

  1. An asymptotic theory for cross-correlation between auto-correlated sequences and its application on neuroimaging data.

    PubMed

    Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng

    2018-04-20

    Functional connectivity is among the most important tools for studying the brain. The correlation coefficient between time series of different brain areas is the most popular measure of functional connectivity. In practice, the correlation coefficient assumes the data to be temporally independent; however, brain time series can exhibit significant temporal auto-correlation. We propose a widely applicable method for correcting temporal auto-correlation. We considered two types of time series models: (1) the auto-regressive-moving-average model, and (2) a nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient. These two types of models are the most commonly used in neuroscience studies, and we show that their asymptotic distributions share a unified expression. We verified the validity of our method and showed that it exhibits sufficient statistical power for detecting true correlation in numerical experiments. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. Our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlation in numerical experiments where existing methods measuring association (linear and nonlinear) fail. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity. Empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
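
    A widely used back-of-the-envelope version of such a correction is the Bartlett-style effective sample size; the sketch below illustrates the idea but is not the paper's asymptotic derivation:

```python
import numpy as np

rng = np.random.default_rng(2)

def autocorr(x, max_lag):
    """Sample autocorrelation at lags 1..max_lag."""
    x = x - x.mean()
    denom = float(np.dot(x, x))
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

def effective_n(x, y, max_lag=20):
    """Bartlett-style correction: 1/N_eff ~ (1/N)(1 + 2*sum rho_x(k)*rho_y(k))."""
    n = len(x)
    rx, ry = autocorr(x, max_lag), autocorr(y, max_lag)
    return float(min(n, max(2.0, n / (1.0 + 2.0 * np.sum(rx * ry)))))

# two independent AR(1) series: a naive test of r would be far too liberal
n, phi = 2000, 0.9
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()
    y[t] = phi * y[t - 1] + rng.standard_normal()

r = np.corrcoef(x, y)[0, 1]
z_corrected = r * np.sqrt(effective_n(x, y))   # approx N(0, 1) under the null
```

    With strong auto-correlation in both series, the effective sample size is an order of magnitude below the nominal one, which is why uncorrected correlation tests over-detect "connectivity".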

  2. Comparison of different functional EIT approaches to quantify tidal ventilation distribution.

    PubMed

    Zhao, Zhanqi; Yun, Po-Jen; Kuo, Yen-Liang; Fu, Feng; Dai, Meng; Frerichs, Inez; Möller, Knut

    2018-01-30

    The aim of the study was to examine the pros and cons of different types of functional EIT (fEIT) for quantifying tidal ventilation distribution in a clinical setting. fEIT images were calculated with (1) the standard deviation of the pixel time curve, (2) regression coefficients of global and local impedance time curves, or (3) mean tidal variations. To characterize temporal heterogeneity of tidal ventilation distribution, another fEIT image of pixel inspiration times is also proposed. fEIT-regression is very robust to signals with different phase information. When the respiratory signal must be distinguished from the heart-beat-related signal, or during high-frequency oscillatory ventilation, fEIT-regression is superior to the other types. fEIT-tidal variation is the most stable image type with respect to baseline shift. We recommend using this type of fEIT image for preliminary evaluation of the acquired EIT data. However, all these fEITs would be misleading in their assessment of ventilation distribution in the presence of temporal heterogeneity. The analysis software provided by the currently available commercial EIT equipment only offers either the fEIT of standard deviation or tidal variation. Considering the pros and cons of each fEIT type, we recommend embedding more types into the analysis software to allow physicians to deal with more complex clinical applications using on-line EIT measurements.
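
    The three image types can be sketched on a pixel time-series matrix as follows; the array shapes, synthetic data and breath-phase indices are hypothetical, for illustration only:

```python
import numpy as np

def feit_std(Z):
    """fEIT type (1): per-pixel standard deviation of the time curve."""
    return Z.std(axis=0)

def feit_regression(Z):
    """fEIT type (2): pixel-wise regression on the normalized global curve."""
    g = Z.sum(axis=1)
    g = (g - g.mean()) / g.std()
    return (Z - Z.mean(axis=0)).T @ g / len(g)

def feit_tidal_variation(Z, insp_idx, exp_idx):
    """fEIT type (3): mean end-inspiration minus end-expiration difference."""
    return (Z[insp_idx] - Z[exp_idx]).mean(axis=0)

# synthetic data: 300 frames x 16 pixels, ventilation increasing with pixel index
t = np.arange(300)
vent_map = np.linspace(0.1, 1.0, 16)
Z = np.outer(np.sin(2 * np.pi * t / 50), vent_map)   # one breath per 50 frames

insp = np.arange(12, 300, 50)     # approximate end-inspiration frames (assumed)
exp_ = np.arange(37, 300, 50)     # approximate end-expiration frames (assumed)
tv_image = feit_tidal_variation(Z, insp, exp_)
```

    On this drift-free toy signal all three images agree; the differences discussed in the abstract appear once baseline shifts, cardiac components, or temporal heterogeneity are added to Z.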

  3. Matching Pursuit with Asymmetric Functions for Signal Decomposition and Parameterization

    PubMed Central

    Spustek, Tomasz; Jedrzejczak, Wiesław Wiktor; Blinowska, Katarzyna Joanna

    2015-01-01

    The method of adaptive approximations by Matching Pursuit makes it possible to decompose signals into basic components (called atoms). The approach relies on fitting, in an iterative way, functions from a large predefined set (called dictionary) to an analyzed signal. Usually, symmetric functions coming from the Gabor family (sine modulated Gaussian) are used. However Gabor functions may not be optimal in describing waveforms present in physiological and medical signals. Many biomedical signals contain asymmetric components, usually with a steep rise and slower decay. For the decomposition of this kind of signal we introduce a dictionary of functions of various degrees of asymmetry – from symmetric Gabor atoms to highly asymmetric waveforms. The application of this enriched dictionary to Otoacoustic Emissions and Steady-State Visually Evoked Potentials demonstrated the advantages of the proposed method. The approach provides more sparse representation, allows for correct determination of the latencies of the components and removes the "energy leakage" effect generated by symmetric waveforms that do not sufficiently match the structures of the analyzed signal. Additionally, we introduced a time-frequency-amplitude distribution that is more adequate for representation of asymmetric atoms than the conventional time-frequency-energy distribution. PMID:26115480
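
    A minimal matching-pursuit sketch with a mixed symmetric/asymmetric dictionary; the atom shapes and parameters are illustrative choices, not the paper's dictionary:

```python
import numpy as np

rng = np.random.default_rng(3)

def gabor_like(n, pos, width):
    t = np.arange(n)
    a = np.exp(-0.5 * ((t - pos) / width) ** 2)        # symmetric envelope
    return a / np.linalg.norm(a)

def asym_atom(n, pos, rise, decay):
    t = np.arange(n)
    a = np.where(t < pos,
                 np.exp(-0.5 * ((t - pos) / rise) ** 2),   # steep rise
                 np.exp(-(t - pos) / decay))               # slow decay
    return a / np.linalg.norm(a)

def matching_pursuit(signal, dictionary, n_iter=3):
    """Greedy MP: repeatedly subtract the best-matching unit-norm atom."""
    residual = signal.copy()
    chosen = []
    for _ in range(n_iter):
        proj = dictionary @ residual
        k = int(np.argmax(np.abs(proj)))
        chosen.append((k, float(proj[k])))
        residual = residual - proj[k] * dictionary[k]
    return chosen, residual

n = 128
D = np.array([gabor_like(n, p, w) for p in range(10, 120, 10) for w in (3, 8)]
             + [asym_atom(n, p, 2.0, 15.0) for p in range(10, 120, 10)])
signal = 3.0 * asym_atom(n, 60, 2.0, 15.0) + 0.05 * rng.standard_normal(n)

chosen, residual = matching_pursuit(signal, D)
```

    With the enriched dictionary the first atom picked is the asymmetric one at the true position; a purely symmetric dictionary would instead need several atoms to mimic the slow decay, which is the "energy leakage" effect the abstract describes.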

  4. Information hidden in the velocity distribution of ions and the exact kinetic Bohm criterion

    NASA Astrophysics Data System (ADS)

    Tsankov, Tsanko V.; Czarnetzki, Uwe

    2017-05-01

    Non-equilibrium distribution functions of electrons and ions play an important role in plasma physics. A prominent example is the kinetic Bohm criterion. Since its first introduction it has been controversial for theoretical reasons and due to the lack of experimental data, in particular on the ion distribution function. Here we resolve the theoretical as well as the experimental difficulties by an exact solution of the kinetic Boltzmann equation including charge exchange collisions and ionization. This also allows for the first time non-invasive measurement of spatially resolved ion velocity distributions, absolute values of the ion and electron densities, temperatures, and mean energies as well as the electric field and the plasma potential in the entire plasma. The non-invasive access to the spatially resolved distribution functions of electrons and ions is applied to the problem of the kinetic Bohm criterion. Theoretically a so far missing term in the criterion is derived and shown to be of key importance. With the new term the validity of the kinetic criterion at high collisionality and its agreement with the fluid picture are restored. All findings are supported by experimental data, theory and a numerical model with excellent agreement throughout.

  5. Memory for Context becomes Less Specific with Time

    ERIC Educational Resources Information Center

    Wiltgen, Brian J.; Silva, Alcino J.

    2007-01-01

    Context memories initially require the hippocampus, but over time become independent of this structure. This shift reflects a consolidation process whereby memories are gradually stored in distributed regions of the cortex. The function of this process is thought to be the extraction of statistical regularities and general knowledge from specific…

  6. Intermodal transport and distribution patterns in ports relationship to hinterland

    NASA Astrophysics Data System (ADS)

    Dinu, O.; Dragu, V.; Ruscă, F.; Ilie, A.; Oprea, C.

    2017-08-01

    It is of great importance to examine all interactions between ports, terminals, intermodal transport and the logistic actors of distribution channels, as their optimization can lead to operational improvement. The proposed paper starts with a brief overview of different goods types and the allocation of their logistic costs, with emphasis on the storage component. The present trend is to optimize storage costs by means of the port storage area's buffer function, making the best use of the free storage time that most ports offer. As a research methodology, the starting point is to consider the cost structure of a generic intermodal transport (storage, handling and transport costs) and to link this to the intermodal distribution patterns most frequently used in a port's relationship to its hinterland. The next step is to evaluate the impact of storage costs on distribution pattern selection. For a given value of port free storage time, a corresponding value of total storage time in the distribution channel can be identified in order to substantiate a distribution pattern shift. Different scenarios for the variation of transport and handling costs, recorded when the distribution pattern shifts, are integrated in order to establish the reaction of the actors involved in port-related logistics, and the evolution of intermodal transport costs is analysed in order to optimize distribution pattern selection.

  7. Statistical properties of effective drought index (EDI) for Seoul, Busan, Daegu, Mokpo in South Korea

    NASA Astrophysics Data System (ADS)

    Park, Jong-Hyeok; Kim, Ki-Beom; Chang, Heon-Young

    2014-08-01

    Time series of drought indices have so far been considered mostly in view of the temporal and spatial distributions of a drought index. Here we investigate the statistical properties of the daily Effective Drought Index (EDI) itself for Seoul, Busan, Daegu and Mokpo for the period of 100 years from 1913 to 2012. We have found that in both dry and wet seasons the distribution of EDI values follows a Gaussian function. In the dry season the shape of the Gaussian is characteristically broader than in the wet season. The total number of drought days during the period we have analyzed is related both to the mean value and, more importantly, to the standard deviation. We have also found that the number of occasions where the EDI values of several consecutive days are all less than a threshold follows an exponential distribution. The slope of the best fit becomes steeper not only as the critical EDI value becomes more negative but also as the number of consecutive days increases. The slope of the exponential distribution also becomes steeper as the number of cities in which EDI is simultaneously less than a critical value increases. Finally, we conclude by pointing out implications of our findings.
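    The run statistic described above — the number of occasions where several consecutive daily values all fall below a threshold — can be computed directly from a daily series. The sketch below uses synthetic i.i.d. Gaussian noise in place of real EDI data; for such noise the run lengths are geometric, i.e. exponentially distributed, and the count drops sharply as the threshold becomes more negative, consistent with the steepening slopes reported.

```python
import numpy as np

def run_lengths_below(series, threshold):
    """Lengths of maximal runs of consecutive values below `threshold`."""
    lengths, run = [], 0
    for below in np.asarray(series) < threshold:
        if below:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return lengths

def n_occasions(series, threshold, n_days):
    """Windows of `n_days` consecutive values that are all below `threshold`."""
    return sum(L - n_days + 1
               for L in run_lengths_below(series, threshold) if L >= n_days)

# Toy "daily index": 100 years of i.i.d. Gaussian noise.
rng = np.random.default_rng(0)
edi = rng.standard_normal(36500)
print(n_occasions(edi, -1.0, 3), n_occasions(edi, -1.5, 3))
```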

  8. Probability distribution functions for unit hydrographs with optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh

    2017-05-01

    A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution is highly capable of predicting both the peak flow and the time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
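    As a minimal illustration of transmuting a UH into a pdf, the sketch below fits a two-parameter gamma pdf to hydrograph ordinates by nonlinear least squares (SciPy's curve_fit in place of the Mathematica and genetic-algorithm optimizers used in the study); the ordinates are synthesized from a known gamma shape so the fit can be checked.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

def gamma_uh(t, shape, scale):
    """Two-parameter gamma pdf used as the unit-hydrograph shape."""
    return gamma.pdf(t, a=shape, scale=scale)

# Hypothetical hourly UH ordinates, synthesized from a known gamma shape
# plus a little noise so the recovered parameters can be checked.
t = np.arange(1.0, 25.0)                       # hours
rng = np.random.default_rng(1)
uh = gamma.pdf(t, a=3.0, scale=2.5) + 0.002 * rng.standard_normal(t.size)

popt, _ = curve_fit(gamma_uh, t, uh, p0=(2.0, 2.0), bounds=(1e-3, 50.0))
shape_hat, scale_hat = popt
peak_time = (shape_hat - 1.0) * scale_hat      # mode of the gamma pdf = time to peak
```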

  9. A framework for real-time distributed expert systems: On-orbit spacecraft fault diagnosis, monitoring and control

    NASA Technical Reports Server (NTRS)

    Mullikin, Richard L.

    1987-01-01

    Control of on-orbit operation of a spacecraft requires retention and application of special purpose, often unique, knowledge of equipment and procedures. Real-time distributed expert systems (RTDES) permit a modular approach to a complex application such as on-orbit spacecraft support. One aspect of a human-machine system that lends itself to the application of RTDES is the function of satellite/mission controllers - the next logical step toward the creation of truly autonomous spacecraft systems. This system application is described.

  10. Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)

    2001-01-01

    Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.
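    The pipeline — a discrete time-frequency energy density followed by an SVD whose singular values rank the principal features — can be sketched as follows. A windowed DFT is used here as a simple stand-in for the multiscale wavelet distributions of the paper, and the chirp test signal is illustrative.

```python
import numpy as np

def spectrogram(x, win, hop):
    """Discrete time-frequency energy density from a windowed DFT
    (a simple stand-in for the multiscale wavelet distributions)."""
    frames = np.array([x[i:i + win] * np.hanning(win)
                       for i in range(0, len(x) - win + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2   # time x frequency

# Chirp-like test signal: instantaneous frequency rises with time.
t = np.linspace(0.0, 1.0, 4096)
x = np.sin(2 * np.pi * (50.0 + 100.0 * t) * t)

S = spectrogram(x, win=256, hop=64)
U, s, Vt = np.linalg.svd(S, full_matrices=False)

# A few dominant singular values indicate compact principal features;
# discrete density functions and moments can then be built from s and Vt.
energy_fraction = s[:3].sum() / s.sum()
```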

  11. Generalized hamming networks and applications.

    PubMed

    Koutroumbas, Konstantinos; Kalouptsidis, Nicholas

    2005-09-01

    In this paper the classical Hamming network is generalized in various ways. First, for the Hamming maxnet, a generalized model is proposed which covers under its umbrella most of the existing versions of the Hamming maxnet. The network dynamics are time varying, while the commonly used ramp function may be replaced by a much more general nonlinear function. Also, the weight parameters of the network are time varying. A detailed convergence analysis is provided. A bound on the number of iterations required for convergence is derived, and its distribution functions are given for the cases where the initial values of the nodes of the Hamming maxnet stem from the uniform and the peak distributions. Stabilization mechanisms aiming to prevent the node(s) with the maximum initial value from diverging to infinity or decaying to zero are described. Simulations demonstrate the advantages of the proposed extension. Also, a rough comparison between the proposed generalized scheme as well as the original Hamming maxnet and its variants is carried out in terms of the time required for convergence in hardware implementations. Finally, the other two parts of the Hamming network, namely the competitor-generating module and the decoding module, are briefly considered in the framework of various applications such as classification/clustering, vector quantization and function optimization.
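    For reference, the classical (non-generalized) Hamming maxnet that the paper extends is a winner-take-all iteration with a fixed inhibition weight and a ramp nonlinearity; a minimal sketch, with an illustrative choice of eps:

```python
import numpy as np

def maxnet(x0, eps=None, max_iter=1000):
    """Classical Hamming maxnet: winner-take-all by mutual inhibition.

    Each node subtracts eps times the sum of the other nodes' activations,
    then passes the result through a ramp (ReLU); iteration stops when at
    most one positive node remains.
    """
    x = np.asarray(x0, dtype=float).copy()
    if eps is None:
        eps = 1.0 / x.size                     # common choice: eps < 1/(n-1)
    for _ in range(max_iter):
        inhibition = eps * (x.sum() - x)       # sum over the *other* nodes
        x = np.maximum(0.0, x - inhibition)    # ramp nonlinearity
        if (x > 0).sum() <= 1:
            break
    return x

scores = [0.9, 0.4, 0.85, 0.1]   # e.g. similarity scores from the Hamming layer
out = maxnet(scores)
winner = int(np.argmax(out))
```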

  12. Time-resolved ion velocity distribution in a cylindrical Hall thruster: heterodyne-based experiment and modeling.

    PubMed

    Diallo, A; Keller, S; Shi, Y; Raitses, Y; Mazouffre, S

    2015-03-01

    Time-resolved variations of the ion velocity distribution function (IVDF) are measured in the cylindrical Hall thruster using a novel heterodyne method based on the laser-induced fluorescence technique. This method consists in inducing modulations of the discharge plasma at frequencies that enable the coupling to the breathing mode. Using a harmonic decomposition of the IVDF, one can extract each harmonic component of the IVDF from which the time-resolved IVDF is reconstructed. In addition, simulations have been performed assuming a sloshing of the IVDF during the modulation that show agreement between the simulated and measured first order perturbation of the IVDF.

  13. Voronoi Tessellation for reducing the processing time of correlation functions

    NASA Astrophysics Data System (ADS)

    Cárdenas-Montes, Miguel; Sevilla-Noarbe, Ignacio

    2018-01-01

    The increase of data volume in Cosmology is motivating the search for new solutions to the difficulties associated with large processing times and the precision of calculations. This is especially true for several relevant statistics of the galaxy distribution of the Large Scale Structure of the Universe, namely the two- and three-point angular correlation functions, for which the processing time has grown critically with the size of the data sample. Beyond parallel implementations to overcome the barrier of processing time, space partitioning algorithms are necessary to reduce the computational load. These can restrict the elements involved in the correlation function estimation to those that can potentially contribute to the final result. In this work, Voronoi Tessellation is used to reduce the processing time of the two-point and three-point angular correlation functions. The results of this proof-of-concept show a significant reduction of the processing time when preprocessing the galaxy positions with Voronoi Tessellation.
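    The payoff of space partitioning for pair statistics can be illustrated with a KD-tree (a different partition than the Voronoi Tessellation used in the paper, chosen here because SciPy exposes it directly): only pairs closer than the maximum separation of interest are ever enumerated, yet the binned pair counts match the brute-force result.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
points = rng.uniform(0.0, 1.0, size=(1000, 2))   # toy 2-D "sky" positions

r_max = 0.05
bins = np.linspace(0.0, r_max, 11)

# Brute force: all ~500,000 pair separations, then histogram them.
diff = points[:, None, :] - points[None, :, :]
d = np.sqrt((diff ** 2).sum(-1))
iu = np.triu_indices(len(points), k=1)
brute, _ = np.histogram(d[iu], bins=bins)

# Space partitioning: the tree enumerates only pairs closer than r_max.
tree = cKDTree(points)
pairs = tree.query_pairs(r_max, output_type='ndarray')
sep = np.linalg.norm(points[pairs[:, 0]] - points[pairs[:, 1]], axis=1)
fast, _ = np.histogram(sep, bins=bins)

assert np.array_equal(brute, fast)   # same counts, far fewer distances computed
```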

  14. Optimal dynamic control of resources in a distributed system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Krishna, C. M.; Lee, Yann-Hang

    1989-01-01

    The authors quantitatively formulate the problem of controlling resources in a distributed system so as to optimize a reward function and derive optimal control strategies using Markov decision theory. The control variables treated are quite general; they could be control decisions related to system configuration, repair, diagnostics, files, or data. Two algorithms for resource control in distributed systems are derived for time-invariant and periodic environments, respectively. A detailed example to demonstrate the power and usefulness of the approach is provided.

  15. Phase space explorations in time dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Rajam, Aruna K.

    Time dependent density functional theory (TDDFT) is one of the useful tools for the study of the dynamic behavior of correlated electronic systems under the influence of external potentials. The success of this formally exact theory relies in practice on approximations for the exchange-correlation potential, which is a complicated functional of the coordinate density, non-local in space and time. Adiabatic approximations (such as ALDA), which are local in time, are most commonly used in the increasing applications of the field. Going beyond ALDA has proved difficult, leading to mathematical inconsistencies. We explore the regions where the theory faces challenges, and try to answer some of them via insights from two-electron model systems. In this thesis work we propose a phase-space extension of TDDFT. We want to answer the challenges the theory currently faces by exploring the one-body phase space. We give a general introduction to this theory and its mathematical background in the first chapter. In the second chapter, we carry out a detailed study of instantaneous phase-space densities and argue that functionals of distributions can be a better alternative to the nonlocality issue of the exchange-correlation potentials. For this we study in detail the interacting and the non-interacting phase-space distributions for the Hooke's atom model. The applicability of ALDA-based TDDFT for dynamics in strong fields can become severely problematic due to the failure of the single-Slater-determinant picture. In the third chapter, we analyze how phase-space distributions can shed some light on this problem. We carry out a comparative study of Kohn-Sham and interacting phase-space and momentum distributions for single-ionization and double-ionization systems. Using a simple model of two-electron systems, we have shown that the momentum distribution computed directly from the exact KS system contains spurious oscillations: a non-classical description of the essentially classical two-electron dynamics. In time-dependent density matrix functional theory (TDDMFT), the evolution scheme of the 1RDM (first-order reduced density matrix) contains the second-order reduced density matrix (2RDM), which has to be expressed in terms of 1RDMs. Any non-correlated approximation (Hartree-Fock) for the 2RDM would fail to capture the natural occupations of the system. In our fourth chapter, we show that by applying quasi-classical and semi-classical approximations one can capture the natural occupations of excited systems. We study a time-dependent Moshinsky atom model for this. The fifth chapter contains a comparative work on the existing non-local exchange-correlation kernels that are based on the current-density response framework and the co-moving framework. We show that the two approaches, though coinciding in the linear-response regime, actually differ in the non-linear regime.

  16. Nanopore Kinetic Proofreading of DNA Sequences

    NASA Astrophysics Data System (ADS)

    Ling, Xinsheng Sean

    The concept of DNA sequencing using the time dependence of the nanopore ionic current was proposed in 1996 by Kasianowicz, Brandin, Branton, and Deamer (KBBD). The KBBD concept has generated a tremendous amount of interest in the recent decade. In this talk, I will review the current understanding of the DNA ``translocation'' dynamics and how it can be described by Schrödinger's 1915 paper on the first-passage-time distribution function. Schrödinger's distribution function can be used to give a rigorous criterion for achieving nanopore DNA sequencing, which turns out to be identical to that of the gel electrophoresis used by Sanger in the first-generation Sanger method. A nanopore DNA sequencing technology also requires discrimination of bases with high accuracy. I will describe a solid-state nanopore sandwich structure that can function as a proofreading device capable of discriminating between correct and incorrect hybridization probes with an accuracy rivaling that of high-fidelity DNA polymerases. The latest results from Nanjing will be presented. This work is supported by the China 1000-Talent Program at Southeast University, Nanjing, China.

  17. ParallelStructure: A R Package to Distribute Parallel Runs of the Population Genetics Program STRUCTURE on Multi-Core Computers

    PubMed Central

    Besnier, Francois; Glover, Kevin A.

    2013-01-01

    This software package provides an R-based framework to make use of multi-core computers when running analyses in the population genetics program STRUCTURE. It is especially addressed to those users of STRUCTURE dealing with numerous and repeated data analyses, who could take advantage of an efficient script to automatically distribute STRUCTURE jobs among multiple processors. It also provides additional functions to divide analyses among combinations of populations within a single data set without the need to manually produce multiple projects, as is currently the case in STRUCTURE. The package consists of two main functions: MPI_structure() and parallel_structure(), as well as an example data file. We compared the performance in computing time for this example data on two computer architectures and showed that the use of the present functions can result in several-fold improvements in terms of computation time. ParallelStructure is freely available at https://r-forge.r-project.org/projects/parallstructure/. PMID:23923012

  18. A radially resolved kinetic model for nonlocal electron ripple diffusion losses in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Scott

    A relatively simple radially resolved kinetic model is applied to the ripple diffusion problem for electrons in tokamaks. The distribution function f(r,v) is defined on a two-dimensional grid, where r is the radial coordinate and v is the velocity coordinate. Particle transport in the radial direction is from ripple and banana diffusion, and transport in the velocity direction is described by the Fokker-Planck equation. Particles and energy are replaced by source functions that are adjusted to maintain a constant central density and temperature. The relaxed profiles of f(r,v) show that the electron distribution function at the wall contains suprathermal electrons that have diffused from the interior and enhance ripple transport. The transport at the periphery is therefore nonlocal. The energy replacement times from the computational model are close to the experimental replacement times for tokamak discharges in the compilation by Pfeiffer and Waltz [Nucl. Fusion 19, 51 (1979)].

  19. Collapsar γ-ray bursts: how the luminosity function dictates the duration distribution

    NASA Astrophysics Data System (ADS)

    Petropoulou, Maria; Barniol Duran, Rodolfo; Giannios, Dimitrios

    2017-12-01

    Jets in long-duration γ-ray bursts (GRBs) have to drill through the collapsing star in order to break out of it and produce the γ-ray signal while the central engine is still active. If the breakout time is shorter for more powerful engines, then the jet-collapsar interaction acts as a filter of less luminous jets. We show that the observed broken power-law GRB luminosity function is a natural outcome of this process. For a theoretically motivated breakout time that scales with jet luminosity as L^(-χ) with χ ∼ 1/3-1/2, we show that the shape of the γ-ray duration distribution can be uniquely determined by the GRB luminosity function and matches the observed one. This analysis also has interesting implications for the supernova-central engine connection. We show that not only can successful jets deposit sufficient energy in the stellar envelope to power the GRB-associated supernovae, but failed jets may also operate in all Type Ib/c supernovae.

  20. Global exponential stability analysis on impulsive BAM neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Li, Yao-Tang; Yang, Chang-Bo

    2006-12-01

    Using M-matrix and topological degree tools, sufficient conditions are obtained for the existence, uniqueness and global exponential stability of the equilibrium point of bidirectional associative memory (BAM) neural networks with distributed delays, subjected to impulsive state displacements at fixed instants of time, by constructing a suitable Lyapunov functional. The results remove the usual assumptions of boundedness, monotonicity, and differentiability of the activation functions. It is shown that in some cases the stability criteria can be easily checked. Finally, an illustrative example is given to show the effectiveness of the presented criteria.

  1. Research into software executives for space operations support

    NASA Technical Reports Server (NTRS)

    Collier, Mark D.

    1990-01-01

    Research concepts pertaining to a software (workstation) executive which will support a distributed processing command and control system characterized by high-performance graphics workstations used as computing nodes are presented. Although a workstation-based distributed processing environment offers many advantages, it also introduces a number of new concerns. In order to solve these problems, allow the environment to function as an integrated system, and present a functional development environment to application programmers, it is necessary to develop an additional layer of software. This 'executive' software integrates the system, provides real-time capabilities, and provides the tools necessary to support the application requirements.

  2. Weak values of a quantum observable and the cross-Wigner distribution.

    PubMed

    de Gosson, Maurice A; de Gosson, Serge M

    2012-01-09

    We study the weak values of a quantum observable from the point of view of the Wigner formalism. The main actor here is the cross-Wigner transform of two functions, which is, in disguise, the cross-ambiguity function familiar from radar theory and time-frequency analysis. It allows us to express weak values using a complex probability distribution. We suggest that our approach seems to confirm that the weak value of an observable is, as conjectured by several authors, due to the interference of two wavefunctions, one coming from the past and the other from the future.

  3. An empirical analysis of the distribution of the duration of overshoots in a stationary gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Parrish, R. S.; Carter, M. C.

    1974-01-01

    This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
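    The simulation-plus-estimation procedure can be sketched end to end: generate a stationary Gaussian process with a chosen autocorrelation (an AR(1) process here, an illustrative choice), extract the durations of excursions above a crossing level, and estimate the probability that an overshoot lasts at least a given time.

```python
import numpy as np

def simulate_ar1(n, phi, rng):
    """Stationary Gaussian AR(1) process with unit variance;
    the autocorrelation at lag k is phi**k."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    noise = np.sqrt(1.0 - phi ** 2) * rng.standard_normal(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + noise[i]
    return x

def overshoot_durations(x, level):
    """Lengths (in samples) of maximal excursions above `level`."""
    lengths, run = [], 0
    for above in x > level:
        if above:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return np.array(lengths)

rng = np.random.default_rng(3)
x = simulate_ar1(200_000, phi=0.9, rng=rng)
durations = overshoot_durations(x, level=1.0)

# Empirical probability that an overshoot of level 1.0 lasts >= 5 samples.
p5 = (durations >= 5).mean()
```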

  4. Hybrid artificial bee colony algorithm for parameter optimization of five-parameter bidirectional reflectance distribution function model.

    PubMed

    Wang, Qianqian; Zhao, Jing; Gong, Yong; Hao, Qun; Peng, Zhong

    2017-11-20

    A hybrid artificial bee colony (ABC) algorithm inspired by the best-so-far solution and bacterial chemotaxis was introduced to optimize the parameters of the five-parameter bidirectional reflectance distribution function (BRDF) model. To verify the performance of the hybrid ABC algorithm, we measured BRDF of three kinds of samples and simulated the undetermined parameters of the five-parameter BRDF model using the hybrid ABC algorithm and the genetic algorithm, respectively. The experimental results demonstrate that the hybrid ABC algorithm outperforms the genetic algorithm in convergence speed, accuracy, and time efficiency under the same conditions.

  5. Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers

    NASA Astrophysics Data System (ADS)

    Samiei-Esfahany, Sami; Hanssen, Ramon F.

    2012-01-01

    The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS we altered the model from a single-master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.

  6. Superstatistics analysis of the ion current distribution function: Met3PbCl influence study.

    PubMed

    Miśkiewicz, Janusz; Trela, Zenon; Przestalski, Stanisław; Karcz, Waldemar

    2010-09-01

    A novel analysis of ion current time series is proposed. It is shown that higher (second, third and fourth) statistical moments of the ion current probability distribution function (PDF) can yield new information about ion channel properties. The method is illustrated on a two-state model where the PDFs of the compound states are given by normal distributions. The proposed method was applied to the analysis of the SV cation channels of the vacuolar membrane of Beta vulgaris and the influence of trimethyllead chloride (Met(3)PbCl) on the ion current probability distribution. Ion currents were measured by the patch-clamp technique. It was shown that Met(3)PbCl influences the variance of the open-state ion current but does not alter the PDF of the closed-state ion current. Incorporation of higher statistical moments into the standard investigation of ion channel properties is proposed.
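    Extracting the higher moments from a current trace is straightforward; the sketch below builds a toy two-state trace (Bernoulli gating with normal noise in each state, all parameters illustrative) and computes the first four moments of its PDF.

```python
import numpy as np
from scipy import stats

def two_state_current(n, p_open, i_open, sigma_open, sigma_closed, rng):
    """Toy patch-clamp trace: Bernoulli gating between a closed state
    (zero mean) and an open state, with normal noise in each state."""
    is_open = rng.random(n) < p_open
    return np.where(is_open,
                    i_open + sigma_open * rng.standard_normal(n),
                    sigma_closed * rng.standard_normal(n))

rng = np.random.default_rng(4)
trace = two_state_current(100_000, p_open=0.3, i_open=5.0,
                          sigma_open=0.8, sigma_closed=0.3, rng=rng)

# Higher moments of the current PDF: the bimodal gating shows up in the
# variance and asymmetry even though each state alone is just Gaussian.
mean = trace.mean()
variance = trace.var()
skewness = stats.skew(trace)
kurtosis = stats.kurtosis(trace)
```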

  7. On the emergence of a generalised Gamma distribution. Application to traded volume in financial markets

    NASA Astrophysics Data System (ADS)

    Duarte Queirós, S. M.

    2005-08-01

    This letter reports on a stochastic dynamical scenario whose associated stationary probability density function is exactly a generalised form, with a power-law instead of exponential decay, of the ubiquitous Gamma distribution. This generalisation, also known as the F-distribution, was first empirically proposed to fit high-frequency stock traded volume distributions in financial markets and verified in experiments with granular material. The dynamical assumption presented herein is based on local temporal fluctuations of the average value of the observable under study. This proposal is related to superstatistics and thus to the current nonextensive statistical mechanics framework. For the specific case of stock traded volume, we connect the local fluctuations in the mean stock traded volume with the typical herding behaviour presented by financial traders. Finally, NASDAQ 1- and 2-minute stock traded volume sequences and probability density functions are numerically reproduced.

  8. Inner Magnetospheric Superthermal Electron Transport: Photoelectron and Plasma Sheet Electron Sources

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.; Liemohn, M. W.; Kozyra, J. U.; Moore, T. E.

    1998-01-01

    Two time-dependent kinetic models of superthermal electron transport are combined to conduct global calculations of the nonthermal electron distribution function throughout the inner magnetosphere. It is shown that the energy range of validity for this combined model extends down to the superthermal-thermal intersection at a few eV, allowing for the calculation of the entire distribution function and thus an accurate heating rate to the thermal plasma. Because of the linearity of the formulas, the source terms are separated to calculate the distributions from the various populations, namely photoelectrons (PEs) and plasma sheet electrons (PSEs). These distributions are discussed in detail, examining the processes responsible for their formation in the various regions of the inner magnetosphere. It is shown that convection, corotation, and Coulomb collisions are the dominant processes in the formation of the PE distribution function and that PSEs are dominated by the interplay between the drift terms. Of note is that the PEs propagate around the nightside in a narrow channel at the edge of the plasmasphere as Coulomb collisions reduce the fluxes inside of this and convection compresses the flux tubes inward. These distributions are then recombined to show the development of the total superthermal electron distribution function in the inner magnetosphere and their influence on the thermal plasma. PEs usually dominate the dayside heating, with integral energy fluxes to the ionosphere reaching 10^10 eV/sq cm/s in the plasmasphere, while heating from the PSEs typically does not exceed 10^8 eV/sq cm/s. On the nightside, the inner plasmasphere is usually unheated by superthermal electrons. A feature of these combined spectra is that the distribution often has upward slopes with energy, particularly at the crossover from PE to PSE dominance, indicating that instabilities are possible.

  9. Time-resolved two-window measurement of Wigner functions for coherent backscatter from a turbid medium

    NASA Astrophysics Data System (ADS)

    Reil, Frank; Thomas, John E.

    2002-05-01

    For the first time we are able to observe the time-resolved Wigner function of enhanced backscatter from a random medium using a novel two-window technique. This technique enables us to directly verify the phase-conjugating properties of random media. An incident divergent beam displays a convergent enhanced backscatter cone. We measure the joint position and momentum (x, p) distributions of the light field as a function of propagation time in the medium. The two-window technique allows us to independently control the resolutions for position and momentum, thereby surpassing the uncertainty limit associated with Fourier transform pairs. By using a low-coherence light source in a heterodyne detection scheme, we observe enhanced backscattering resolved by path length in the random medium, providing information about the evolution of optical coherence as a function of penetration depth in the random medium.

  10. Development of a calculation method for estimating specific energy distribution in complex radiation fields.

    PubMed

    Sato, Tatsuhiko; Watanabe, Ritsuko; Niita, Koji

    2006-01-01

    Estimation of the specific energy distribution in a human body exposed to complex radiation fields is of great importance in the planning of long-term space missions and heavy ion cancer therapies. With the aim of developing a tool for this estimation, the specific energy distributions in liquid water around the tracks of several HZE particles with energies up to 100 GeV n⁻¹ were calculated by performing track structure simulation with the Monte Carlo technique. In the simulation, the targets were assumed to be spherical sites with diameters from 1 nm to 1 μm. An analytical function to reproduce the simulation results was developed in order to predict the distributions of all kinds of heavy ions over a wide energy range. The incorporation of this function into the Particle and Heavy Ion Transport code System (PHITS) enables us to calculate the specific energy distributions in complex radiation fields in a short computational time.

  11. Temperature dependence of the size distribution function of InAs quantum dots on GaAs(001)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arciprete, F.; Fanfoni, M.; Patella, F.

    2010-04-15

    We present a detailed atomic-force-microscopy study of the effect of annealing on InAs/GaAs(001) quantum dots grown by molecular-beam epitaxy. Samples were grown at a low growth rate at 500 °C with an InAs coverage slightly greater than the critical thickness and subsequently annealed at several temperatures. We find that immediately quenched samples exhibit a bimodal size distribution with a high density of small dots (<50 nm³), while annealing at temperatures greater than 420 °C leads to a unimodal size distribution. This result indicates a coarsening process governing the evolution of the island size distribution function which is limited by the attachment-detachment of the adatoms at the island boundary. At higher temperatures one cannot ascribe a single rate-determining step for coarsening because of the increased role of adatom diffusion. However, for long annealing times at 500 °C the island size distribution is strongly affected by In desorption.

  12. Multiple-parameter bifurcation analysis in a Kuramoto model with time delay and distributed shear

    NASA Astrophysics Data System (ADS)

    Niu, Ben; Zhang, Jiaming; Wei, Junjie

    2018-05-01

    In this paper, time delay effects and distributed shear are considered in the Kuramoto model. On the Ott-Antonsen manifold, through analyzing the associated characteristic equation of the reduced functional differential equation, the stability boundary of the incoherent state is derived in a multiple-parameter space. Moreover, very rich dynamical behavior, such as stability switches inducing synchronization switches, can occur in this equation. With the loss of stability, Hopf bifurcating coherent states arise, and the criticality of the Hopf bifurcations is determined by applying the normal form theory and the center manifold theorem. On the one hand, theoretical analysis indicates that the width of the shear distribution and the time delay can each suppress synchronization and drive the Kuramoto model to incoherence. On the other hand, time delay can induce several coexisting coherent states. Finally, some numerical simulations are given to support the obtained results: several bifurcation diagrams are drawn, and the effect of time delay and shear is discussed.

  13. An approach for generating trajectory-based dynamics which conserves the canonical distribution in the phase space formulation of quantum mechanics. II. Thermal correlation functions.

    PubMed

    Liu, Jian; Miller, William H

    2011-03-14

    We show the exact expression of the quantum mechanical time correlation function in the phase space formulation of quantum mechanics. The trajectory-based dynamics that conserves the quantum canonical distribution, the equilibrium Liouville dynamics (ELD) proposed in Paper I, is then used to approximately evaluate the exact expression. It gives exact thermal correlation functions (even of nonlinear operators, i.e., nonlinear functions of position or momentum operators) in the classical, high temperature, and harmonic limits. Various methods are presented for the implementation of ELD. Numerical tests of the ELD approach in the Wigner or Husimi phase space have been made for a harmonic oscillator and two strongly anharmonic model problems; for each potential, autocorrelation functions of both linear and nonlinear operators have been calculated. The results suggest that ELD can be a potentially useful approach for describing quantum effects in complex systems in the condensed phase.

  14. Fault diagnosis for analog circuits utilizing time-frequency features and improved VVRKFA

    NASA Astrophysics Data System (ADS)

    He, Wei; He, Yigang; Luo, Qiwu; Zhang, Chaolong

    2018-04-01

    This paper proposes a novel scheme for analog circuit fault diagnosis utilizing features extracted from the time-frequency representations of signals and an improved vector-valued regularized kernel function approximation (VVRKFA). First, the cross-wavelet transform is employed to yield the energy-phase distribution of the fault signals over the time and frequency domain. Since the distribution is high-dimensional, a supervised dimensionality reduction technique—the bilateral 2D linear discriminant analysis—is applied to build a concise feature set from the distributions. Finally, VVRKFA is utilized to locate the fault. In order to improve the classification performance, the quantum-behaved particle swarm optimization technique is employed to gradually tune the learning parameter of the VVRKFA classifier. The experimental results for the analog circuit faults classification have demonstrated that the proposed diagnosis scheme has an advantage over other approaches.

  15. A general statistical test for correlations in a finite-length time series.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-06-07

    The statistical properties of the autocorrelation function of a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations, and an application to single-molecule fluorescence spectroscopy is discussed.
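
A minimal sketch of the two estimators named above (illustrative only; the paper's variance expressions are not reproduced): the moving-average route gives the linear lag-k autocorrelation, while the Fourier-transform route, applied without zero-padding, gives the circular version natural for periodic series. For an i.i.d. series both scatter around zero with standard deviation of order 1/sqrt(n), which is the basis for a simple correlation test.

```python
import numpy as np

def acf_moving_average(x):
    """Moving-average estimator: linear (non-circular) lag-k autocorrelation."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) for k in range(n)]) / np.dot(x, x)

def acf_fft(x):
    """Fourier-transform estimator: circular autocorrelation, the natural
    choice for periodic series (no zero-padding, so lags wrap around)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    f = np.fft.fft(x)
    return np.fft.ifft(f * np.conj(f)).real / np.dot(x, x)

rng = np.random.default_rng(0)
n = 4096
x = rng.normal(size=n)   # i.i.d. series: true correlation is zero at all lags > 0
r_ma, r_ft = acf_moving_average(x), acf_fft(x)
# Estimates at each lag scatter around zero with std of order 1/sqrt(n); a lag
# whose estimate falls far outside that band signals genuine correlation.
band = 5 / np.sqrt(n)
print(np.abs(r_ma[1:]).max() < band, np.abs(r_ft[1:]).max() < band)
```

For a genuinely correlated series the same band comparison would fail at the correlated lags, which is the essence of the test described in the abstract.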

  16. Thermal noise in confined fluids.

    PubMed

    Sanghi, T; Aluru, N R

    2014-11-07

    In this work, we discuss a combined memory function equation (MFE) and generalized Langevin equation (GLE) approach (referred to as MFE/GLE formulation) to characterize thermal noise in confined fluids. Our study reveals that for fluids confined inside nanoscale geometries, the correlation time and the time decay of the autocorrelation function of the thermal noise are not significantly different across the confinement. We show that it is the strong cross-correlation of the mean force with the molecular velocity that gives rise to the spatial anisotropy in the velocity-autocorrelation function of the confined fluids. Further, we use the MFE/GLE formulation to extract the thermal force a fluid molecule experiences in a MD simulation. Noise extraction from MD simulation suggests that the frequency distribution of the thermal force is non-Gaussian. Also, the frequency distribution of the thermal force near the confining surface is found to be different in the direction parallel and perpendicular to the confinement. We also use the formulation to compute the noise correlation time of water confined inside a (6,6) carbon-nanotube (CNT). It is observed that inside the (6,6) CNT, in which water arranges itself in a highly concerted single-file arrangement, the correlation time of thermal noise is about an order of magnitude higher than that of bulk water.

  17. Thermal noise in confined fluids

    NASA Astrophysics Data System (ADS)

    Sanghi, T.; Aluru, N. R.

    2014-11-01

    In this work, we discuss a combined memory function equation (MFE) and generalized Langevin equation (GLE) approach (referred to as MFE/GLE formulation) to characterize thermal noise in confined fluids. Our study reveals that for fluids confined inside nanoscale geometries, the correlation time and the time decay of the autocorrelation function of the thermal noise are not significantly different across the confinement. We show that it is the strong cross-correlation of the mean force with the molecular velocity that gives rise to the spatial anisotropy in the velocity-autocorrelation function of the confined fluids. Further, we use the MFE/GLE formulation to extract the thermal force a fluid molecule experiences in a MD simulation. Noise extraction from MD simulation suggests that the frequency distribution of the thermal force is non-Gaussian. Also, the frequency distribution of the thermal force near the confining surface is found to be different in the direction parallel and perpendicular to the confinement. We also use the formulation to compute the noise correlation time of water confined inside a (6,6) carbon-nanotube (CNT). It is observed that inside the (6,6) CNT, in which water arranges itself in a highly concerted single-file arrangement, the correlation time of thermal noise is about an order of magnitude higher than that of bulk water.

  18. Design of Distributed Controllers Seeking Optimal Power Flow Solutions Under Communication Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea; Dhople, Sairaj

    This paper focuses on power distribution networks featuring inverter-interfaced distributed energy resources (DERs), and develops feedback controllers that drive the DER output powers to solutions of time-varying AC optimal power flow (OPF) problems. Control synthesis is grounded on primal-dual-type methods for regularized Lagrangian functions, as well as linear approximations of the AC power-flow equations. Convergence and OPF-solution-tracking capabilities are established while acknowledging: i) communication-packet losses, and ii) partial updates of control signals. The latter case is particularly relevant since it enables asynchronous operation of the controllers where DER setpoints are updated at a fast time scale based on local voltage measurements, and information on the network state is utilized if and when available, based on communication constraints. As an application, the paper considers distribution systems with high photovoltaic integration, and demonstrates that the proposed framework provides fast voltage-regulation capabilities, while enabling the near real-time pursuit of solutions of AC OPF problems.
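
A toy sketch of the primal-dual idea (illustrative only; the paper's controller, network model, and measurement feedback are not reproduced): gradient descent on the primal variable and projected gradient ascent on the multiplier of a regularized Lagrangian track a slowly time-varying constrained optimum.

```python
import numpy as np

# Regularized Lagrangian for one scalar "setpoint" p with cap constraint p <= cap:
#   L(p, lam) = 0.5*(p - p_ref(t))**2 + lam*(p - cap) - (eps/2)*lam**2
# The -eps*lam**2/2 regularizer is what makes the saddle point a contraction.
eps, alpha = 0.1, 0.2      # regularization weight and step size (assumed values)
p, lam = 0.0, 0.0
for t in range(200):
    p_ref = 1.0 + 0.5 * np.sin(0.01 * t)   # slowly varying desired setpoint
    cap = 1.2                               # constraint: p must stay at or below cap
    p -= alpha * ((p - p_ref) + lam)        # primal gradient descent
    lam = max(0.0, lam + alpha * ((p - cap) - eps * lam))  # projected dual ascent
print(p <= cap + 0.05)      # iterate hovers at (slightly below) the cap
```

The regularization biases the tracked point slightly inside the feasible set, the same trade-off between tracking error and convergence speed discussed for the full AC OPF controllers.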

  19. Design of Distributed Controllers Seeking Optimal Power Flow Solutions under Communication Constraints: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea; Dhople, Sairaj

    This paper focuses on power distribution networks featuring inverter-interfaced distributed energy resources (DERs), and develops feedback controllers that drive the DER output powers to solutions of time-varying AC optimal power flow (OPF) problems. Control synthesis is grounded on primal-dual-type methods for regularized Lagrangian functions, as well as linear approximations of the AC power-flow equations. Convergence and OPF-solution-tracking capabilities are established while acknowledging: i) communication-packet losses, and ii) partial updates of control signals. The latter case is particularly relevant since it enables asynchronous operation of the controllers where DER setpoints are updated at a fast time scale based on local voltage measurements, and information on the network state is utilized if and when available, based on communication constraints. As an application, the paper considers distribution systems with high photovoltaic integration, and demonstrates that the proposed framework provides fast voltage-regulation capabilities, while enabling the near real-time pursuit of solutions of AC OPF problems.

  20. Modelling population distribution using remote sensing imagery and location-based data

    NASA Astrophysics Data System (ADS)

    Song, J.; Prishchepov, A. V.

    2017-12-01

    Detailed spatial distribution of population density is essential for urban studies such as planning, environmental pollution assessment, and emergency response, and even for estimating pressure on the environment, human exposure, and health risks. However, most studies have relied on census data, because detailed, dynamic population distributions are difficult to acquire, especially in microscale research. This research describes a method using remote sensing imagery and location-based data to model population distribution at the functional zone level. Firstly, urban functional zones within a city were mapped using high-resolution remote sensing images and POIs. The workflow of functional zone extraction includes five parts: (1) urban land use classification; (2) segmenting images in the built-up area; (3) identification of functional segments by POIs; (4) identification of functional blocks by functional segmentation and weight coefficients; (5) assessing accuracy by validation points. The result is shown in Fig. 1. Secondly, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially nonstationary relationship between light digital number (DN) and the population density of sampling points. The two methods were employed to predict the population distribution over the research area. The R² of the GWR models was on the order of 0.7, and the models captured significant variation over the region that the traditional OLS model missed; the result is shown in Fig. 2. Validation with sampling points of population density demonstrated that the result predicted by the GWR model correlated well with the light value; the result is shown in Fig. 3. Results showed that: (1) population density is not linearly correlated with light brightness under a global model; (2) VIIRS night-time light data can estimate population density by integrating functional zones at the city level; (3) GWR is a robust model for mapping population distribution: the adjusted R² of the corresponding GWR models was higher than that of the optimal OLS models, confirming that GWR models deliver better prediction accuracy. This method thus provides detailed population density information for microscale citizen studies.
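
The OLS-versus-GWR comparison can be sketched on synthetic data (all variables and parameters here are illustrative, not from the study): a single global OLS slope averages away a spatially drifting relationship that locally weighted regressions recover.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "city": coordinates u, a night-light predictor x, and a response y
# whose slope on x drifts from west to east (a non-stationary relationship).
n = 800
u = rng.uniform(0, 10, size=(n, 2))
x = rng.uniform(0, 1, size=n)
slope = 1.0 + 0.3 * u[:, 0]                 # true slope grows with easting
y = slope * x + rng.normal(scale=0.05, size=n)

X = np.column_stack([np.ones(n), x])

# Global OLS: one slope for the whole region (roughly the mean of the drift).
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

def gwr_at(point, bandwidth=1.5):
    """Weighted least squares at one location with a Gaussian distance kernel."""
    d2 = np.sum((u - point) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    W = X * w[:, None]
    return np.linalg.solve(X.T @ W, W.T @ y)

b_west = gwr_at(np.array([1.0, 5.0]))[1]    # local slope in the west (smaller)
b_east = gwr_at(np.array([9.0, 5.0]))[1]    # local slope in the east (larger)
print(round(beta_ols[1], 2), round(b_west, 2), round(b_east, 2))
```

The spread between the two local slopes is exactly the spatial variation the abstract says the global OLS fit cannot express.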

  1. Vibration monitoring of a helicopter blade model using the optical fiber distributed strain sensing technique.

    PubMed

    Wada, Daichi; Igawa, Hirotaka; Kasai, Tokio

    2016-09-01

    We demonstrate a dynamic distributed monitoring technique using a long-length fiber Bragg grating (FBG) interrogated by optical frequency domain reflectometry (OFDR) that measures strain at a rate of 150 Hz, with a spatial resolution of 1 mm and a measurement range of 20 m. A 5 m FBG is bonded to a 5.5 m helicopter blade model, and vibration is applied by the step relaxation method. The time domain responses of the strain distributions are measured, and the blade deflections are calculated from the strain distributions. Frequency response functions are obtained using the time domain responses of the calculated deflection induced by the preload release, and the modal parameters are retrieved. Experimental results demonstrated the dynamic monitoring performance of the OFDR-FBG technique and its applicability to modal analysis.
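
The strain-to-deflection step can be sketched under textbook beam assumptions (hypothetical numbers; the blade model's actual geometry and boundary conditions are not reproduced): with surface strain proportional to curvature, two cumulative trapezoidal integrations recover the deflection of a clamped beam.

```python
import numpy as np

c = 0.01                                  # m, assumed surface-to-neutral-axis distance
x = np.linspace(0.0, 5.0, 501)            # m, positions along a 5 m sensing fiber
w_true = 0.002 * x ** 2                   # smooth test deflection (clamped root)

# Forward model: what the distributed sensor would report (strain = c * curvature).
curvature = np.gradient(np.gradient(w_true, x), x)
strain = c * curvature

# Inverse step: integrate curvature twice, with slope(0) = w(0) = 0 at the root.
kappa = strain / c
slope = np.concatenate([[0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))])
w = np.concatenate([[0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))])
print(abs(w - w_true).max() < 1e-3)       # deflection recovered to sub-mm accuracy
```

Real blade data would additionally need the measured (noisy) strain smoothed before the double integration, since integration amplifies low-frequency drift.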

  2. Conditional sampling technique to test the applicability of the Taylor hypothesis for the large-scale coherent structures

    NASA Technical Reports Server (NTRS)

    Hussain, A. K. M. F.

    1980-01-01

    Comparisons of the distributions of large scale structures in turbulent flow with distributions based on time dependent signals from stationary probes and the Taylor hypothesis are presented. The study investigated a region in the near field of a 7.62 cm circular air jet at a Re of 32,000, with coherent structures induced through small-amplitude controlled excitation and stable vortex pairing in the jet column mode. Hot-wire and X-wire anemometry were employed to establish phase averaged spatial distributions of longitudinal and lateral velocities, coherent Reynolds stress and vorticity, background turbulent intensities, streamlines and pseudo-stream functions. The Taylor hypothesis was used to calculate spatial distributions of the phase-averaged properties, with results indicating that use of the local time-average velocity or streamwise velocity produces large distortions.

  3. Interval timing under a behavioral microscope: Dissociating motivational and timing processes in fixed-interval performance.

    PubMed

    Daniels, Carter W; Sanabria, Federico

    2017-03-01

    The distribution of latencies and interresponse times (IRTs) of rats was compared between two fixed-interval (FI) schedules of food reinforcement (FI 30 s and FI 90 s), and between two levels of food deprivation. Computational modeling revealed that latencies and IRTs were well described by mixture probability distributions embodying two-state Markov chains. Analysis of these models revealed that only a subset of latencies is sensitive to the periodicity of reinforcement, and prefeeding only reduces the size of this subset. The distribution of IRTs suggests that behavior in FI schedules is organized in bouts that lengthen and ramp up in frequency with proximity to reinforcement. Prefeeding slowed down the lengthening of bouts and increased the time between bouts. When concatenated, latency and IRT models adequately reproduced sigmoidal FI response functions. These findings suggest that behavior in FI schedules fluctuates in and out of schedule control; an account of such fluctuation suggests that timing and motivation are dissociable components of FI performance. These mixture-distribution models also provide novel insights on the motivational, associative, and timing processes expressed in FI performance. These processes may be obscured, however, when performance in timing tasks is analyzed in terms of mean response rates.
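
The bout structure described above can be sketched with hypothetical parameters: a two-state mixture of fast within-bout IRTs and slow between-bout pauses produces a heavy-tailed IRT distribution that a single exponential with the same mean cannot reproduce.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50000
p_bout = 0.8                    # probability a response continues the current bout
w = rng.random(n) < p_bout
irt = np.where(w, rng.exponential(0.3, n),   # within-bout IRTs (fast, s)
                  rng.exponential(3.0, n))   # between-bout pauses (slow, s)

# A single exponential with the same mean badly under-predicts long IRTs;
# the mixture's survivor function shows the characteristic "broken stick".
mean_irt = irt.mean()                        # expectation: 0.8*0.3 + 0.2*3.0 = 0.84
tail_mix = np.mean(irt > 5.0)                # mixture probability of IRT > 5 s
tail_exp = np.exp(-5.0 / mean_irt)           # same-mean single exponential
print(tail_mix > 5 * tail_exp)
```

Fitting such a two-state model (rather than a mean rate) is what lets the analysis separate timing effects, which move the bout-initiation latencies, from motivational effects, which stretch the pauses between bouts.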

  4. Entropy-Bayesian Inversion of Time-Lapse Tomographic GPR data for Monitoring Dielectric Permittivity and Soil Moisture Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Z; Terry, N; Hubbard, S S

    2013-02-12

    In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability distribution functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSim) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data.
The memory function and pilot point design take advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.

  5. Relaxation distribution function of intracellular dielectric zones as an indicator of tumorous transition of living cells.

    PubMed

    Thornton, B S; Hung, W T; Irving, J

    1991-01-01

    The response decay data of living cells subject to electric polarization is associated with their relaxation distribution function (RDF) and can be determined using the inverse Laplace transform method. A new polynomial, involving a series of associated Laguerre polynomials, has been used as the approximating function for evaluating the RDF, with the advantage of avoiding the usual arbitrary trial values of a particular parameter in the numerical computations. Some numerical examples are given, followed by an application to cervical tissue. It is found that the average relaxation time and the peak amplitude of the RDF exhibit higher values for tumorous cells than normal cells and might be used as parameters to differentiate them and their associated tissues.

  6. Closing the equations of motion of anisotropic fluid dynamics by a judicious choice of a moment of the Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Molnár, E.; Niemi, H.; Rischke, D. H.

    2016-12-01

    In Molnár et al. Phys. Rev. D 93, 114025 (2016) the equations of anisotropic dissipative fluid dynamics were obtained from the moments of the Boltzmann equation based on an expansion around an arbitrary anisotropic single-particle distribution function. In this paper we make a particular choice for this distribution function and consider the boost-invariant expansion of a fluid in one dimension. In order to close the conservation equations, we need to choose an additional moment of the Boltzmann equation. We discuss the influence of the choice of this moment on the time evolution of fluid-dynamical variables and identify the moment that provides the best match of anisotropic fluid dynamics to the solution of the Boltzmann equation in the relaxation-time approximation.

  7. A versatile test for equality of two survival functions based on weighted differences of Kaplan-Meier curves.

    PubMed

    Uno, Hajime; Tian, Lu; Claggett, Brian; Wei, L J

    2015-12-10

    With censored event time observations, the logrank test is the most popular tool for testing the equality of two underlying survival distributions. Although this test is asymptotically distribution free, it may not be powerful when the proportional hazards assumption is violated. Various other novel testing procedures have been proposed, which generally are derived by assuming a class of specific alternative hypotheses with respect to the hazard functions. The test considered by Pepe and Fleming (1989) is based on a linear combination of weighted differences of the two Kaplan-Meier curves over time and is a natural tool to assess the difference of two survival functions directly. In this article, we take a similar approach but choose weights that are proportional to the observed standardized difference of the estimated survival curves at each time point. The new proposal automatically makes weighting adjustments empirically. The new test statistic is aimed at a one-sided general alternative hypothesis and is distributed with a short right tail under the null hypothesis but with a heavy tail under the alternative. The results from extensive numerical studies demonstrate that the new procedure performs well under various general alternatives with a caution of a minor inflation of the type I error rate when the sample size is small or the number of observed events is small. The survival data from a recent cancer comparative study are utilized for illustrating the implementation of the process. Copyright © 2015 John Wiley & Sons, Ltd.
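
The statistic described above can be illustrated from scratch (a hedged sketch, not the authors' exact procedure: uniform weights are used here in place of their standardized-difference weights) by computing the area between two Kaplan-Meier curves with NumPy:

```python
import numpy as np

def kaplan_meier(time, event, grid):
    """Kaplan-Meier survival estimate, evaluated on a common time grid."""
    order = np.argsort(time)
    t, e = time[order], event[order]
    m = len(t)
    s = 1.0
    steps = np.empty(m)
    for i in range(m):
        if e[i]:                       # event: survival drops; censoring: no drop
            s *= 1.0 - 1.0 / (m - i)
        steps[i] = s
    idx = np.searchsorted(t, grid, side="right") - 1
    return np.where(idx < 0, 1.0, steps[np.clip(idx, 0, None)])

rng = np.random.default_rng(2)
n = 200
t1, t2 = rng.exponential(1.0, n), rng.exponential(1.5, n)  # group 2 survives longer
c1, c2 = rng.exponential(3.0, n), rng.exponential(3.0, n)  # independent censoring
obs1, ev1 = np.minimum(t1, c1), t1 <= c1
obs2, ev2 = np.minimum(t2, c2), t2 <= c2

grid = np.linspace(0.1, 2.5, 50)
d = kaplan_meier(obs2, ev2, grid) - kaplan_meier(obs1, ev1, grid)
stat = np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(grid))      # area between curves
print(stat > 0)     # positive: group 2's survival curve lies above group 1's
```

Replacing the uniform weight on `d` with the observed standardized difference at each grid point gives the data-adaptive weighting the abstract describes; the null distribution of the resulting statistic then needs resampling rather than this simple sign check.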

  8. The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2

    NASA Technical Reports Server (NTRS)

    Kusmanoff, Antone; Martin, Nancy L.

    1989-01-01

    In recent years, advancements made in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this advancement is the increased performance and lower cost of engineering workstations. The shift to distributed computing from centralized computing has led to challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises of how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstation. The concepts related to real-time data processing are analyzed, and systems are presented which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share the control. This research is concerned with generating general criteria and principles for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need the use of a shared resource (the mainframe) to perform their functions.

  9. Wave generation by contaminant ions near a large spacecraft

    NASA Technical Reports Server (NTRS)

    Singh, N.

    1993-01-01

    Measurements from the space shuttle flights have revealed that a large spacecraft in a low earth orbit is accompanied by an extensive gas cloud which is primarily made up of water. The charge exchange between the water molecules and the ionospheric O(+) ions produces a water ion beam traversing downstream of the spacecraft. In this report we present results from a study on the generation of plasma waves by the interaction of the water ion beams with the ionospheric plasma. Since the velocity distribution function is key to understanding the wave generation process, we have performed a test particle simulation to determine the nature of the H2O(+) ion velocity distribution function. The simulations show that at time scales shorter than the ion cyclotron period τ_c, the distribution function can be described by a beam. On the other hand, when the time scales are larger than τ_c, a ring distribution forms. A brief description of the linear instabilities driven by an ion beam streaming across a magnetic field in a plasma is presented. We have identified two types of instabilities occurring in low and high frequency bands; the low-frequency instability occurs over the band from zero to about the lower hybrid frequency for a sufficiently low beam density. As the beam density increases, the linear instability occurs at decreasing frequencies below the lower-hybrid frequency. The high frequency instability occurs near the electron cyclotron frequency and its harmonics.

  10. Exact posterior computation in non-conjugate Gaussian location-scale parameters models

    NASA Astrophysics Data System (ADS)

    Andrade, J. A. A.; Rathie, P. N.

    2017-12-01

    In Bayesian analysis the class of conjugate models allows one to obtain exact posterior distributions; however, this class is quite restrictive in the sense that it involves only a few distributions. In fact, most practical applications involve non-conjugate models, so approximate methods, such as MCMC algorithms, are required. Although these methods can deal with quite complex structures, some practical problems can make their application very time demanding; for example, when we use heavy-tailed distributions, convergence may be difficult, and the Metropolis-Hastings algorithm can become very slow, in addition to the extra work inevitably required in choosing efficient candidate generator distributions. In this work, we draw attention to special functions as tools for Bayesian computation, and we propose an alternative method for obtaining the posterior distribution in Gaussian non-conjugate models in exact form. We use complex integration methods based on the H-function in order to obtain the posterior distribution and some of its posterior quantities in an explicit computable form. Two examples are provided in order to illustrate the theory.

  11. [Study of inversion and classification of particle size distribution under dependent model algorithm].

    PubMed

    Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin

    2008-05-01

    For the total light scattering particle sizing technique, an inversion and classification method was proposed with the dependent model algorithm. The measured particle system was inverted simultaneously with different particle distribution functions whose mathematical models were known in advance, and then classified according to the inversion errors. The simulation experiments illustrated that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible light range with the genetic algorithm, and the inversion results were steady and reliable, which minimized the number of wavelengths required and increased flexibility in the choice of light source. The single peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was added to the transmission extinction measurement values at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.
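
The "classify by inversion error" idea can be sketched as follows (hypothetical data, and a brute-force grid search standing in for the genetic algorithm): fit each candidate distribution family to the same measured size distribution and keep the family with the smaller residual.

```python
import numpy as np

rng = np.random.default_rng(3)
d = np.linspace(0.1, 5.0, 60)        # particle diameter grid (arbitrary units)

def lognormal(d, mu, sigma):
    return np.exp(-(np.log(d) - mu) ** 2 / (2 * sigma ** 2)) \
        / (d * sigma * np.sqrt(2 * np.pi))

def rosin_rammler(d, dbar, k):       # Weibull-type size distribution
    return (k / dbar) * (d / dbar) ** (k - 1) * np.exp(-(d / dbar) ** k)

# "Measured" distribution: a lognormal plus measurement noise.
data = lognormal(d, mu=0.3, sigma=0.4) + rng.normal(scale=0.005, size=d.size)

def best_fit_error(model, grid):
    # brute-force parameter search in place of the paper's genetic algorithm
    return min(np.sum((model(d, *p) - data) ** 2) for p in grid)

grid_ln = [(m, s) for m in np.linspace(-0.5, 1.0, 31) for s in np.linspace(0.1, 1.0, 19)]
grid_rr = [(db, k) for db in np.linspace(0.5, 3.0, 26) for k in np.linspace(1.0, 6.0, 26)]
err_ln = best_fit_error(lognormal, grid_ln)
err_rr = best_fit_error(rosin_rammler, grid_rr)
print("lognormal" if err_ln < err_rr else "Rosin-Rammler")   # smaller error wins
```

In the actual technique the fit is to multi-wavelength extinction data rather than to the distribution itself, but the classification principle, keep the family with the smallest inversion residual, is the same.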

  12. Boltzmann-conserving classical dynamics in quantum time-correlation functions: "Matsubara dynamics".

    PubMed

    Hele, Timothy J H; Willatt, Michael J; Muolo, Andrea; Althorpe, Stuart C

    2015-04-07

    We show that a single change in the derivation of the linearized semiclassical-initial value representation (LSC-IVR or "classical Wigner approximation") results in a classical dynamics which conserves the quantum Boltzmann distribution. We rederive the (standard) LSC-IVR approach by writing the (exact) quantum time-correlation function in terms of the normal modes of a free ring-polymer (i.e., a discrete imaginary-time Feynman path), taking the limit that the number of polymer beads N → ∞, such that the lowest normal-mode frequencies take their "Matsubara" values. The change we propose is to truncate the quantum Liouvillian, not explicitly in powers of ħ² at ħ⁰ (which gives back the standard LSC-IVR approximation), but in the normal-mode derivatives corresponding to the lowest Matsubara frequencies. The resulting "Matsubara" dynamics is inherently classical (since all terms O(ħ²) disappear from the Matsubara Liouvillian in the limit N → ∞) and conserves the quantum Boltzmann distribution because the Matsubara Hamiltonian is symmetric with respect to imaginary-time translation. Numerical tests show that the Matsubara approximation to the quantum time-correlation function converges with respect to the number of modes and gives better agreement than LSC-IVR with the exact quantum result. Matsubara dynamics is too computationally expensive to be applied to complex systems, but its further approximation may lead to practical methods.
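
As a quick numerical illustration of the limit invoked above (a sketch with βħ set to 1): the free-ring-polymer normal-mode frequencies ω_k = (2N/βħ) sin(kπ/N) approach the Matsubara frequencies 2πk/βħ as the bead number N grows.

```python
import numpy as np

beta_hbar = 1.0                      # work in units where beta * hbar = 1
k = np.arange(1, 4)                  # the three lowest nonzero modes
for N in (8, 64, 512):
    omega = 2 * N / beta_hbar * np.sin(k * np.pi / N)   # ring-polymer frequencies
    matsubara = 2 * np.pi * k / beta_hbar               # Matsubara frequencies
    print(N, np.max(np.abs(omega - matsubara)))         # gap shrinks as ~1/N**2
```

The lowest modes converge first, which is why truncating in the lowest Matsubara modes, rather than in powers of ħ², can be done consistently in the N → ∞ limit.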

  13. Emulation of Industrial Control Field Device Protocols

    DTIC Science & Technology

    2013-03-01

    platforms such as the Arduino (based on the Atmel AVR architecture) or popular PIC-architecture-based devices, which are programmed for specific functions... UNIVERSITY AIR FORCE INSTITUTE OF TECHNOLOGY, Wright-Patterson Air Force Base, Ohio. DISTRIBUTION STATEMENT A. APPROVED FOR PUBLIC RELEASE; DISTRIBUTION... confidence intervals for the mean. Based on these results, extensive knowledge of the specific implementations of the protocols or timing profiles of the

  14. Use of Bayes theorem to correct size-specific sampling bias in growth data.

    PubMed

    Troynikov, V S

    1999-03-01

    The Bayesian decomposition of the posterior distribution was used to develop a likelihood function that corrects bias in the estimates of population parameters from data collected randomly with size-specific selectivity. Positive distributions with time as a parameter were used to parametrize the growth data. Numerical illustrations are provided. Alternative applications of the likelihood to estimate selectivity parameters are discussed.

  15. Software Comparison for Renewable Energy Deployment in a Distribution Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian

    The main objective of this report is to evaluate different software options for performing robust distributed generation (DG) power system modeling. The features and capabilities of four simulation tools, OpenDSS, GridLAB-D, CYMDIST, and PowerWorld Simulator, are compared to analyze their effectiveness in analyzing distribution networks with DG. OpenDSS and GridLAB-D, two open-source packages, have the capability to simulate networks with fluctuating data values; they allow a simulation to be run at each time instant by iterating only the main script file. CYMDIST, a commercial package, allows for time-series simulation to study variations in network controls. PowerWorld Simulator, another commercial tool, has a batch-mode simulation function through the 'Time Step Simulation' tool, which obtains solutions for a list of specified time points. PowerWorld Simulator is intended for analysis of transmission-level systems, while the other three are designed for distribution systems. CYMDIST and PowerWorld Simulator feature easy-to-use graphical user interfaces (GUIs). OpenDSS and GridLAB-D, on the other hand, are based on command-line programs, which increases the time necessary to become familiar with them.

  16. A comparative study of mixture cure models with covariate

    NASA Astrophysics Data System (ADS)

    Leng, Oh Yit; Khalid, Zarina Mohd

    2017-05-01

    In survival analysis, the survival time is assumed to follow a non-negative distribution, such as the exponential, Weibull, or log-normal distribution. In some cases, the survival time is influenced by observed factors, and omitting them may cause an inaccurate estimate of the survival function; a survival model that incorporates these observed factors as covariates is therefore more appropriate in such cases. In addition, there are cases where a group of individuals is cured, that is, never experiences the event of interest. Ignoring this cure fraction may lead to overestimation of the survival function, so a mixture cure model is more suitable for modelling survival data with a cure fraction. In this study, three mixture cure survival models are used to analyse survival data with a covariate and a cure fraction: the first includes the covariate in the parameterization of the survival function of the susceptible individuals, the second allows the cure fraction to depend on the covariate, and the third incorporates the covariate in both the cure fraction and the survival function of the susceptible individuals. This study aims to compare the performance of these models via a simulation approach. Survival data with varying sample sizes and cure fractions are therefore simulated, with the survival time assumed to follow the Weibull distribution, and the simulated data are then modelled using the three mixture cure survival models. The results show that the three mixture cure models are more appropriate for modelling survival data in the presence of a cure fraction and an observed factor.
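    The mixture cure structure described above can be sketched in a few lines: the population survival function is S(t) = π + (1 − π)·S_u(t), where π is the cure fraction and S_u is the survival function of the susceptible individuals, taken here (as in the simulation study) to be Weibull. This is an illustrative sketch, not the authors' code; all parameter values are hypothetical.

```python
import math

def mixture_cure_survival(t, pi_cure, shape, scale):
    """Population survival S(t) = pi + (1 - pi) * S_u(t), with a Weibull
    survival function for the susceptible (non-cured) individuals."""
    s_u = math.exp(-((t / scale) ** shape))  # Weibull survival of susceptibles
    return pi_cure + (1.0 - pi_cure) * s_u

# With a 30% cure fraction, the survival curve plateaus at 0.3 instead of 0.
s_early = mixture_cure_survival(0.0, 0.3, 1.5, 2.0)   # = 1.0
s_late = mixture_cure_survival(50.0, 0.3, 1.5, 2.0)   # ≈ 0.3 (the cure fraction)
```

The plateau at π is what distinguishes cure-model data from ordinary survival data, and ignoring it is what produces the overestimation mentioned in the abstract.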

  17. Improving the efficiency of configurational-bias Monte Carlo: A density-guided method for generating bending angle trials for linear and branched molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu

    2014-08-21

    A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In practice, due to the complexity of this probability density function, a numerical representation of the distribution is required; this numerical table can be generated a priori from the distribution function. The method has been tested on united-atom models of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. These test cases show that reasonable approximations can be made, especially for the highly branched molecules, to drastically reduce the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables are still well captured, as evidenced by a nearly perfect acceptance rate: the bending angles were sampled correctly with an acceptance rate of at least 96% for 2,2-dimethylpropane and more than 99% for propane. Since only one trial is required for each bending angle (instead of the thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. Profiling of our Monte Carlo simulation code shows that trial generation, which used to be the most time-consuming process, is no longer the time-dominating component of the simulation.
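    The core idea, drawing each bending-angle trial directly from a tabulated probability density so that nearly every trial is accepted, can be sketched as follows. This is a simplified one-dimensional illustration with an invented angle density, not the authors' implementation.

```python
import bisect
import math
import random

def build_cdf_table(density, lo, hi, n=1000):
    """Tabulate the cumulative distribution of an unnormalised density
    on [lo, hi] by trapezoidal accumulation (the a-priori table)."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    cdf = [0.0]
    for i in range(n):
        step = xs[i + 1] - xs[i]
        cdf.append(cdf[-1] + 0.5 * (density(xs[i]) + density(xs[i + 1])) * step)
    total = cdf[-1]
    return xs, [c / total for c in cdf]

def sample_from_table(xs, cdf, rng=random):
    """Draw one trial by inverting the tabulated CDF (linear interpolation),
    so the trial is already distributed according to the target density."""
    u = rng.random()
    j = max(1, min(bisect.bisect_left(cdf, u), len(xs) - 1))
    frac = (u - cdf[j - 1]) / (cdf[j] - cdf[j - 1])
    return xs[j - 1] + frac * (xs[j] - xs[j - 1])

# Hypothetical bending-angle density peaked near the tetrahedral angle (~1.91 rad).
density = lambda th: math.sin(th) * math.exp(-20.0 * (th - 1.91) ** 2)
xs, cdf = build_cdf_table(density, 0.0, math.pi)
random.seed(1)
samples = [sample_from_table(xs, cdf) for _ in range(2000)]
```

Because each draw already follows the target density, the acceptance step becomes nearly a formality, which is the source of the 96-99% acceptance rates quoted above.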

  18. A new hybrid-Lagrangian numerical scheme for gyrokinetic simulation of tokamak edge plasma

    DOE PAGES

    Ku, S.; Hager, R.; Chang, C. S.; ...

    2016-04-01

    In order to enable kinetic simulation of non-thermal edge plasmas at a reduced computational cost, a new hybrid-Lagrangian δf scheme has been developed that utilizes a phase-space grid in addition to the usual marker particles, taking advantage of the computational strengths of both. The new scheme splits the particle distribution function of a kinetic equation into two parts: marker particles carry the fast space-time varying part, δf, of the distribution function, and the coarse-grained phase-space grid contains the slowly space-time varying part. The coarse-grained phase-space grid reduces the memory requirement and the computing cost, while the marker particles provide scalable computing ability for the fine-grained physics. Weights of the marker particles are determined by a direct weight evolution equation instead of the differential-form weight evolution equations that conventional delta-f schemes use. The particle weight can be slowly transferred to the phase-space grid, thereby reducing the growth of the particle weights. The non-Lagrangian part of the kinetic equation (e.g., collision operation, ionization, charge exchange, heat source, radiative cooling, and others) can be operated on directly on the phase-space grid. Deviation of the particle distribution function on the velocity grid from a Maxwellian distribution function, driven by ionization, charge exchange and wall loss, is allowed to be arbitrarily large. The numerical scheme is implemented in the gyrokinetic particle code XGC1, which specializes in simulating the tokamak edge plasma that crosses the magnetic separatrix and is in contact with the material wall.

  1. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that the finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868

  2. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
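    The continuous-time-style rescaling that both versions of this record start from can be sketched as follows: each ISI is mapped to the summed integrated rate −log(1 − p_k) over its bins, which is only approximately Exponential(1) when bins are coarse (the paper's exact discrete-time correction additionally randomizes within the final bin). A hypothetical illustration, not the authors' code.

```python
import math
import random

def rescale_isis(spikes, probs):
    """Rescale the interspike intervals of a binned spike train.
    spikes: 0/1 per bin; probs: model spike probability per bin.
    Each ISI is mapped to the accumulated q_k = -log(1 - p_k) between
    consecutive spikes; for an accurate model with fine bins the
    rescaled ISIs are approximately Exponential(1)."""
    rescaled, acc = [], 0.0
    for s, p in zip(spikes, probs):
        acc += -math.log(1.0 - p)
        if s:
            rescaled.append(acc)
            acc = 0.0
    return rescaled

# Simulate a Bernoulli spike train from known (slowly varying) bin probabilities,
# then rescale with the *true* model: the mean rescaled ISI should be near 1.
random.seed(0)
probs = [0.02 + 0.01 * math.sin(0.01 * k) for k in range(200000)]
spikes = [1 if random.random() < p else 0 for p in probs]
isis = rescale_isis(spikes, probs)
mean_isi = sum(isis) / len(isis)
```

At coarser bins (larger p_k) the mean drifts systematically above 1 even for the true model, which is exactly the failure mode the discrete-time rescaling theorem corrects.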

  3. Multiscale statistics of trajectories with applications to fluid particles in turbulence and football players

    NASA Astrophysics Data System (ADS)

    Schneider, Kai; Kadoch, Benjamin; Bos, Wouter

    2017-11-01

    The angle between two subsequent particle displacement increments is evaluated as a function of the time lag. The directional change of particles can thus be quantified at different scales and multiscale statistics can be performed, distinguishing flow-dependent from geometry-dependent features. The mean angle satisfies scaling behaviors at short time lags based on the smoothness of the trajectories. For intermediate time lags a power-law behavior can be observed for some turbulent flows, which can be related to Kolmogorov scaling. The long-time behavior depends on the confinement geometry of the flow. We show that the shape of the probability distribution function of the directional change can be well described by a Fisher distribution. Results for two-dimensional (direct and inverse cascade) and three-dimensional turbulence, with and without confinement, illustrate the properties of the proposed multiscale statistics. The presented Monte Carlo simulations allow disentangling geometry-dependent from flow-dependent features. Finally, we also analyze trajectories of football players, which are, in general, not randomly spaced on a field.
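    The basic quantity, the mean angle between displacement increments separated by a given time lag, can be sketched as follows; for a Brownian-like walk with independent increments the mean angle stays near π/2 at every lag. This is an illustrative sketch with invented data, not the authors' analysis code.

```python
import math
import random

def directional_change(traj, lag):
    """Mean absolute angle between displacement increments separated by
    `lag` steps along a 2D trajectory (list of (x, y) points)."""
    angles = []
    for i in range(len(traj) - 2 * lag):
        v1 = (traj[i + lag][0] - traj[i][0], traj[i + lag][1] - traj[i][1])
        v2 = (traj[i + 2 * lag][0] - traj[i + lag][0],
              traj[i + 2 * lag][1] - traj[i + lag][1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 > 0 and n2 > 0:
            # Clamp for floating-point safety before acos
            angles.append(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
    return sum(angles) / len(angles)

# Brownian-like walk: increments are independent and isotropic, so the
# mean directional-change angle is close to pi/2 regardless of the lag.
random.seed(2)
pos, traj = (0.0, 0.0), []
for _ in range(5000):
    traj.append(pos)
    pos = (pos[0] + random.gauss(0, 1), pos[1] + random.gauss(0, 1))
mean_angle = directional_change(traj, lag=5)
```

Smooth (e.g. inertial or confined) trajectories instead give small mean angles at short lags, which is what makes the lag dependence a useful multiscale diagnostic.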

  4. Serious games for elderly continuous monitoring.

    PubMed

    Lemus-Zúñiga, Lenin-G; Navarro-Pardo, Esperanza; Moret-Tatay, Carmen; Pocinho, Ricardo

    2015-01-01

    Information technology (IT) and serious games can help the older population remain independent for longer. Hence, when designing technology for this population, developmental changes in attention and/or perception should be considered. For instance, a crucial developmental change is the slowing of cognitive speed as measured by reaction time (RT). However, this variable presents a skewed distribution that complicates data analysis. An alternative strategy is to fit the data to an ex-Gaussian function; this procedure provides parameters that have been related to underlying cognitive processes in the literature. Another issue to be considered is optimal data recording, storage, and processing. For that purpose, mobile devices (smartphones and tablets) are a good option for serious games, where valuable information can be stored (time spent in the application, reaction time, frequency of use, and more). The data stored on smartphones and tablets can be sent to a central computer (cloud storage), not only to fit the distribution of reaction times to mathematical functions, but also to estimate parameters that may reflect cognitive processes underlying language, aging, and decision processes.
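    Fitting an ex-Gaussian, a Normal(μ, σ) plus an independent Exponential(τ), can be sketched with simple method-of-moments estimators (τ recovered from the sample skewness); published RT work typically uses maximum likelihood instead. All parameter values below are hypothetical.

```python
import math
import random
import statistics

def ex_gaussian_moments(rts):
    """Method-of-moments estimates (mu, sigma, tau) for an ex-Gaussian:
    mean = mu + tau, var = sigma^2 + tau^2, skew = 2 tau^3 / var^(3/2),
    so tau = sd * (skew / 2)^(1/3)."""
    m = statistics.fmean(rts)
    s = statistics.stdev(rts)
    n = len(rts)
    skew = (sum((x - m) ** 3 for x in rts) / n) / s ** 3
    tau = s * (max(skew, 1e-9) / 2.0) ** (1.0 / 3.0)
    sigma_sq = s * s - tau * tau
    return m - tau, math.sqrt(max(sigma_sq, 0.0)), tau

# Recover parameters from synthetic RTs (mu=300 ms, sigma=30 ms, tau=100 ms).
random.seed(3)
rts = [random.gauss(300, 30) + random.expovariate(1 / 100) for _ in range(50000)]
mu_hat, sigma_hat, tau_hat = ex_gaussian_moments(rts)
```

In the RT literature, μ and σ are often interpreted as the Gaussian (automatic) component and τ as the slow exponential tail, which is why separating them is more informative than the raw mean.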

  5. Statistical self-similarity of width function maxima with implications to floods

    USGS Publications Warehouse

    Veitzer, S.A.; Gupta, V.K.

    2001-01-01

    Recently a new theory of random self-similar river networks, called the RSN model, was introduced to explain empirical observations regarding the scaling properties of distributions of various topologic and geometric variables in natural basins. The RSN model predicts that such variables exhibit statistical simple scaling when indexed by Horton-Strahler order. The average side-tributary structure of RSN networks also exhibits Tokunaga-type self-similarity, which is widely observed in nature. We examine the scaling structure of distributions of the maximum of the width function for RSNs for nested, complete Strahler basins by performing ensemble simulations. The maximum of the width function exhibits distributional simple scaling, when indexed by Horton-Strahler order, for both RSNs and natural river networks extracted from digital elevation models (DEMs). We also test a power-law relationship between Horton ratios for the maximum of the width function and drainage areas. These results represent first steps in formulating a comprehensive physical-statistical theory of floods at multiple space-time scales for RSNs as discrete hierarchical branching structures. © 2001 Published by Elsevier Science Ltd.

  6. The van Hove distribution function for Brownian hard spheres: Dynamical test particle theory and computer simulations for bulk dynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Paul; Fortini, Andrea; Archer, Andrew J.; Schmidt, Matthias

    2010-12-01

    We describe a test particle approach based on dynamical density functional theory (DDFT) for studying the correlated time evolution of the particles that constitute a fluid. Our theory provides a means of calculating the van Hove distribution function by treating its self and distinct parts as the two components of a binary fluid mixture, with the "self" component having only one particle, the "distinct" component consisting of all the other particles, and using DDFT to calculate the time evolution of the density profiles for the two components. We apply this approach to a bulk fluid of Brownian hard spheres and compare to results for the van Hove function and the intermediate scattering function from Brownian dynamics computer simulations. We find good agreement at low and intermediate densities using the very simple Ramakrishnan-Yussouff [Phys. Rev. B 19, 2775 (1979)] approximation for the excess free energy functional. Since the DDFT is based on the equilibrium Helmholtz free energy functional, we can probe a free energy landscape that underlies the dynamics. Within the mean-field approximation we find that as the particle density increases, this landscape develops a minimum, while an exact treatment of a model confined situation shows that for an ergodic fluid this landscape should be monotonic. We discuss possible implications for slow, glassy, and arrested dynamics at high densities.

  7. ANTICOOL: Simulating positron cooling and annihilation in atomic gases

    NASA Astrophysics Data System (ADS)

    Green, D. G.

    2018-03-01

    The Fortran program ANTICOOL, developed to simulate positron cooling and annihilation in atomic gases for positron energies below the positronium-formation threshold, is presented. Given positron-atom elastic scattering phase shifts, normalised annihilation rates Z_eff, and γ spectra as a function of momentum k, ANTICOOL enables the calculation of the positron momentum distribution f(k, t) as a function of time t, the time-varying normalised annihilation rate Z̄_eff(t), the lifetime spectrum, and time-varying annihilation γ spectra. The capability and functionality of the program is demonstrated via a tutorial-style example for positron cooling and annihilation in room temperature helium gas, using accurate scattering and annihilation cross sections and γ spectra calculated using many-body theory as input.

  8. Cardiac function and perfusion dynamics measured on a beat-by-beat basis in the live mouse using ultra-fast 4D optoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Ford, Steven J.; Deán-Ben, Xosé L.; Razansky, Daniel

    2015-03-01

    The fast heart rate of the mouse (~7 Hz) makes cardiac imaging and functional analysis difficult when studying mouse models of cardiovascular disease; established imaging modalities cannot image the mouse heart truly in real time and 3D. Optoacoustic imaging, on the other hand, provides ultra-fast imaging at up to 50 volumetric frames per second, allowing for acquisition of several frames per mouse cardiac cycle. In this study, we combined a recently developed 3D optoacoustic imaging array with novel analytical techniques to assess cardiac function and perfusion dynamics of the mouse heart at high 4D spatiotemporal resolution. In brief, the heart of an anesthetized mouse was imaged over a series of multiple volumetric frames. In another experiment, an intravenous bolus of indocyanine green (ICG) was injected and its distribution was subsequently imaged in the heart. Unique temporal features of the cardiac cycle and ICG distribution profiles were used to segment the heart from background and to assess cardiac function. The 3D nature of the experimental data allowed for determination of cardiac volumes at ~7-8 frames per mouse cardiac cycle, providing important cardiac function parameters (e.g., stroke volume, ejection fraction) on a beat-by-beat basis, which no other cardiac imaging modality has previously achieved. Furthermore, ICG distribution dynamics allowed for the determination of pulmonary transit time and thus additional quantitative measures of cardiovascular function. This work demonstrates the potential of optoacoustic cardiac imaging and is expected to make a major contribution to future preclinical studies of animal models of cardiovascular health and disease.

  9. Time prediction of failure a type of lamps by using general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses estimation of a basic survival model to obtain the average predicted failure time of a type of lamp. The estimate is for a parametric model, the general composite hazard rate model. The base random-time model is the exponential distribution, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function using an exponential model as its basis. The model is estimated by fitting its parameters through the construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the average failure time for this type of lamp: the data are grouped into several intervals, the average failure value on each interval is computed, and the average failure time of the model is calculated per interval. The p-value obtained from the test is 0.3296.
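    The exponential base model used above has a constant hazard, so its maximum-likelihood fit reduces to the sample mean of the failure times. A minimal sketch with invented lamp data (not the paper's data or its composite-hazard estimation):

```python
import math

def exponential_fit(failure_times):
    """Maximum-likelihood fit of an exponential lifetime model: the hazard
    rate is constant, lambda_hat = 1 / mean(t), and the mean predicted
    failure time is simply the sample mean. Returns (rate, mean, S(t))."""
    mean_t = sum(failure_times) / len(failure_times)
    lam = 1.0 / mean_t
    survival = lambda t: math.exp(-lam * t)  # S(t) = exp(-lambda t)
    return lam, mean_t, survival

# Hypothetical lamp failure times (thousands of hours).
times = [1.2, 0.8, 2.5, 1.7, 0.4, 3.1, 1.9, 0.9]
lam, mean_time, S = exponential_fit(times)
```

A composite hazard model, as in the paper, replaces the single constant hazard with different hazard levels on different time intervals, but each piece is fitted in essentially this way.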

  10. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of the atmospheric event origin at local, regional, and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time, and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over a multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
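    The kind of source-location scan that BISL accelerates can be illustrated with a bare-bones grid search using Gaussian arrival-time likelihoods and a single fixed celerity; the real method integrates over a celerity prior rather than fixing it. Station positions and values below are hypothetical.

```python
import math

def locate(stations, arrivals, celerity, sigma, grid):
    """Grid-search source localization: at each candidate (x, y, t0), score
    the summed Gaussian log-likelihoods of the observed arrival times given
    a single propagation celerity, and return the best candidate."""
    best, best_ll = None, -math.inf
    for (x, y, t0) in grid:
        ll = 0.0
        for (sx, sy), t_obs in zip(stations, arrivals):
            t_pred = t0 + math.hypot(sx - x, sy - y) / celerity
            ll += -0.5 * ((t_obs - t_pred) / sigma) ** 2
        if ll > best_ll:
            best, best_ll = (x, y, t0), ll
    return best

# Hypothetical 3-station network; noiseless arrivals from a source at (10, 20), t0 = 0.
stations = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0)]
celerity = 0.3  # km/s, typical infrasonic celerity
true = (10.0, 20.0)
arrivals = [math.hypot(sx - true[0], sy - true[1]) / celerity for sx, sy in stations]
grid = [(x, y, 0.0) for x in range(0, 51, 2) for y in range(0, 51, 2)]
x_hat, y_hat, _ = locate(stations, arrivals, celerity, sigma=1.0, grid=grid)
```

BISL's contribution, as summarized in the abstract, is replacing the numerical integration over celerity (here collapsed to a single value) with closed-form expressions, so a scan like this runs in real time.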

  11. 3D multimodal MRI brain glioma tumor and edema segmentation: a graph cut distribution matching approach.

    PubMed

    Njeh, Ines; Sallemi, Lamia; Ayed, Ismail Ben; Chtourou, Khalil; Lehericy, Stephane; Galanaud, Damien; Hamida, Ahmed Ben

    2015-03-01

    This study investigates a fast distribution-matching, data-driven algorithm for 3D multimodal MRI brain glioma tumor and edema segmentation in different modalities. We learn non-parametric model distributions that characterize the normal regions in the current data. Then, we state our segmentation problems as the optimization of several cost functions of the same form, each containing two terms: (i) a distribution-matching prior, which evaluates a global similarity between distributions, and (ii) a smoothness prior to avoid the occurrence of small, isolated regions in the solution. Obtained following recent bound-relaxation results, the optima of the cost functions yield the complement of the tumor region or edema region in nearly real time. Based on global rather than pixel-wise information, the proposed algorithm does not require external learning from a large, manually segmented training set, as is the case for existing methods; the ensuing results are therefore independent of the choice of a training set. Quantitative evaluations on the publicly available training and testing data sets from the MICCAI multimodal brain tumor segmentation challenge (BraTS 2012) demonstrated that our algorithm yields highly competitive performance for complete edema and tumor segmentation among nine existing competing methods, with a competitive execution time (less than 0.5 s per image). Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Fluid Intelligence Predicts Novel Rule Implementation in a Distributed Frontoparietal Control Network.

    PubMed

    Tschentscher, Nadja; Mitchell, Daniel; Duncan, John

    2017-05-03

    Fluid intelligence has been associated with a distributed cognitive control or multiple-demand (MD) network, comprising regions of lateral frontal, insular, dorsomedial frontal, and parietal cortex. Human fluid intelligence is also intimately linked to task complexity, and the process of solving complex problems in a sequence of simpler, more focused parts. Here, a complex target detection task included multiple independent rules, applied one at a time in successive task epochs. Although only one rule was applied at a time, increasing task complexity (i.e., the number of rules) impaired performance in participants of lower fluid intelligence. Accompanying this loss of performance was reduced response to rule-critical events across the distributed MD network. The results link fluid intelligence and MD function to a process of attentional focus on the successive parts of complex behavior. SIGNIFICANCE STATEMENT Fluid intelligence is intimately linked to the ability to structure complex problems in a sequence of simpler, more focused parts. We examine the basis for this link in the functions of a distributed frontoparietal or multiple-demand (MD) network. With increased task complexity, participants of lower fluid intelligence showed reduced responses to task-critical events. Reduced responses in the MD system were accompanied by impaired behavioral performance. Low fluid intelligence is linked to poor foregrounding of task-critical information across a distributed MD system. Copyright © 2017 Tschentscher et al.

  13. Multiserver Queueing Model subject to Single Exponential Vacation

    NASA Astrophysics Data System (ADS)

    Vijayashree, K. V.; Janani, B.

    2018-04-01

    A multi-server queueing model subject to a single exponential vacation is considered. Arrivals join the queue according to a Poisson process, and service takes place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return after an exponentially distributed interval of time; on returning they start providing service if there are waiting customers, and otherwise remain idle until the next busy period begins. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Numerical illustrations are added to visualize the effect of the various parameters.
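    The model can be viewed as a continuous-time Markov chain and simulated directly: the system alternates between a vacation mode (entered whenever it empties) and a functional mode with total service rate μ·min(n, c). This is an illustrative Gillespie-style sketch with hypothetical rates, not the paper's analytical solution.

```python
import random

def simulate_mmc_vacation(lam, mu, theta, c, horizon, seed=0):
    """Simulate an M/M/c queue in which all c servers take one
    exponential(theta) vacation whenever the system empties (single
    vacation: after returning to an empty system they wait idle).
    Returns the time-weighted distribution of the number in system."""
    rng = random.Random(seed)
    t, n, on_vacation = 0.0, 0, True
    time_in_state = {}
    while t < horizon:
        rates = [("arrival", lam)]
        if on_vacation:
            rates.append(("vacation_end", theta))
        elif n > 0:
            rates.append(("departure", mu * min(n, c)))
        total = sum(r for _, r in rates)
        dt = rng.expovariate(total)          # time to next event
        time_in_state[n] = time_in_state.get(n, 0.0) + dt
        t += dt
        u, acc = rng.random() * total, 0.0   # pick the event proportionally
        for event, r in rates:
            acc += r
            if u <= acc:
                break
        if event == "arrival":
            n += 1
        elif event == "vacation_end":
            on_vacation = False
        else:
            n -= 1
            if n == 0:
                on_vacation = True           # vacation starts when system empties
    grand_total = sum(time_in_state.values())
    return {k: v / grand_total for k, v in sorted(time_in_state.items())}

probs = simulate_mmc_vacation(lam=2.0, mu=1.0, theta=1.0, c=4, horizon=20000.0)
```

Such a simulation is a useful cross-check on the explicit stationary probabilities derived in the paper: the time-weighted state frequencies should converge to them as the horizon grows.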

  14. Distributed analysis functional testing using GangaRobot in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Legger, Federica; ATLAS Collaboration

    2011-12-01

    Automated distributed analysis tests are necessary to ensure smooth operations of the ATLAS grid resources. The HammerCloud framework allows for easy definition, submission and monitoring of grid test applications. Both functional and stress test applications can be defined in HammerCloud. Stress tests are large-scale tests meant to verify the behaviour of sites under heavy load. Functional tests are light user applications running at each site with high frequency, to ensure that the site functionalities are available at all times. Success or failure rates of these test jobs are individually monitored. Test definitions and results are stored in a database and made available to users and site administrators through a web interface. In this work we present the recent developments of the GangaRobot framework. GangaRobot monitors the outcome of functional tests, creates a blacklist of sites failing the tests, and exports the results to the ATLAS Site Status Board (SSB) and to the Service Availability Monitor (SAM), providing on the one hand a fast way to identify systematic or temporary site failures, and on the other hand allowing for an effective distribution of the workload on the available resources.

  15. Load flow and state estimation algorithms for three-phase unbalanced power distribution systems

    NASA Astrophysics Data System (ADS)

    Madvesh, Chiranjeevi

    Distribution load flow and state estimation are two important functions in distribution energy management systems (DEMS) and advanced distribution automation (ADA) systems. Distribution load flow analysis is a tool which helps to analyze the status of a power distribution system under steady-state operating conditions. In this research, an effective and comprehensive load flow algorithm is developed to extensively incorporate the distribution system components. Distribution system state estimation is a mathematical procedure which aims to estimate the operating states of a power distribution system by utilizing the information collected from available measurement devices in real-time. A computationally efficient state estimation algorithm based on the weighted-least-squares (WLS) method has been developed in this research. Both developed algorithms are tested on different IEEE test feeders and the results obtained are verified.
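    The thesis's full algorithm is not shown here; the core WLS step it builds on has the closed form x = (H^T W H)^{-1} H^T W z for a linearized measurement model z = H x + noise. A minimal two-state sketch; the measurement matrix, weights, and values in the test are illustrative, not taken from the work:

```python
def wls_estimate(H, z, w):
    """Weighted-least-squares estimate x = (H^T W H)^{-1} H^T W z
    for a linear measurement model z = H x + noise with two states.

    H: list of 2-element measurement rows, z: measurements,
    w: weights (typically 1 / measurement variance).
    """
    # Accumulate the 2x2 normal matrix G = H^T W H and vector b = H^T W z
    G = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for (h1, h2), zi, wi in zip(H, z, w):
        G[0][0] += wi * h1 * h1
        G[0][1] += wi * h1 * h2
        G[1][0] += wi * h2 * h1
        G[1][1] += wi * h2 * h2
        b[0] += wi * h1 * zi
        b[1] += wi * h2 * zi
    # Solve the 2x2 system G x = b explicitly
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    x1 = (G[1][1] * b[0] - G[0][1] * b[1]) / det
    x2 = (G[0][0] * b[1] - G[1][0] * b[0]) / det
    return x1, x2
```

    With noise-free measurements the estimator recovers the true state exactly; in practice the same step is applied iteratively to a linearization of the nonlinear power-flow measurement functions.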

  16. Neural network disturbance observer-based distributed finite-time formation tracking control for multiple unmanned helicopters.

    PubMed

    Wang, Dandan; Zong, Qun; Tian, Bailing; Shao, Shikai; Zhang, Xiuyun; Zhao, Xinyi

    2018-02-01

    The distributed finite-time formation tracking control problem for multiple unmanned helicopters is investigated in this paper. The control object is to maintain the positions of follower helicopters in formation with external interferences. The helicopter model is divided into a second order outer-loop subsystem and a second order inner-loop subsystem based on multiple-time scale features. Using radial basis function neural network (RBFNN) technique, we first propose a novel finite-time multivariable neural network disturbance observer (FMNNDO) to estimate the external disturbance and model uncertainty, where the neural network (NN) approximation errors can be dynamically compensated by adaptive law. Next, based on FMNNDO, a distributed finite-time formation tracking controller and a finite-time attitude tracking controller are designed using the nonsingular fast terminal sliding mode (NFTSM) method. In order to estimate the second derivative of the virtual desired attitude signal, a novel finite-time sliding mode integral filter is designed. Finally, Lyapunov analysis and multiple-time scale principle ensure the realization of control goal in finite-time. The effectiveness of the proposed FMNNDO and controllers are then verified by numerical simulations. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Modeling correlated bursts by the bursty-get-burstier mechanism

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun

    2017-12-01

    Temporal correlations of time series or event sequences in natural and social phenomena have been characterized by power-law decaying autocorrelation functions with decay exponent γ. Such temporal correlations can be understood in terms of power-law distributed interevent times with exponent α and/or correlations between interevent times. The latter, often called correlated bursts, have recently been studied by measuring power-law distributed bursty trains with exponent β. A scaling relation between α and γ has been established for uncorrelated interevent times, while little is known about the effects of correlated interevent times on temporal correlations. In order to study these effects, we devise the bursty-get-burstier model for correlated bursts, by which one can tune the degree of correlations between interevent times while keeping the same interevent time distribution. We numerically find that sufficiently strong correlations between interevent times can violate the scaling relation between α and γ that holds in the uncorrelated case. A nontrivial dependence of γ on β is also found for some range of α. The implication of our results is discussed in terms of the hierarchical organization of bursty trains at various time scales.
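    As a simplified stand-in for the bursty-get-burstier model (whose correlation-tuning procedure is more elaborate), the sketch below samples power-law distributed interevent times by inverse transform and measures the largest bursty train. Reordering the same interevent times changes the train structure while leaving the interevent-time distribution untouched, which is the key idea behind tuning correlations at fixed distribution:

```python
import random

def powerlaw_interevent(n, alpha, tau0=1.0, seed=42):
    """Sample n interevent times from P(tau) ~ tau^(-alpha) for
    tau >= tau0, by inverse-transform sampling (requires alpha > 1)."""
    rng = random.Random(seed)
    # 1 - u is in (0, 1], avoiding a zero under the negative power
    return [tau0 * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

def max_train_size(taus, dt):
    """Size of the largest bursty train: consecutive events separated
    by interevent times <= dt belong to the same train."""
    best = run = 1
    for tau in taus:
        run = run + 1 if tau <= dt else 1
        best = max(best, run)
    return best
```

    Sorting the interevent times ascending packs all short gaps together and therefore maximizes the largest train, an extreme version of making the bursty get burstier; a random shuffle of the very same values destroys those correlations.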

  18. Biogeography of time partitioning in mammals.

    PubMed

    Bennie, Jonathan J; Duffy, James P; Inger, Richard; Gaston, Kevin J

    2014-09-23

    Many animals regulate their activity over a 24-h sleep-wake cycle, concentrating their peak periods of activity to coincide with the hours of daylight, darkness, or twilight, or using different periods of light and darkness in more complex ways. These behavioral differences, which are in themselves functional traits, are associated with suites of physiological and morphological adaptations with implications for the ecological roles of species. The biogeography of diel time partitioning is, however, poorly understood. Here, we document basic biogeographic patterns of time partitioning by mammals and ecologically relevant large-scale patterns of natural variation in "illuminated activity time" constrained by temperature, and we determine how well the first of these are predicted by the second. Although the majority of mammals are nocturnal, the distributions of diurnal and crepuscular species richness are strongly associated with the availability of biologically useful daylight and twilight, respectively. Cathemerality is associated with relatively long hours of daylight and twilight in the northern Holarctic region, whereas the proportion of nocturnal species is highest in arid regions and lowest at extreme high altitudes. Although thermal constraints on activity have been identified as key to the distributions of organisms, constraints due to functional adaptation to the light environment are less well studied. Global patterns in diversity are constrained by the availability of the temporal niche; disruption of these constraints by the spread of artificial lighting and anthropogenic climate change, and the potential effects on time partitioning, are likely to be critical influences on species' future distributions.

  19. Time-dependent resilience assessment and improvement of urban infrastructure systems

    NASA Astrophysics Data System (ADS)

    Ouyang, Min; Dueñas-Osorio, Leonardo

    2012-09-01

    This paper introduces an approach to assess and improve the time-dependent resilience of urban infrastructure systems, where resilience is defined as the systems' ability to resist various possible hazards, absorb the initial damage from hazards, and recover to normal operation one or multiple times during a time period T. For different values of T and its position relative to current time, there are three forms of resilience: previous resilience, current potential resilience, and future potential resilience. This paper mainly discusses the third form that takes into account the systems' future evolving processes. Taking the power transmission grid in Harris County, Texas, USA as an example, the time-dependent features of resilience and the effectiveness of some resilience-inspired strategies, including enhancement of situational awareness, management of consumer demand, and integration of distributed generators, are all simulated and discussed. Results show a nonlinear nature of resilience as a function of T, which may exhibit a transition from an increasing function to a decreasing function at either a threshold of post-blackout improvement rate, a threshold of load profile with consumer demand management, or a threshold number of integrated distributed generators. These results are further confirmed by studying a typical benchmark system such as the IEEE RTS-96. Such common trends indicate that some resilience strategies may enhance infrastructure system resilience in the short term, but if not managed well, they may compromise practical utility system resilience in the long run.

  20. Time-dependent resilience assessment and improvement of urban infrastructure systems.

    PubMed

    Ouyang, Min; Dueñas-Osorio, Leonardo

    2012-09-01

    This paper introduces an approach to assess and improve the time-dependent resilience of urban infrastructure systems, where resilience is defined as the systems' ability to resist various possible hazards, absorb the initial damage from hazards, and recover to normal operation one or multiple times during a time period T. For different values of T and its position relative to current time, there are three forms of resilience: previous resilience, current potential resilience, and future potential resilience. This paper mainly discusses the third form that takes into account the systems' future evolving processes. Taking the power transmission grid in Harris County, Texas, USA as an example, the time-dependent features of resilience and the effectiveness of some resilience-inspired strategies, including enhancement of situational awareness, management of consumer demand, and integration of distributed generators, are all simulated and discussed. Results show a nonlinear nature of resilience as a function of T, which may exhibit a transition from an increasing function to a decreasing function at either a threshold of post-blackout improvement rate, a threshold of load profile with consumer demand management, or a threshold number of integrated distributed generators. These results are further confirmed by studying a typical benchmark system such as the IEEE RTS-96. Such common trends indicate that some resilience strategies may enhance infrastructure system resilience in the short term, but if not managed well, they may compromise practical utility system resilience in the long run.

  1. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor-based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
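    The stochastic-sampling control law itself is not reproduced here; as a hedged baseline, the known-and-uniform-update case reduces to a standard discrete-time LQR problem, whose quadratic cost-to-go can be found by iterating the discrete algebraic Riccati equation. A scalar sketch with illustrative system values (not from the report):

```python
def scalar_dare(a, b, q, r, iters=200):
    """Iterate the scalar discrete algebraic Riccati equation
        P = q + a^2 P - (a b P)^2 / (r + b^2 P)
    to its fixed point for the system x' = a x + b u with stage cost
    q x^2 + r u^2; the optimal cost-to-go is P x^2."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return p
```

    For a = b = q = r = 1 the fixed point satisfies P² = P + 1, i.e. P is the golden ratio, which makes the iteration easy to verify by hand.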

  2. A unified Bayesian semiparametric approach to assess discrimination ability in survival analysis

    PubMed Central

    Zhao, Lili; Feng, Dai; Chen, Guoan; Taylor, Jeremy M.G.

    2015-01-01

    Summary The discriminatory ability of a marker for censored survival data is routinely assessed by the time-dependent ROC curve and the c-index. The time-dependent ROC curve evaluates the ability of a biomarker to predict whether a patient lives past a particular time t. The c-index measures the global concordance of the marker and the survival time regardless of the time point. We propose a Bayesian semiparametric approach to estimate these two measures. The proposed estimators are based on the conditional distribution of the survival time given the biomarker and the empirical biomarker distribution. The conditional distribution is estimated by a linear dependent Dirichlet process mixture model. The resulting ROC curve is smooth as it is estimated by a mixture of parametric functions. The proposed c-index estimator is shown to be more efficient than the commonly used Harrell's c-index since it uses all pairs of data rather than only informative pairs. The proposed estimators are evaluated through simulations and illustrated using a lung cancer dataset. PMID:26676324
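    For contrast with the proposed Bayesian estimator, the commonly used Harrell's c-index counts concordance only over informative pairs (the pair is usable when the smaller observed time is an actual event). A minimal sketch; the toy data in the test are illustrative:

```python
def harrell_c(times, events, scores):
    """Harrell's c-index for right-censored survival data.

    times: observed times; events: 1 if failure, 0 if censored;
    scores: risk scores, where a higher score should predict a
    shorter survival time. Only informative pairs contribute.
    """
    concordant = usable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so that a has the smaller observed time
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or not events[a]:
                continue  # tied times or censored first: uninformative
            usable += 1
            if scores[a] > scores[b]:
                concordant += 1
            elif scores[a] == scores[b]:
                concordant += 0.5
    return concordant / usable
```

    The abstract's point is precisely that this estimator discards non-informative pairs; the proposed Bayesian semiparametric c-index uses all pairs and is therefore more efficient.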

  3. Shapiro effect as a possible cause of the low-frequency pulsar timing noise in globular clusters

    NASA Astrophysics Data System (ADS)

    Larchenkova, T. I.; Kopeikin, S. M.

    2006-01-01

    A prolonged timing of millisecond pulsars has revealed low-frequency uncorrelated (infrared) noise, presumably of astrophysical origin, in the pulse arrival time (PAT) residuals for some of them. Currently available pulsar timing methods allow the statistical parameters of this noise to be reliably measured by decomposing the PAT residual function into orthogonal Fourier harmonics. In most cases, pulsars in globular clusters show a low-frequency modulation of their rotational phase and spin rate. The relativistic time delay of the pulsar signal in the curved spacetime of randomly distributed and moving globular cluster stars (the Shapiro effect) is suggested as a possible cause of this modulation. Extremely important (from an astrophysical point of view) information about the structure of the globular cluster core, which is inaccessible to study by other observational methods, could be obtained by analyzing the spectral parameters of the low-frequency noise caused by the Shapiro effect and attributable to the random passages of stars near the line of sight to the pulsar. Given the smallness of the aberration corrections that arise from the nonstationarity of the gravitational field of the randomly distributed ensemble of stars under consideration, a formula is derived for the Shapiro effect for a pulsar in a globular cluster. The derived formula is used to calculate the autocorrelation function of the low-frequency pulsar noise, the slope of its power spectrum, and the behavior of the σz statistic that characterizes the spectral properties of this noise in the form of a time function. The Shapiro effect under discussion is shown to manifest itself for large impact parameters as a low-frequency noise of the pulsar spin rate with a spectral index of n = -1.8 that depends weakly on the specific model distribution of stars in the globular cluster. For small impact parameters, the spectral index of the noise is n = -1.5.
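    The paper's derived formula is not reproduced here; a rough order-of-magnitude check uses the textbook form of the one-way Shapiro delay past a point mass, Δt ≈ (2GM/c³) ln(2d/b). The star mass, source distance, and impact parameter below are illustrative assumptions, not values from the paper:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
PC = 3.086e16        # parsec, m
AU = 1.496e11        # astronomical unit, m

def shapiro_delay(mass_kg, d_m, b_m):
    """Approximate one-way Shapiro delay (seconds) for a signal
    passing a point mass at impact parameter b, with the source at
    distance d >> b behind the deflector."""
    return 2 * G * mass_kg / C**3 * math.log(2 * d_m / b_m)

# One solar-mass star, line of sight passing 100 AU from it,
# pulsar 10 pc behind (illustrative globular-cluster geometry).
delay = shapiro_delay(M_SUN, 10 * PC, 100 * AU)
```

    The prefactor 2GM/c³ is about 10 μs per solar mass, so a single stellar passage contributes a delay of order 0.1 ms; the low-frequency noise discussed in the paper arises from the slow time variation of such delays as cluster stars move across the line of sight.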

  4. Simple reaction time in 8-9-year old children environmentally exposed to PCBs.

    PubMed

    Šovčíková, Eva; Wimmerová, Soňa; Strémy, Maximilián; Kotianová, Janette; Loffredo, Christopher A; Murínová, Ľubica Palkovičová; Chovancová, Jana; Čonka, Kamil; Lancz, Kinga; Trnovec, Tomáš

    2015-12-01

    Simple reaction time (SRT) has been studied in children exposed to polychlorinated biphenyls (PCBs), with variable results. In the current work we examined SRT in 146 boys and 161 girls, aged 8.53 ± 0.65 years (mean ± SD), exposed to PCBs in the environment of eastern Slovakia. We divided the children into tertiles with regard to increasing PCB serum concentration. The mean ± SEM serum concentration of the sum of 15 PCB congeners was 191.15 ± 5.39, 419.23 ± 8.47, and 1315.12 ± 92.57 ng/g lipids in children of the first, second, and third tertiles, respectively. We created probability distribution plots for each child from their multiple trials of the SRT testing. We fitted response time distributions from all valid trials with the ex-Gaussian function, a convolution of a normal and an exponential function, providing estimates of three independent parameters μ, σ, and τ. μ is the mean of the normal component, σ is the standard deviation of the normal component, and τ is the mean of the exponential component. Group response time distributions were calculated using the Vincent averaging technique. A Q-Q plot comparing probability distribution of the first vs. third tertile indicated that deviation of the quantiles of the latter tertile from those of the former begins at the 40th percentile and does not show a positive acceleration. This was confirmed in comparison of the ex-Gaussian parameters of these two tertiles adjusted for sex, age, Raven IQ of the child, mother's and father's education, behavior at home and school, and BMI: the results showed that the parameters μ and τ significantly (p ≤ 0.05) increased with PCB exposure. Similar increases of the ex-Gaussian parameter τ in children suffering from ADHD have been previously reported and interpreted as intermittent attentional lapses, but were not seen in our cohort. Our study has confirmed that environmental exposure of children to PCBs is associated with prolongation of simple reaction time reflecting impairment of cognitive functions. Copyright © 2015 Elsevier Inc. All rights reserved.
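    The study's actual fitting procedure (maximum-likelihood fits plus Vincent averaging) is not reproduced here; a simple method-of-moments stand-in exploits the ex-Gaussian identities mean = μ + τ, variance = σ² + τ², and third central moment = 2τ³. The simulated reaction-time parameters below are illustrative, not the study's values:

```python
import random

def exgauss_moment_fit(xs):
    """Method-of-moments estimates (mu, sigma, tau) for an ex-Gaussian
    sample, using mean = mu + tau, var = sigma^2 + tau^2, and third
    central moment m3 = 2 tau^3."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    tau = max(m3 / 2.0, 0.0) ** (1.0 / 3.0)
    mu = mean - tau
    sigma = max(var - tau * tau, 0.0) ** 0.5
    return mu, sigma, tau

# Simulated reaction times (seconds): a normal component
# (mu = 0.35, sigma = 0.05) plus an exponential tail (tau = 0.15).
rng = random.Random(0)
sample = [rng.gauss(0.35, 0.05) + rng.expovariate(1 / 0.15)
          for _ in range(50000)]
```

    Moment estimators are noisier than maximum likelihood (the third moment is sensitive to outliers), but they make the roles of μ, σ, and τ in the distribution's location, width, and tail explicit.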

  5. A Distributed Representation of Remembered Time

    DTIC Science & Technology

    2015-11-19

    Howard, M. W. The hippocampus, time, and memory across scales. Journal of Experimental Psychology: General, 142(4), 1211-30. doi: 10.1037/a0033621. [...] accomplished this goal by developing a computational framework that describes a wide range of functional cellular correlates in the hippocampus and [...]

  6. Discrete linear canonical transforms based on dilated Hermite functions.

    PubMed

    Pei, Soo-Chang; Lai, Yun-Chiu

    2011-08-01

    Linear canonical transform (LCT) is very useful and powerful in signal processing and optics. In this paper, discrete LCT (DLCT) is proposed to approximate LCT by utilizing the discrete dilated Hermite functions. The Wigner distribution function is also used to investigate DLCT performances in the time-frequency domain. Compared with the existing digital computation of LCT, our proposed DLCT possesses additivity and reversibility properties with no oversampling involved. In addition, the length of input/output signals will not be changed before and after the DLCT transformations, which is consistent with the time-frequency area-preserving nature of LCT; meanwhile, the proposed DLCT has very good approximation of continuous LCT.

  7. Development of confidence limits by pivotal functions for estimating software reliability

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1987-01-01

    The utility of pivotal functions is established for assessing software reliability. Based on the Moranda geometric de-eutrophication model of reliability growth, confidence limits for attained reliability and prediction limits for the time to the next failure are derived using a pivotal function approach. Asymptotic approximations to the confidence and prediction limits are considered and are shown to be inadequate in cases where only a few bugs are found in the software. Departures from the assumed exponentially distributed interfailure times in the model are also investigated. The effect of these departures is discussed relative to restricting the use of the Moranda model.

  8. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown for the contact process, provides a significant improvement over the standard large deviation function estimators.
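    The paper's specific scaling forms are not reproduced here; the generic idea of extrapolating a finite-time estimator to its infinite-time limit can be sketched as a linear fit in 1/t (the synthetic data in the test assume a pure 1/t correction, which is an illustrative simplification):

```python
def extrapolate_infinite_time(ts, psis):
    """Least-squares fit of psi(t) = psi_inf + a / t, returning the
    infinite-time intercept psi_inf. The fit is linear in x = 1/t,
    so the intercept is the value extrapolated to 1/t -> 0."""
    xs = [1.0 / t for t in ts]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(psis) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, psis))
    slope = sxy / sxx
    return my - slope * mx
```

    The same pattern applies to the finite-population-size correction, with 1/N in place of 1/t; the paper's contribution is identifying and exploiting these scaling behaviors for cloning-algorithm estimators.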

  9. Score Estimating Equations from Embedded Likelihood Functions under Accelerated Failure Time Model

    PubMed Central

    NING, JING; QIN, JING; SHEN, YU

    2014-01-01

    SUMMARY The semiparametric accelerated failure time (AFT) model is one of the most popular models for analyzing time-to-event outcomes. One appealing feature of the AFT model is that the observed failure time data can be transformed to independent and identically distributed random variables without covariate effects. We describe a class of estimating equations based on the score functions for the transformed data, which are derived from the full likelihood function under commonly used semiparametric models such as the proportional hazards or proportional odds model. The methods of estimating regression parameters under the AFT model can be applied to traditional right-censored survival data as well as more complex time-to-event data subject to length-biased sampling. We establish the asymptotic properties and evaluate the small sample performance of the proposed estimators. We illustrate the proposed methods through applications in two examples. PMID:25663727

  10. New concept of the contraction-extension property of the left ventricular myocardium.

    PubMed

    Tanaka, Motonao; Sakamoto, Tsuguya; Sugawara, Shigeo; Katahira, Yoshiaki; Tabuchi, Haruna; Nakajima, Hiroyuki; Kurokawa, Takafumi; Kanai, Hiroshi; Hasegawa, Hideyuki; Ohtsuki, Shigeo

    2014-04-01

    Using newly developed ultrasonic technology, we attempted to disclose the characteristics of the left ventricular (LV) contraction-extension (C-E) property, which has an important relationship to LV function. Strain rate (SR) distribution within the posterior wall and interventricular septum was measured with a fine spatial resolution of 821 μm by using the phase difference tracking method. The subjects were 10 healthy men (aged 30-50 years). The time course of the SR distribution disclosed the characteristic C-E property, i.e. the contraction started from the apex and propagated toward the base on one hand, and from the epicardial side toward the endocardial side on the other hand. Therefore, the contraction of one area and the extension of another area simultaneously appeared through nearly the whole cardiac cycle, with the contracting part positively extending the latter part and vice versa. The time course of these propagations gave rise to the peristalsis and the bellows action of the LV wall, and both contributed to effective LV function. The LV contraction started coinciding in time with the P wave of the electrocardiogram, and the cardiac cycle was composed of 4 phases, including 2 types of transitional phase, as well as the ejection phase and slow filling phase. The contraction and extension processes occupied nearly equal total durations under normal conditions. The newly developed ultrasonic technology revealed that the SR distribution was important in evaluating the C-E property of the LV myocardium. The harmonious succession of the 4 newly identified cardiac phases seemed to be helpful in understanding the mechanism that sustains the long-lasting pump function of the LV. Copyright © 2013 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  11. Second-order Boltzmann equation: gauge dependence and gauge invariance

    NASA Astrophysics Data System (ADS)

    Naruko, Atsushi; Pitrou, Cyril; Koyama, Kazuya; Sasaki, Misao

    2013-08-01

    In the context of cosmological perturbation theory, we derive the second-order Boltzmann equation describing the evolution of the distribution function of radiation without a specific gauge choice. The essential steps in deriving the Boltzmann equation are revisited and extended given this more general framework: (i) the polarization of light is incorporated in this formalism by using a tensor-valued distribution function; (ii) the importance of a choice of the tetrad field to define the local inertial frame in the description of the distribution function is emphasized; (iii) we perform a separation between temperature and spectral distortion, both for the intensity and polarization for the first time; (iv) the gauge dependence of all perturbed quantities that enter the Boltzmann equation is derived, and this enables us to check the correctness of the perturbed Boltzmann equation by explicitly showing its gauge-invariance for both intensity and polarization. We finally discuss several implications of the gauge dependence for the observed temperature.

  12. Modeling chloride transport using travel time distributions at Plynlimon, Wales

    NASA Astrophysics Data System (ADS)

    Benettin, Paolo; Kirchner, James W.; Rinaldo, Andrea; Botter, Gianluca

    2015-05-01

    Here we present a theoretical interpretation of high-frequency, high-quality tracer time series from the Hafren catchment at Plynlimon in mid-Wales. We make use of the formulation of transport by travel time distributions to model chloride transport originating from atmospheric deposition and compute catchment-scale travel time distributions. The relevance of the approach lies in the explanatory power of the chosen tools, particularly to highlight hydrologic processes otherwise clouded by the integrated nature of the measured outflux signal. The analysis reveals the key role of residual storages that are poorly visible in the hydrological response, but are shown to strongly affect water quality dynamics. A significant accuracy in reproducing data is shown by our calibrated model. A detailed representation of catchment-scale travel time distributions has been derived, including the time evolution of the overall dispersion processes (which can be expressed in terms of time-varying storage sampling functions). Mean computed travel times span a broad range of values (from 80 to 800 days) depending on the catchment state. Results also suggest that, in the average, discharge waters are younger than storage water. The model proves able to capture high-frequency fluctuations in the measured chloride concentrations, which are broadly explained by the sharp transition between groundwaters and faster flows originating from topsoil layers. This article was corrected on 22 JUN 2015. See the end of the full text for details.
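    The catchment model itself is not reproduced here; the simplest special case of a travel time distribution is a single well-mixed store, whose travel times are exponential with mean S/Q (a uniform storage sampling function). A minimal washout sketch, with illustrative storage and flow values (not Plynlimon parameters):

```python
import math

def tracer_washout(S, Q, c0, dt, steps):
    """Forward-Euler washout of a conservative tracer from a single
    well-mixed storage S (e.g. mm) drained at steady flow Q (mm/day).
    A well-mixed store has an exponential travel time distribution
    with mean S/Q, so concentration decays as c0 * exp(-(Q/S) * t).
    Returns the concentration time series."""
    c = c0
    out = [c]
    for _ in range(steps):
        c += -(Q / S) * c * dt    # dC/dt = -(Q/S) C
        out.append(c)
    return out
```

    After one mean travel time S/Q, the concentration has dropped to about e⁻¹ of its initial value; the paper's time-varying storage sampling functions generalize exactly this picture to non-uniform, time-dependent sampling of storage.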

  13. Bayesian explorations of fault slip evolution over the earthquake cycle

    NASA Astrophysics Data System (ADS)

    Duputel, Z.; Jolivet, R.; Benoit, A.; Gombert, B.

    2017-12-01

    The ever-increasing amount of geophysical data continuously opens new perspectives on fundamental aspects of the seismogenic behavior of active faults. In this context, the recent fleet of SAR satellites including Sentinel-1 and COSMO-SkyMED permits the use of InSAR for time-dependent slip modeling with unprecedented resolution in time and space. However, existing time-dependent slip models rely on spatial smoothing regularization schemes, which can produce unrealistically smooth slip distributions. In addition, these models usually do not include uncertainty estimates, which limits the utility of the resulting slip models. Here, we develop an entirely new approach to derive probabilistic time-dependent slip models. This Markov-Chain Monte Carlo method involves a series of transitional steps to predict and update posterior Probability Density Functions (PDFs) of slip as a function of time. We assess the viability of our approach using various slow-slip event scenarios. Using a dense set of SAR images, we also use this method to quantify the spatial distribution and temporal evolution of slip along a creeping segment of the North Anatolian Fault. This allows us to track a shallow aseismic slip transient lasting for about a month with a maximum slip of about 2 cm.

  14. An operating system for future aerospace vehicle computer systems

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.

    1984-01-01

    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node unique objects with node common objects in order to implement both the autonomy and the cooperation between nodes is developed. The requirements for time critical performance and reliability and recovery are discussed. Time critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs (concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc.) are considered to arrive at a feasible, maximum-performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time critical messages. The architecture also supports immediate recovery for the time critical message system after a communication failure.

  15. Trace element distribution in the rat cerebellum

    NASA Astrophysics Data System (ADS)

    Kwiatek, W. M.; Long, G. J.; Pounds, J. G.; Reuhl, K. R.; Hanson, A. L.; Jones, K. W.

    1990-04-01

    Spatial distributions and concentrations of trace elements (TE) in the brain are important because TE perform catalytic and structural functions in enzymes which regulate brain function and development. We have investigated the distributions of TE in rat cerebellum. Structures were sectioned and analyzed by the Synchrotron Radiation Induced X-ray Emission (SRIXE) method using the NSLS X-26 white-light microprobe facility. Advantages important for TE analysis of biological specimens with X-ray microscopy include short time of measurement, high brightness and flux, good spatial resolution, multielemental detection, good sensitivity, and nondestructive irradiation. Trace elements were measured in thin rat brain sections of 20 μm thickness. The analyses were performed on sample volumes as small as 0.2 nl with Minimum Detectable Limits (MDL) of 50 ppb wet weight for Fe, 100 ppb wet weight for Cu and Zn, and 1 ppm wet weight for Pb. The distribution of TE in the molecular cell layer, granule cell layer and fiber tract of rat cerebella was investigated. Both point analyses and two-dimensional semiquantitative mapping of the TE distribution in a section were used. All analyzed elements were observed in each structure of the cerebellum except mercury, which was not observed in the granule cell layer or fiber tract. This approach permits an exacting correlation of the TE distribution in complex structure with the diet, toxic elements, and functional status of the animal.

  16. Phytoplankton pigment patterns and wind forcing off central California

    NASA Technical Reports Server (NTRS)

    Abbott, Mark R.; Barksdale, Brett

    1991-01-01

    Mesoscale variability in phytoplankton pigment distributions off central California during the spring-summer upwelling season is studied via a 4-yr time series of high-resolution coastal zone color scanner imagery. Empirical orthogonal functions are used to decompose the time series of spatial images into its dominant modes of variability. The coupling between wind forcing of the upper ocean and phytoplankton distribution on mesoscales is investigated. Wind forcing, in particular the curl of the wind stress, was found to play an important role in the distribution of phytoplankton pigment in the California Current. The spring transition varies in timing and intensity from year to year but appears to be a recurrent feature associated with the rapid onset of upwelling-favorable winds. Although the underlying dynamics may be dominated by processes other than forcing by wind stress curl, it appears that curl may force the variability of the filaments and hence the pigment patterns.
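    Empirical orthogonal function (EOF) analysis of an image time series, as used above, amounts to a singular value decomposition of the anomaly matrix: each mode pairs a spatial pattern with a principal-component time series and an explained-variance fraction. A minimal sketch, with a `(time, space)` data matrix standing in for the flattened imagery:

    ```python
    import numpy as np

    def eof_decompose(field, n_modes=3):
        """EOF analysis of a (time, space) data matrix.

        Removes the time mean at each grid point, then uses the SVD to split
        the anomalies into spatial patterns (EOFs), principal-component time
        series (PCs), and the variance fraction each mode explains.
        """
        anomalies = field - field.mean(axis=0)       # remove the climatology
        u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
        variance_frac = s**2 / np.sum(s**2)
        pcs = u[:, :n_modes] * s[:n_modes]           # time series of each mode
        eofs = vt[:n_modes]                          # spatial patterns
        return eofs, pcs, variance_frac[:n_modes]
    ```

    A field built from a single oscillating pattern is recovered as one mode carrying essentially all the variance, which is the sense in which EOFs isolate "dominant modes of variability".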

  17. Fast Gated EPR Imaging of the Beating Heart: Spatiotemporally-Resolved 3D Imaging of Free Radical Distribution during the Cardiac Cycle

    PubMed Central

    Chen, Zhiyu; Reyes, Levy A.; Johnson, David H.; Velayutham, Murugesan; Yang, Changjun; Samouilov, Alexandre; Zweier, Jay L.

    2012-01-01

    In vivo or ex vivo electron paramagnetic resonance imaging (EPRI) is a powerful technique for determining the spatial distribution of free radicals and other paramagnetic species in living organs and tissues. However, applications of EPRI have been limited by long projection acquisition times and the consequent fact that rapid gated EPRI was not possible; hence, in vivo EPRI has typically provided only time-averaged information. In order to achieve direct gated EPRI, a fast EPR acquisition scheme was developed to decrease EPR projection acquisition time to 10-20 ms, along with corresponding software and instrumentation, enabling fast gated EPRI of the isolated beating heart with submillimeter spatial resolution in as little as 2 to 3 minutes. Reconstructed images display temporal and spatial variations of the free radical distribution, anatomical structure, and contractile function within the rat heart during the cardiac cycle. PMID:22473660
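    The core bookkeeping behind cardiac gating is to assign each fast projection to a phase of the cardiac cycle before reconstruction, so that all projections in one bin see the heart in (approximately) the same configuration. The abstract does not describe its gating algorithm; the sketch below shows one common scheme (retrospective phase binning against trigger times) purely as an illustration, with hypothetical names.

    ```python
    import bisect

    def gate_projections(times_ms, r_wave_times_ms, n_bins=8):
        """Retrospective gating sketch: assign each projection acquisition time
        to a cardiac-phase bin, given R-wave trigger times in ascending order.

        Phase = (t - previous R wave) / (current R-R interval), binned into
        n_bins. Projections outside any complete R-R interval are discarded.
        """
        bins = [[] for _ in range(n_bins)]
        for idx, t in enumerate(times_ms):
            i = bisect.bisect_right(r_wave_times_ms, t) - 1
            if i < 0 or i + 1 >= len(r_wave_times_ms):
                continue  # no complete cycle bracketing this projection
            rr = r_wave_times_ms[i + 1] - r_wave_times_ms[i]
            phase = (t - r_wave_times_ms[i]) / rr
            bins[min(int(phase * n_bins), n_bins - 1)].append(idx)
        return bins
    ```

    With 10-20 ms projections and a rat R-R interval of roughly 150-200 ms, each heartbeat contributes only a few projections per bin, which is why several thousand beats (2-3 minutes) are accumulated per image.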

  18. Baldovin-Stella stochastic volatility process and Wiener process mixtures

    NASA Astrophysics Data System (ADS)

    Peirano, P. P.; Challet, D.

    2012-08-01

    Starting from inhomogeneous time scaling and linear decorrelation between successive price returns, Baldovin and Stella recently proposed a powerful and consistent way to build a model describing the time evolution of a financial index. We first make the model fully explicit by using Student distributions instead of power-law-truncated Lévy distributions, and show that its analytic tractability extends to the larger class of symmetric generalized hyperbolic distributions, for which we provide a full computation of the multivariate characteristic functions. More generally, we show that the stochastic processes arising in this framework are representable as mixtures of Wiener processes. The basic Baldovin and Stella model, while mimicking well volatility-relaxation phenomena such as the Omori law, fails to reproduce other stylized facts such as the leverage effect and some time-reversal asymmetries. We discuss how to modify the dynamics of the process in order to reproduce real data more accurately.
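    The Wiener-mixture representation has a standard concrete instance: a Student-distributed return is a Gaussian whose variance is itself random, r = σ·Z·√(ν/V) with Z standard normal and V chi-square with ν degrees of freedom. A minimal sketch of drawing returns through this scale-mixture construction (function name and parameters are illustrative, not from the paper):

    ```python
    import numpy as np

    def student_returns_as_mixture(n, nu, sigma=1.0, seed=0):
        """Draw n returns from a Student distribution via its representation
        as a variance mixture of Gaussians: r = sigma * Z * sqrt(nu / V),
        Z ~ N(0, 1), V ~ chi-square(nu). Mixing the Gaussian scale is the
        one-dimensional analogue of mixing Wiener processes.
        """
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n)
        v = rng.chisquare(nu, size=n)
        return sigma * z * np.sqrt(nu / v)
    ```

    For ν = 10 and σ = 1 the variance converges to ν/(ν-2) = 1.25, and the tails are heavier than Gaussian, which is what makes Student marginals attractive for return data.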

  19. Overview of Sea-Ice Properties, Distribution and Temporal Variations, for Application to Ice-Atmosphere Chemical Processes.

    NASA Astrophysics Data System (ADS)

    Moritz, R. E.

    2005-12-01

    The properties, distribution, and temporal variation of sea ice are reviewed for application to problems of ice-atmosphere chemical processes. Typical vertical structure of sea ice is presented for different ice types, including young ice, first-year ice, and multi-year ice, emphasizing factors relevant to surface chemistry and gas exchange. Time-averaged annual cycles of large-scale variables are presented, including ice concentration, ice extent, ice thickness, and ice age. Spatial and temporal variability of these large-scale quantities is considered on time scales of 1-50 years, emphasizing recent and projected changes in the Arctic pack ice. The amount and time evolution of open water and thin ice are important factors that influence ocean-ice-atmosphere chemical processes. Observations and modeling of the sea-ice thickness distribution function are presented to characterize the range of variability in open water and thin ice.
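    The thickness distribution function g(h) mentioned above is defined so that g(h)·dh is the fraction of area covered by ice of thickness between h and h + dh, with h = 0 counted as open water. A minimal sketch of estimating it from sampled thicknesses (names and bin choices are illustrative):

    ```python
    import numpy as np

    def thickness_distribution(h_samples, bin_edges):
        """Estimate the ice-thickness distribution g(h) from thickness samples
        (h = 0 meaning open water), normalised so that sum(g * dh) = 1."""
        counts, edges = np.histogram(h_samples, bins=bin_edges)
        widths = np.diff(edges)
        return counts / (counts.sum() * widths)

    def thin_ice_fraction(h_samples, h_thin=0.1):
        """Fraction of area covered by open water or ice thinner than h_thin (m)."""
        return float(np.mean(np.asarray(h_samples) < h_thin))
    ```

    The open-water and thin-ice fraction is exactly the quantity the abstract flags as controlling ocean-ice-atmosphere gas and chemical exchange, since exchange through thick ice is strongly suppressed.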

  20. Distributed dynamic simulations of networked control and building performance applications.

    PubMed

    Yahiaoui, Azzedine

    2018-02-01

    The use of computer-based automation and control systems for smart sustainable buildings, often called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while keeping energy consumption to a minimum; this approach is generally referred to as a Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and to improve the functions of BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment capable of representing the BACS architecture in simulation by run-time coupling two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated through an experimental design.
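    The essence of run-time coupling is a lock-step exchange loop: at each time step the building model advances using the controller's last output, then the controller reacts to the new building state. The sketch below keeps both sides as local function calls for clarity; in the distributed setting the paper describes, each call would instead be a message exchanged over the network. The zone and thermostat models, and all their parameters, are toy assumptions.

    ```python
    def cosimulate(building_step, controller_step, t_end, dt,
                   zone_temp0=20.0, heat0=0.0):
        """Minimal lock-step co-simulation loop: plant and controller exchange
        their coupling variables (zone temperature, heating power) once per
        step. Returns a list of (time_s, temp_C, heat_W) tuples.
        """
        t, temp, heat = 0.0, zone_temp0, heat0
        history = []
        while t < t_end:
            temp = building_step(temp, heat, dt)  # plant advances one step
            heat = controller_step(temp)          # control reacts to new state
            t += dt
            history.append((t, temp, heat))
        return history

    # Hypothetical toy models: a first-order zone and a bang-bang thermostat.
    def zone(temp, heat, dt, t_out=0.0, tau=3600.0, gain=1e-6):
        return temp + dt * ((t_out - temp) / tau + gain * heat)

    def thermostat(temp, setpoint=21.0):
        return 5000.0 if temp < setpoint else 0.0
    ```

    Running one simulated hour at 60 s steps shows the loop settling near the balance point of heating power and envelope losses; swapping either function for a networked proxy leaves the loop structure unchanged, which is the design point of run-time coupling.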
