Sample records for mixed exponential distribution

  1. Modeling of magnitude distributions by the generalized truncated exponential distribution

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-01-01

    The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
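    For orientation only, a minimal sketch of the distributions involved: the TED as commonly written for the Gutenberg-Richter law, and a generic mixture over cutoff points of the kind the abstract describes (schematic notation; the exact GTED definition is the one given in the paper).

        % Truncated exponential distribution (TED) for m_min <= m <= m_max
        F_{\mathrm{TED}}(m) = \frac{1 - e^{-\beta (m - m_{\min})}}{1 - e^{-\beta (m_{\max} - m_{\min})}}
        % Schematic cutoff mixture: identical exponential laws mixed over a
        % distribution H of cutoff points m_c (illustration, not the paper's GTED formula)
        F(m) = \int F\!\left(m \mid \beta, m_c\right) \, \mathrm{d}H(m_c)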

  2. A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.

    DTIC Science & Technology

    1981-06-01

    Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on ... these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when ... assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that

  3. The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Brissette, Fancois; Chen, Jie

    2013-04-01

    Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain due to its simplicity and good performance. However, various probability distributions have been reported to simulate precipitation amount, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amount. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices or statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify the performance in simulating the precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated to the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is not nearly as obvious, the mixed exponential distribution appears nonetheless as the best candidate for hydrological modeling. The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
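    As a rough, hedged illustration of how a three-parameter mixed exponential can be fitted to wet-day amounts, here is a generic EM loop on synthetic data in Python; the variable names and numbers are placeholders, not the study's data or code.

        import numpy as np

        rng = np.random.default_rng(0)
        # synthetic "wet-day amounts" (mm): a light-rain and a heavy-rain component
        amounts = np.concatenate([rng.exponential(2.0, 700), rng.exponential(15.0, 300)])

        w, mu1, mu2 = 0.5, amounts.mean() / 2.0, amounts.mean() * 2.0   # crude starting values
        for _ in range(200):
            # E-step: responsibility of the light-rain component for each day
            p1 = w * np.exp(-amounts / mu1) / mu1
            p2 = (1.0 - w) * np.exp(-amounts / mu2) / mu2
            r = p1 / (p1 + p2)
            # M-step: update the mixing weight and the two component means
            w = r.mean()
            mu1 = np.sum(r * amounts) / np.sum(r)
            mu2 = np.sum((1.0 - r) * amounts) / np.sum(1.0 - r)

        loglik = np.sum(np.log(w * np.exp(-amounts / mu1) / mu1
                               + (1.0 - w) * np.exp(-amounts / mu2) / mu2))
        print(w, mu1, mu2, loglik)   # three fitted parameters and the final log-likelihood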

  4. Chronology of Postglacial Eruptive Activity and Calculation of Eruption Probabilities for Medicine Lake Volcano, Northern California

    USGS Publications Warehouse

    Nathenson, Manuel; Donnelly-Nolan, Julie M.; Champion, Duane E.; Lowenstern, Jacob B.

    2007-01-01

    Medicine Lake volcano has had 4 eruptive episodes in its postglacial history (since 13,000 years ago) comprising 16 eruptions. Time intervals between events within the episodes are relatively short, whereas time intervals between the episodes are much longer. An updated radiocarbon chronology for these eruptions is presented that uses paleomagnetic data to constrain the choice of calibrated ages. This chronology is used with exponential, Weibull, and mixed-exponential probability distributions to model the data for time intervals between eruptions. The mixed exponential distribution is the best match to the data and provides estimates for the conditional probability of a future eruption given the time since the last eruption. The probability of an eruption at Medicine Lake volcano in the next year from today is 0.00028.
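    To make the conditional-probability statement concrete, a small hedged Python sketch of how such a number follows from a fitted mixed exponential repose-time model; the weight and means below are invented for illustration and are not the paper's fitted values.

        import numpy as np

        def survival(t, w, mu1, mu2):
            """Survival function of a two-component exponential mixture of repose times."""
            return w * np.exp(-t / mu1) + (1.0 - w) * np.exp(-t / mu2)

        def conditional_prob(t_since_last, dt, w, mu1, mu2):
            """P(next eruption within dt years | quiet for t_since_last years)."""
            return 1.0 - survival(t_since_last + dt, w, mu1, mu2) / survival(t_since_last, w, mu1, mu2)

        # illustrative numbers only: a short within-episode scale and a long between-episode scale
        print(conditional_prob(t_since_last=950.0, dt=1.0, w=0.7, mu1=200.0, mu2=3000.0))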

  5. The generalized truncated exponential distribution as a model for earthquake magnitudes

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-04-01

    The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for modelling magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of the statistical inference are also discussed and an example of empirical data is presented in the current contribution.

  6. Eruption probabilities for the Lassen Volcanic Center and regional volcanism, northern California, and probabilities for large explosive eruptions in the Cascade Range

    USGS Publications Warehouse

    Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick

    2012-01-01

    Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4×10⁻⁴ for the exponential distribution and 2.3×10⁻⁴ for the mixed exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5×10⁻⁴, but the mixed exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (≳5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes ≳5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate the annual probability of a large eruption at 4.6×10⁻⁴. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate the annual probability of a large eruption in the next year at 1.4×10⁻⁵.

  7. Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan

    2017-12-01

    This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with an amplify-and-forward relaying scheme. The RF link undergoes Nakagami-m fading, and the Exponentiated Weibull distribution is adopted for the FSO component. The mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF) and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. According to the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit-error rate (BER) based on the Meijer G-function. Evaluations and simulations of the system performance are provided, and the aperture averaging effect is discussed as well.

  8. Analysis of crackling noise using the maximum-likelihood method: Power-law mixing and exponential damping.

    PubMed

    Salje, Ekhard K H; Planes, Antoni; Vives, Eduard

    2017-10-01

    Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
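    For readers unfamiliar with the fitting step, a minimal sketch of the standard continuous power-law maximum-likelihood estimator with a lower cutoff (the generic Clauset-type formula, not the authors' analysis code):

        import numpy as np

        def powerlaw_mle(x, x_min):
            """ML estimate of alpha for a continuous power law p(x) ~ x^(-alpha), x >= x_min."""
            tail = x[x >= x_min]
            return 1.0 + tail.size / np.sum(np.log(tail / x_min))

        rng = np.random.default_rng(1)
        # synthetic Pareto sample with true exponent 1.8 above x_min = 1 (inverse-CDF sampling)
        sample = (1.0 - rng.random(10_000)) ** (-1.0 / 0.8)
        print(powerlaw_mle(sample, x_min=1.0))   # should come out close to 1.8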

  9. A comparative study of mixed exponential and Weibull distributions in a stochastic model replicating a tropical rainfall process

    NASA Astrophysics Data System (ADS)

    Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah

    2014-11-01

    A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
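    As a hedged illustration of the intensity component only: sampling rain-cell intensities from a two-component mixed exponential in Python (the weight and means are placeholders, not the fitted Damansara basin parameters).

        import numpy as np

        def sample_mixed_exponential(n, weight, mean_low, mean_high, rng):
            """Draw n values from weight*Exp(mean_low) + (1-weight)*Exp(mean_high)."""
            use_low = rng.random(n) < weight
            means = np.where(use_low, mean_low, mean_high)
            return rng.exponential(means)

        rng = np.random.default_rng(42)
        intensities = sample_mixed_exponential(10_000, weight=0.8, mean_low=1.5, mean_high=12.0, rng=rng)
        print(intensities.mean())   # close to 0.8*1.5 + 0.2*12.0 = 3.6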

  10. Apparent power-law distributions in animal movements can arise from intraspecific interactions

    PubMed Central

    Breed, Greg A.; Severns, Paul M.; Edwards, Andrew M.

    2015-01-01

    Lévy flights have gained prominence for analysis of animal movement. In a Lévy flight, step-lengths are drawn from a heavy-tailed distribution such as a power law (PL), and a large number of empirical demonstrations have been published. Others, however, have suggested that animal movement is ill fit by PL distributions or contend a state-switching process better explains apparent Lévy flight movement patterns. We used a mix of direct behavioural observations and GPS tracking to understand step-length patterns in females of two related butterflies. We initially found movement in one species (Euphydryas editha taylori) was best fit by a bounded PL, evidence of a Lévy flight, while the other (Euphydryas phaeton) was best fit by an exponential distribution. Subsequent analyses introduced additional candidate models and used behavioural observations to sort steps based on intraspecific interactions (interactions were rare in E. phaeton but common in E. e. taylori). These analyses showed a mixed-exponential is favoured over the bounded PL for E. e. taylori and that when step-lengths were sorted into states based on the influence of harassing conspecific males, both states were best fit by simple exponential distributions. The direct behavioural observations allowed us to infer the underlying behavioural mechanism is a state-switching process driven by intraspecific interactions rather than a Lévy flight. PMID:25519992

  11. The effect of convective boundary condition on MHD mixed convection boundary layer flow over an exponentially stretching vertical sheet

    NASA Astrophysics Data System (ADS)

    Isa, Siti Suzilliana Putri Mohamed; Arifin, Norihan Md.; Nazar, Roslinda; Bachok, Norfifah; Ali, Fadzilah Md

    2017-12-01

    A theoretical study that describes the magnetohydrodynamic mixed convection boundary layer flow with heat transfer over an exponentially stretching sheet with an exponential temperature distribution has been presented herein. This study is conducted in the presence of convective heat exchange at the surface and its surroundings. The system is controlled by viscous dissipation and internal heat generation effects. The governing nonlinear partial differential equations are converted into ordinary differential equations by a similarity transformation. The converted equations are then solved numerically using the shooting method. The results related to skin friction coefficient, local Nusselt number, velocity and temperature profiles are presented for several sets of values of the parameters. The effects of the governing parameters on the features of the flow and heat transfer are examined in detail in this study.

  12. Weblog patterns and human dynamics with decreasing interest

    NASA Astrophysics Data System (ADS)

    Guo, J.-L.; Fan, C.; Guo, Z.-H.

    2011-06-01

    In order to describe the phenomenon that people's interest in doing something is high in the beginning and then gradually decreases until reaching a balance, a model describing the attenuation of interest is proposed to reflect the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. Our analysis indicates that the interval distribution of arrival times is a mixed distribution with exponential and power-law features, i.e. a power law with an exponential cutoff. After that, we collect blogs from ScienceNet.cn and carry out an empirical study of the interarrival time distribution. The empirical results agree well with the theoretical analysis, obeying a power law with an exponential cutoff, that is, a special kind of Gamma distribution. These empirical results verify the model by providing evidence for a new class of phenomena in human dynamics. It can be concluded that besides power-law distributions, there are other distributions in human dynamics. These findings demonstrate the variety of human behavior dynamics.
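    Written out schematically, the interval density described in the abstract is a power law with an exponential cutoff, i.e. a Gamma-type form; the symbols here are generic, not the paper's notation.

        % power law with exponential cutoff; for shape k < 1 this Gamma density decays
        % as tau^{-(1-k)} at small tau and is cut off exponentially at the scale theta
        p(\tau) = \frac{\tau^{\,k-1}\, e^{-\tau/\theta}}{\Gamma(k)\, \theta^{k}}, \qquad \tau > 0,\ \ 0 < k < 1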

  13. Quark mixing and exponential form of the Cabibbo-Kobayashi-Maskawa matrix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhukovsky, K. V., E-mail: zhukovsk@phys.msu.ru; Dattoli, D., E-mail: dattoli@frascati.enea.i

    2008-10-15

    Various forms of representation of the mixing matrix are discussed. An exponential parametrization e^A of the Cabibbo-Kobayashi-Maskawa matrix is considered in the context of the unitarity requirement, this parametrization being the most general form of the mixing matrix. An explicit representation for the exponential mixing matrix in terms of the first and second degrees of the matrix A exclusively is obtained. This representation makes it possible to calculate this exponential mixing matrix readily in any order of the expansion in the small parameter λ. The generation of new unitary parametric representations of the mixing matrix with the aid of the exponential matrix is demonstrated.
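    A small hedged sketch (Python/SciPy) of the basic fact behind such a parametrization: the exponential of an anti-Hermitian generator is exactly unitary, and low orders of its expansion already approximate it well when the generator is small. The matrix below is arbitrary, not the CKM generator of the paper.

        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(0)
        m = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
        a = 0.1 * (m - m.conj().T)            # small anti-Hermitian generator, A^H = -A
        v = expm(a)                           # candidate "mixing matrix" V = exp(A)

        print(np.allclose(v @ v.conj().T, np.eye(3)))       # unitarity holds up to rounding
        v2 = np.eye(3) + a + a @ a / 2.0                    # second-order truncation in the generator
        print(np.max(np.abs(v - v2)))                       # small because the generator is small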

  14. Exponential lag function projective synchronization of memristor-based multidirectional associative memory neural networks via hybrid control

    NASA Astrophysics Data System (ADS)

    Yuan, Manman; Wang, Weiping; Luo, Xiong; Li, Lixiang; Kurths, Jürgen; Wang, Xiao

    2018-03-01

    This paper is concerned with the exponential lag function projective synchronization of memristive multidirectional associative memory neural networks (MMAMNNs). First, we propose a new model of MMAMNNs with mixed time-varying delays. In the proposed approach, the mixed delays include time-varying discrete delays and distributed time delays. Second, we design two kinds of hybrid controllers. Traditional control methods lack the capability of reflecting variable synaptic weights. In this paper, the controllers are carefully designed to confirm the process of different types of synchronization in the MMAMNNs. Third, sufficient criteria guaranteeing the synchronization of the system are derived based on the drive-response concept. Finally, the effectiveness of the proposed mechanism is validated with numerical experiments.

  15. Performance analysis for mixed FSO/RF Nakagami-m and Exponentiated Weibull dual-hop airborne systems

    NASA Astrophysics Data System (ADS)

    Jing, Zhao; Shang-hong, Zhao; Wei-hu, Zhao; Ke-fan, Chen

    2017-06-01

    In this paper, the performance of mixed free-space optical (FSO)/radio frequency (RF) systems is presented based on decode-and-forward relaying. The Exponentiated Weibull fading channel with pointing error effects is adopted for the atmospheric fluctuation of the FSO channel, and the RF link undergoes Nakagami-m fading. We derive the analytical expression for the cumulative distribution function (CDF) of the equivalent signal-to-noise ratio (SNR). Novel mathematical expressions for the outage probability and average bit-error rate (BER) are developed based on the Meijer G-function. The analytical results show an accurate match to the Monte Carlo simulation results. The outage and BER performance of the mixed system with decode-and-forward relaying are investigated considering atmospheric turbulence and pointing error conditions. The effect of aperture averaging is evaluated in all atmospheric turbulence conditions as well.

  16. Human mobility in space from three modes of public transportation

    NASA Astrophysics Data System (ADS)

    Jiang, Shixiong; Guan, Wei; Zhang, Wenyi; Chen, Xu; Yang, Liu

    2017-10-01

    Human mobility patterns have drawn much attention from researchers for decades, considering their importance for urban planning and traffic management. In this study, taxi GPS trajectories and smart card transaction data for subway and bus trips in Beijing are utilized to model human mobility in space. The original datasets are cleaned and processed to obtain the displacement of each trip according to the origin and destination locations. Then, the Akaike information criterion is adopted to select the best-fitting distribution for each mode from candidate distributions. The results indicate that displacements of taxi trips follow the exponential distribution. Besides, the exponential distribution also fits displacements of bus trips well. However, their exponents are significantly different. Displacements of subway trips show distinct characteristics and can be well fitted by the gamma distribution. It is obvious that human mobility of each mode is different. To explore the overall human mobility, the three datasets are mixed up to form a fusion dataset according to the annual ridership proportions. Finally, the fusion displacements follow the power-law distribution with an exponential cutoff. It is innovative to combine different transportation modes to model human mobility in the city.

  17. Spectral Gap Estimates in Mean Field Spin Glasses

    NASA Astrophysics Data System (ADS)

    Ben Arous, Gérard; Jagannath, Aukosh

    2018-05-01

    We show that mixing for local, reversible dynamics of mean field spin glasses is exponentially slow in the low temperature regime. We introduce a notion of free energy barriers for the overlap, and prove that their existence implies that the spectral gap is exponentially small, and thus that mixing is exponentially slow. We then exhibit sufficient conditions on the equilibrium Gibbs measure which guarantee the existence of these barriers, using the notion of replicon eigenvalue and 2D Guerra Talagrand bounds. We show how these sufficient conditions cover large classes of Ising spin models for reversible nearest-neighbor dynamics and spherical models for Langevin dynamics. Finally, in the case of Ising spins, Panchenko's recent rigorous calculation (Panchenko in Ann Probab 46(2):865-896, 2018) of the free energy for a system of "two real replica" enables us to prove a quenched LDP for the overlap distribution, which gives us a wider criterion for slow mixing directly related to the Franz-Parisi-Virasoro approach (Franz et al. in J Phys I 2(10):1869-1880, 1992; Kurchan et al. J Phys I 3(8):1819-1838, 1993). This condition holds in a wider range of temperatures.

  18. Anomalous Diffusion in a Trading Model

    NASA Astrophysics Data System (ADS)

    Khidzir, Sidiq Mohamad; Wan Abdullah, Wan Ahmad Tajuddin

    2009-07-01

    The trading model of Chakrabarti et al. [1] yields a wealth distribution with mixed exponential and power-law character. Motivated by studies of the dynamics behind the flow of money, similar to the work done by Brockmann [2, 3], we track the flow of money in this trading model and observe anomalous diffusion in the form of long waiting times and Lévy flights.

  19. The exponential parameterization of the neutrino mixing matrix as an SU(3) group element and an account for new experimental data

    NASA Astrophysics Data System (ADS)

    Zhukovsky, K. V.

    2017-09-01

    The exponential form of the Pontecorvo-Maki-Nakagawa-Sakata mixing matrix for neutrinos is considered in the context of the fundamental representation of the SU(3) group. The logarithm of the mixing matrix is obtained. Based on the most recent experimental data on neutrino mixing, the exact values of the entries of the exponential matrix are calculated. The exact values of its real and imaginary parts, responsible respectively for the mixing without CP violation and for the pure CP-violation effect, are determined. The hypothesis of complementarity for quarks and neutrinos is confirmed. The factorization of the exponential mixing matrix, which allows the separation of the mixing and of the CP violation itself in the form of a product of rotations around the real and imaginary axes, is demonstrated.

  20. Cross diffusion and exponential space dependent heat source impacts in radiated three-dimensional (3D) flow of Casson fluid by heated surface

    NASA Astrophysics Data System (ADS)

    Zaigham Zia, Q. M.; Ullah, Ikram; Waqas, M.; Alsaedi, A.; Hayat, T.

    2018-03-01

    This research elaborates the Soret-Dufour characteristics of mixed convective radiated Casson liquid flow over an exponentially heated surface. Novel features of an exponential space-dependent heat source are introduced. Appropriate variables are implemented for the conversion of the partial differential framework into sets of ordinary differential equations. A homotopic scheme is employed for the construction of analytic solutions. The behavior of various embedded variables on the velocity, temperature and concentration distributions is plotted graphically and analyzed in detail. Besides, skin friction coefficients and heat and mass transfer rates are also computed and interpreted. The results signify the pronounced characteristics of temperature corresponding to the convective and radiation variables. Concentration bears an opposite response for the Soret and Dufour variables.

  1. Global exponential stability and lag synchronization for delayed memristive fuzzy Cohen-Grossberg BAM neural networks with impulses.

    PubMed

    Yang, Wengui; Yu, Wenwu; Cao, Jinde; Alsaadi, Fuad E; Hayat, Tasawar

    2018-02-01

    This paper investigates the stability and lag synchronization for memristor-based fuzzy Cohen-Grossberg bidirectional associative memory (BAM) neural networks with mixed delays (asynchronous time delays and continuously distributed delays) and impulses. By applying the inequality analysis technique, homeomorphism theory and some suitable Lyapunov-Krasovskii functionals, some new sufficient conditions for the uniqueness and global exponential stability of equilibrium point are established. Furthermore, we obtain several sufficient criteria concerning globally exponential lag synchronization for the proposed system based on the framework of Filippov solution, differential inclusion theory and control theory. In addition, some examples with numerical simulations are given to illustrate the feasibility and validity of obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Magnetic pattern at supergranulation scale: the void size distribution

    NASA Astrophysics Data System (ADS)

    Berrilli, F.; Scardigli, S.; Del Moro, D.

    2014-08-01

    The large-scale magnetic pattern observed in the photosphere of the quiet Sun is dominated by the magnetic network. This network, created by photospheric magnetic fields swept into convective downflows, delineates the boundaries of large-scale cells of overturning plasma and exhibits "voids" in magnetic organization. These voids include internetwork fields, which are mixed-polarity sparse magnetic fields that populate the inner part of network cells. To single out voids and to quantify their intrinsic pattern we applied a fast circle-packing-based algorithm to 511 SOHO/MDI high-resolution magnetograms acquired during the unusually long solar activity minimum between cycles 23 and 24. The computed void distribution function shows a quasi-exponential decay behavior in the range 10-60 Mm. The lack of distinct flow scales in this range corroborates the hypothesis of multi-scale motion flows at the solar surface. In addition to the quasi-exponential decay, we have found that the voids depart from a simple exponential decay at about 35 Mm.

  3. The Ground Flash Fraction Retrieval Algorithm Employing Differential Evolution: Simulations and Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Solakiewicz, Richard

    2012-01-01

    The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles which leads to fundamental ambiguities, and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network[TM] (NLDN) data. Solution error plots are provided for both the simulations and actual data analyses.
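    As an illustration of the optimization step only (not the GoFFRA code), a hedged Python sketch that fits a two-component exponential mixture by minimizing the negative log-likelihood with SciPy's differential evolution; the disjoint box bounds keep the two component roles ordered, which is one simple way to avoid the label-switching ambiguity mentioned in the abstract. Data and bounds are invented for the example.

        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(3)
        # synthetic stand-in for MGA-like data: a short-scale and a long-scale exponential component
        mga = np.concatenate([rng.exponential(5.0, 400), rng.exponential(40.0, 600)])

        def neg_log_likelihood(params, data):
            f, mu_g, mu_c = params            # ground-flash fraction and the two component means
            pdf = f * np.exp(-data / mu_g) / mu_g + (1.0 - f) * np.exp(-data / mu_c) / mu_c
            return -np.sum(np.log(pdf))

        # disjoint bounds on the two means keep the component roles from swapping (label switching)
        bounds = [(0.01, 0.99), (1.0, 20.0), (20.0, 200.0)]
        result = differential_evolution(neg_log_likelihood, bounds, args=(mga,), seed=0)
        print(result.x)                       # estimated (fraction, short-scale mean, long-scale mean)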

  4. Effects of turbulent hyporheic mixing on reach-scale solute transport

    NASA Astrophysics Data System (ADS)

    Roche, K. R.; Li, A.; Packman, A. I.

    2017-12-01

    Turbulence rapidly mixes solutes and fine particles into coarse-grained streambeds. Both hyporheic exchange rates and spatial variability of hyporheic mixing are known to be controlled by turbulence, but it is unclear how turbulent mixing influences mass transport at the scale of stream reaches. We used a process-based particle-tracking model to simulate local- and reach-scale solute transport for a coarse-bed stream. Two vertical mixing profiles, one with a smooth transition from in-stream to hyporheic transport conditions and a second with enhanced turbulent transport at the sediment-water interface, were fit to steady-state subsurface concentration profiles observed in laboratory experiments. The mixing profile with enhanced interfacial transport better matched the observed concentration profiles and overall mass retention in the streambed. The best-fit mixing profiles were then used to simulate upscaled solute transport in a stream. Enhanced mixing coupled in-stream and hyporheic solute transport, causing solutes exchanged into the shallow subsurface to have travel times similar to the water column. This extended the exponential region of the in-stream solute breakthrough curve, and delayed the onset of the heavy power-law tailing induced by deeper and slower hyporheic porewater velocities. Slopes of observed power-law tails were greater than those predicted from stochastic transport theory, and also changed in time. In addition, rapid hyporheic transport velocities truncated the hyporheic residence time distribution by causing mass to exit the stream reach via subsurface advection, yielding strong exponential tempering in the in-stream breakthrough curves at the timescale of advective hyporheic transport through the reach. These results show that strong turbulent mixing across the sediment-water interface violates the conventional separation of surface and subsurface flows used in current models for solute transport in rivers. Instead, the full distribution of flow and mixing over the surface-subsurface continuum must be explicitly considered to properly interpret solute transport in coarse-bed streams.

  5. Determining the turnover time of groundwater systems with the aid of environmental tracers. 1. Models and their applicability

    NASA Astrophysics Data System (ADS)

    Małoszewski, P.; Zuber, A.

    1982-06-01

    Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which more adequately represents the real systems than the conventional solution generally applied so far. The applicability of models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model give better fitting than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fitting as the multiparameter finite state mixing-cell models. It has been shown that in the case of a constant tracer input a prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in the cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when 14C method is used for mixed-water systems a serious mistake may arise by neglecting the different bicarbonate contents in particular water components.
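    For reference, the standard lumped-parameter convolution these models share, with the transit-time densities of the simple exponential and piston-flow cases written schematically (generic textbook forms; the combined and dispersive models are given in the paper itself):

        % tracer output concentration for a transit-time density g(\tau) and decay constant \lambda
        C_{\mathrm{out}}(t) = \int_{0}^{\infty} C_{\mathrm{in}}(t - \tau)\, g(\tau)\, e^{-\lambda \tau}\, \mathrm{d}\tau
        % exponential model (mean turnover time T) and piston-flow model
        g_{\mathrm{EM}}(\tau) = \frac{1}{T}\, e^{-\tau/T}, \qquad g_{\mathrm{PFM}}(\tau) = \delta(\tau - T)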

  6. Improving deep convolutional neural networks with mixed maxout units.

    PubMed

    Zhao, Hui-Zhen; Liu, Fu-Xian; Li, Long-Yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that "non-maximal features are unable to deliver" and "feature mapping subspace pooling is insufficient," we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance.
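    A rough NumPy sketch of the mixout idea as described in the abstract: softmax ("exponential probability") weights over k candidate feature maps, an expectation under those weights, and a Bernoulli switch between the maximum and the expectation. The shapes and the gating detail are assumptions for illustration, not the authors' implementation.

        import numpy as np

        def mixout(feature_maps, p_max, rng):
            """feature_maps: array of shape (k, H, W) from k parallel transformations of one input."""
            weights = np.exp(feature_maps)
            weights /= weights.sum(axis=0, keepdims=True)     # softmax over the k candidate maps
            expected = (weights * feature_maps).sum(axis=0)    # expectation under those weights
            maximum = feature_maps.max(axis=0)                 # the classic maxout response
            gate = rng.random(expected.shape) < p_max          # Bernoulli(p_max) switch per unit
            return np.where(gate, maximum, expected)

        rng = np.random.default_rng(0)
        maps = rng.normal(size=(4, 8, 8))                      # k = 4 candidate feature maps
        print(mixout(maps, p_max=0.5, rng=rng).shape)          # (8, 8)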

  7. Parameter estimation for the 4-parameter Asymmetric Exponential Power distribution by the method of L-moments using R

    USGS Publications Warehouse

    Asquith, William H.

    2014-01-01

    The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.

  8. A mathematical model for generating bipartite graphs and its application to protein networks

    NASA Astrophysics Data System (ADS)

    Nacher, J. C.; Ochiai, T.; Hayashida, M.; Akutsu, T.

    2009-12-01

    Complex systems arise in many different contexts from large communication systems and transportation infrastructures to molecular biology. Most of these systems can be organized into networks composed of nodes and interacting edges. Here, we present a theoretical model that constructs bipartite networks with the particular feature that the degree distribution can be tuned depending on the probability rate of fundamental processes. We then use this model to investigate protein-domain networks. A protein can be composed of up to hundreds of domains. Each domain represents a conserved sequence segment with specific functional tasks. We analyze the distribution of domains in Homo sapiens and Arabidopsis thaliana organisms and the statistical analysis shows that while (a) the number of domain types shared by k proteins exhibits a power-law distribution, (b) the number of proteins composed of k types of domains decays as an exponential distribution. The proposed mathematical model generates bipartite graphs and predicts the emergence of this mixing of (a) power-law and (b) exponential distributions. Our theoretical and computational results show that this model requires (1) growth process and (2) copy mechanism.

  9. Recurrence time statistics for finite size intervals

    NASA Astrophysics Data System (ADS)

    Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.

    2004-12-01

    We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to the Poissonian statistics when the width of the interval goes to zero. However, we alert that special attention to the size of the interval is required in order to guarantee that the short time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.

  10. Referee Networks and Their Spectral Properties

    NASA Astrophysics Data System (ADS)

    Slanina, F.; Zhang, Y.-Ch.

    2005-09-01

    The bipartite graph connecting products and reviewers of those products is studied empirically in the case of amazon.com. We find that the network has a power-law degree distribution on the side of reviewers, while on the side of products the distribution is better fitted by a stretched exponential. The spectrum of the normalised adjacency matrix shows a power-law tail in the density of states. Establishing the community structures by finding localised eigenstates is not straightforward, as the localised and delocalised states are mixed throughout the whole support of the spectrum.

  11. Periodicity and global exponential stability of generalized Cohen-Grossberg neural networks with discontinuous activations and mixed delays.

    PubMed

    Wang, Dongshu; Huang, Lihong

    2014-03-01

    In this paper, we investigate the periodic dynamical behaviors for a class of general Cohen-Grossberg neural networks with discontinuous right-hand sides, time-varying and distributed delays. By means of retarded differential inclusions theory and the fixed point theorem of multi-valued maps, the existence of periodic solutions for the neural networks is obtained. After that, we derive some sufficient conditions for the global exponential stability and convergence of the neural networks, in terms of nonsmooth analysis theory with generalized Lyapunov approach. Without assuming the boundedness (or the growth condition) and monotonicity of the discontinuous neuron activation functions, our results will also be valid. Moreover, our results extend previous works not only on discrete time-varying and distributed delayed neural networks with continuous or even Lipschitz continuous activations, but also on discrete time-varying and distributed delayed neural networks with discontinuous activations. We give some numerical examples to show the applicability and effectiveness of our main results. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Testing mixing models of old and young groundwater in a tropical lowland rain forest with environmental tracers

    NASA Astrophysics Data System (ADS)

    Solomon, D. Kip; Genereux, David P.; Plummer, L. Niel; Busenberg, Eurybiades

    2010-04-01

    We tested three models of mixing between old interbasin groundwater flow (IGF) and young, locally derived groundwater in a lowland rain forest in Costa Rica using a large suite of environmental tracers. We focus on the young fraction of water using the transient tracers CFC-11, CFC-12, CFC-113, SF6, 3H, and bomb 14C. We measured 3He, but 3H/3He dating is generally problematic due to the presence of mantle 3He. Because of their unique concentration histories in the atmosphere, combinations of transient tracers are sensitive not only to subsurface travel times but also to mixing between waters having different travel times. Samples fall into three distinct categories: (1) young waters that plot along a piston flow line, (2) old samples that have near-zero concentrations of the transient tracers, and (3) mixtures of 1 and 2. We have modeled the concentrations of the transient tracers using (1) a binary mixing model (BMM) of old and young water with the young fraction transported via piston flow, (2) an exponential mixing model (EMM) with a distribution of groundwater travel times characterized by a mean value, and (3) an exponential mixing model for the young fraction followed by binary mixing with an old fraction (EMM/BMM). In spite of the mathematical differences in the mixing models, they all lead to a similar conceptual model of young (0 to 10 year) groundwater that is locally derived mixing with old (>1000 years) groundwater that is recharged beyond the surface water boundary of the system.

  13. A Gröbner Basis Solution for Lightning Ground Flash Fraction Retrieval

    NASA Technical Reports Server (NTRS)

    Solakiewicz, Richard; Attele, Rohan; Koshak, William

    2011-01-01

    A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function was minimized by a numerical method. In order to improve this optimization, we introduce a Gröbner basis solution to obtain analytic representations of the model parameters that serve as a refined initialization scheme to the numerical optimization. Using the Gröbner basis, we show that there are exactly 2 solutions involving the first 3 moments of the (exponentially distributed) data. When the mean of the ground flash optical characteristic (e.g., such as the Maximum Group Area, MGA) is larger than that for cloud flashes, then a unique solution can be obtained.
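    The moment relations such a method-of-moments system rests on can be written, for a two-component exponential mixture with ground-flash fraction α and component means μ_g, μ_c, as follows (standard exponential-mixture moments, not the paper's exact constrained system):

        % raw moments of the mixture, using E[X^k] = k!\,\mu^k for each exponential component
        m_1 = \alpha \mu_g + (1-\alpha)\mu_c, \qquad
        m_2 = 2\!\left[\alpha \mu_g^2 + (1-\alpha)\mu_c^2\right], \qquad
        m_3 = 6\!\left[\alpha \mu_g^3 + (1-\alpha)\mu_c^3\right]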

  14. The social architecture of capitalism

    NASA Astrophysics Data System (ADS)

    Wright, Ian

    2005-02-01

    A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.

  15. Improving deep convolutional neural networks with mixed maxout units

    PubMed Central

    Liu, Fu-xian; Li, Long-yue

    2017-01-01

    Motivated by insights from the maxout-units-based deep Convolutional Neural Network (CNN) that “non-maximal features are unable to deliver” and “feature mapping subspace pooling is insufficient,” we present a novel mixed variant of the recently introduced maxout unit called a mixout unit. Specifically, we do so by calculating the exponential probabilities of feature mappings gained by applying different convolutional transformations over the same input and then calculating the expected values according to their exponential probabilities. Moreover, we introduce the Bernoulli distribution to balance the maximum values with the expected values of the feature mappings subspace. Finally, we design a simple model to verify the pooling ability of mixout units and a Mixout-units-based Network-in-Network (NiN) model to analyze the feature learning ability of the mixout models. We argue that our proposed units improve the pooling ability and that mixout models can achieve better feature learning and classification performance. PMID:28727737

  16. Bacterial genomes lacking long-range correlations may not be modeled by low-order Markov chains: the role of mixing statistics and frame shift of neighboring genes.

    PubMed

    Cocho, Germinal; Miramontes, Pedro; Mansilla, Ricardo; Li, Wentian

    2014-12-01

    We examine the relationship between exponential correlation functions and Markov models in a bacterial genome in detail. Despite the well known fact that Markov models generate sequences with correlation function that decays exponentially, simply constructed Markov models based on nearest-neighbor dimer (first-order), trimer (second-order), up to hexamer (fifth-order), and treating the DNA sequence as being homogeneous all fail to predict the value of exponential decay rate. Even reading-frame-specific Markov models (both first- and fifth-order) could not explain the fact that the exponential decay is very slow. Starting with the in-phase coding-DNA-sequence (CDS), we investigated correlation within a fixed-codon-position subsequence, and in artificially constructed sequences by packing CDSs with out-of-phase spacers, as well as altering CDS length distribution by imposing an upper limit. From these targeted analyses, we conclude that the correlation in the bacterial genomic sequence is mainly due to a mixing of heterogeneous statistics at different codon positions, and the decay of correlation is due to the possible out-of-phase between neighboring CDSs. There are also small contributions to the correlation from bases at the same codon position, as well as by non-coding sequences. These show that the seemingly simple exponential correlation functions in bacterial genome hide a complexity in correlation structure which is not suitable for a modeling by Markov chain in a homogeneous sequence. Other results include: use of the (absolute value) second largest eigenvalue to represent the 16 correlation functions and the prediction of a 10-11 base periodicity from the hexamer frequencies. Copyright © 2014 Elsevier Ltd. All rights reserved.
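    A toy Python check of the textbook point being tested here: for a first-order Markov chain the correlation function decays like |λ₂|^k, where λ₂ is the second-largest eigenvalue (in modulus) of the transition matrix. The chain below is synthetic, not a genomic sequence.

        import numpy as np

        rng = np.random.default_rng(0)
        P = np.full((4, 4), 0.1) + 0.6 * np.eye(4)    # persistent 4-state transition matrix

        n = 50_000
        seq = np.empty(n, dtype=int)
        seq[0] = 0
        for i in range(1, n):                          # simulate the chain
            seq[i] = rng.choice(4, p=P[seq[i - 1]])

        second = np.sort(np.abs(np.linalg.eigvals(P)))[-2]     # |lambda_2| of the transition matrix
        x = (seq == 0).astype(float)                           # indicator of one symbol
        x -= x.mean()
        corr = [np.mean(x[:-k] * x[k:]) / np.var(x) for k in (1, 2, 4, 8)]
        print(second, corr)                                    # corr at lag k is roughly second**k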

  17. Exponential Mixing of the 3D Stochastic Navier-Stokes Equations Driven by Mildly Degenerate Noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albeverio, Sergio; Debussche, Arnaud, E-mail: arnaud.debussche@bretagne.ens-cachan.fr; Xu Lihu, E-mail: Lihu.Xu@brunel.ac.uk

    2012-10-15

    We prove the strong Feller property and exponential mixing for 3D stochastic Navier-Stokes equation driven by mildly degenerate noises (i.e. all but finitely many Fourier modes being forced) via a Kolmogorov equation approach.

  18. A mixing evolution model for bidirectional microblog user networks

    NASA Astrophysics Data System (ADS)

    Yuan, Wei-Guo; Liu, Yun

    2015-08-01

    Microblogs have been widely used as a new form of online social networking. Based on user profile data collected from Sina Weibo, we find that the number of bidirectional friends of microblog users approximately follows a lognormal distribution. We then build two microblog user networks with real bidirectional relationships, both of which have not only small-world and scale-free properties but also some special features, such as a double power-law degree distribution, disassortativity, and a hierarchical and rich-club structure. Moreover, by detecting the community structures of the two real networks, we find that both of their community scales follow an exponential distribution. Based on the empirical analysis, we present a novel evolving network model with mixed connection rules, including lognormal fitness preferential attachment and random attachment, nearest-neighbor interconnection within the same community, and global random associations across different communities. The simulation results show that our model is consistent with the real networks in many topological features.

  19. Testing mixing models of old and young groundwater in a tropical lowland rain forest with environmental tracers

    USGS Publications Warehouse

    Solomon, D. Kip; Genereux, David P.; Plummer, Niel; Busenberg, Eurybiades

    2010-01-01

    We tested three models of mixing between old interbasin groundwater flow (IGF) and young, locally derived groundwater in a lowland rain forest in Costa Rica using a large suite of environmental tracers. We focus on the young fraction of water using the transient tracers CFC‐11, CFC‐12, CFC‐113, SF6, 3H, and bomb 14C. We measured 3He, but 3H/3He dating is generally problematic due to the presence of mantle 3He. Because of their unique concentration histories in the atmosphere, combinations of transient tracers are sensitive not only to subsurface travel times but also to mixing between waters having different travel times. Samples fall into three distinct categories: (1) young waters that plot along a piston flow line, (2) old samples that have near‐zero concentrations of the transient tracers, and (3) mixtures of 1 and 2. We have modeled the concentrations of the transient tracers using (1) a binary mixing model (BMM) of old and young water with the young fraction transported via piston flow, (2) an exponential mixing model (EMM) with a distribution of groundwater travel times characterized by a mean value, and (3) an exponential mixing model for the young fraction followed by binary mixing with an old fraction (EMM/BMM). In spite of the mathematical differences in the mixing models, they all lead to a similar conceptual model of young (0 to 10 year) groundwater that is locally derived mixing with old (>1000 years) groundwater that is recharged beyond the surface water boundary of the system.

  20. Lithium ion dynamics in Li2S+GeS2+GeO2 glasses studied using (7)Li NMR field-cycling relaxometry and line-shape analysis.

    PubMed

    Gabriel, Jan; Petrov, Oleg V; Kim, Youngsik; Martin, Steve W; Vogel, Michael

    2015-09-01

    We use (7)Li NMR to study the ionic jump motion in ternary 0.5Li2S+0.5[(1-x)GeS2+xGeO2] glassy lithium ion conductors. Exploring the "mixed glass former effect" in this system led to the assumption of a homogeneous and random variation of diffusion barriers in this system. We exploit that combining traditional line-shape analysis with novel field-cycling relaxometry, it is possible to measure the spectral density of the ionic jump motion in broad frequency and temperature ranges and, thus, to determine the distribution of activation energies. Two models are employed to parameterize the (7)Li NMR data, namely, the multi-exponential autocorrelation function model and the power-law waiting times model. Careful evaluation of both of these models indicates a broadly inhomogeneous energy landscape for both the single (x=0.0) and the mixed (x=0.1) network former glasses. The multi-exponential autocorrelation function model can be well described by a Gaussian distribution of activation barriers. Applicability of the methods used and their sensitivity to microscopic details of ionic motion are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Modeling of mixing processes: Fluids, particulates, and powders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ottino, J.M.; Hansen, S.

    Work under this grant involves two main areas: (1) Mixing of Viscous Liquids, this first area comprising aggregation, fragmentation and dispersion, and (2) Mixing of Powders. In order to produce a coherent self-contained picture, we report primarily on results obtained under (1), and within this area, mostly on computational studies of particle aggregation in regular and chaotic flows. Numerical simulations show that the average cluster size of compact clusters grows algebraically, while the average cluster size of fractal clusters grows exponentially; companion mathematical arguments are used to describe the initial growth of average cluster size and polydispersity. It is found that when the system is well mixed and the capture radius independent of mass, the polydispersity is constant for long times and the cluster size distribution is self-similar. Furthermore, our simulations indicate that the fractal nature of the clusters is dependent upon the mixing.

  2. φ meson production in Au + Au and p + p collisions at √s_NN = 200 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, J.; Adler, C.; Aggarwal, M.M.

    2004-06-01

    We report the STAR measurement of {phi} meson production in Au + Au and p + p collisions at {radical}s{sub NN} = 200 GeV. Using the event mixing technique, the {phi} spectra and yields are obtained at midrapidity for five centrality bins in Au+Au collisions and for non-singly-diffractive p+p collisions. It is found that the {phi} transverse momentum distributions from Au+Au collisions are better fitted with a single-exponential while the p+p spectrum is better described by a double-exponential distribution. The measured nuclear modification factors indicate that {phi} production in central Au+Au collisions is suppressed relative to peripheral collisions when scaled by the number of binary collisions (). The systematics of versus centrality and the constant {phi}/K{sup -} ratio versus beam species, centrality, and collision energy rule out kaon coalescence as the dominant mechanism for {phi} production.

  3. Individual and group dynamics in purchasing activity

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Guo, Jin-Li; Fan, Chao; Liu, Xue-Jiao

    2013-01-01

    As a major part of the daily operation in an enterprise, purchasing frequency is in constant change. Recent approaches to human dynamics can provide some new insights into the economic behavior of companies in the supply chain. This paper captures the attributes of creation times of purchase orders to an individual vendor, as well as to all vendors, and further investigates whether they have some kind of dynamics by applying logarithmic binning to the construction of distribution plots. It is found that the former displays a power-law distribution with approximate exponent 2.0, while the latter is fitted by a mixture distribution with both power-law and exponential characteristics. Thus, two distinctive characteristics are presented for the interval time distribution from the perspective of individual dynamics and group dynamics. This mixing feature can be attributed to the fitting deviations: they are negligible for individual dynamics, but the deviations of different vendors accumulate and lead to an exponential factor for group dynamics. To better describe the mechanism generating the heterogeneity of the purchase order assignment process from the objective company to all its vendors, a model driven by product life cycle is introduced, and then the analytical distribution and the simulation result are obtained, which are in good agreement with the empirical data.
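
    The logarithmic binning step mentioned above can be sketched as follows. This is a generic implementation applied to synthetic inter-order times, not the authors' code; the bin count and the toy mixture are arbitrary choices.

```python
import numpy as np

def log_binned_pdf(samples, n_bins=30):
    """Estimate a probability density with logarithmically spaced bins,
    which reduces noise in the tail of heavy-tailed inter-event times."""
    samples = np.asarray(samples, dtype=float)
    edges = np.logspace(np.log10(samples.min()), np.log10(samples.max()), n_bins + 1)
    counts, edges = np.histogram(samples, bins=edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])    # geometric bin centers
    density = counts / (widths * samples.size)   # normalize by bin width and sample size
    keep = counts > 0
    return centers[keep], density[keep]

# Toy inter-order times: mix of power-law-like (Pareto) and exponential parts
rng = np.random.default_rng(0)
times = np.concatenate([rng.pareto(1.0, 5000) + 1.0, rng.exponential(20.0, 5000)])
x, p = log_binned_pdf(times)
print(x[:5], p[:5])
```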

  4. In situ observations of snow particle size distributions over a cold frontal rainband within an extratropical cyclone

    NASA Astrophysics Data System (ADS)

    Yang, Jiefan; Lei, Hengchi

    2016-02-01

    Cloud microphysical properties of a mixed phase cloud generated by a typical extratropical cyclone in the Tongliao area, Inner Mongolia on 3 May 2014, are analyzed primarily using in situ flight observation data. This study is mainly focused on ice crystal concentration, supercooled cloud water content, and vertical distributions of fit parameters of snow particle size distributions (PSDs). The results showed several discrepancies between the microphysical properties obtained during the two penetrations. During penetration of precipitating cloud, the maximum ice particle concentration, liquid water content, and ice water content were increased by a factor of 2-3 compared with their counterparts obtained during penetration of a nonprecipitating cloud. The heavily rimed and irregular ice crystals obtained by the 2D imagery probe, as well as the vertical distributions of fitting parameters within precipitating cloud, show that the ice particles grow while falling via riming and aggregation processes, whereas the lightly rimed and pristine ice particles and the fitting parameters within non-precipitating cloud indicate the dominance of the sublimation process. During the two cloud penetrations, the PSDs were generally better represented by gamma distributions than by the exponential form in terms of the coefficient of determination (R²). The correlations between parameters of the exponential/gamma forms within the two penetrations showed no obvious differences compared with previous studies.

  5. A Simple Model of Cirrus Horizontal Inhomogeneity and Cloud Fraction

    NASA Technical Reports Server (NTRS)

    Smith, Samantha A.; DelGenio, Anthony D.

    1998-01-01

    A simple model of horizontal inhomogeneity and cloud fraction in cirrus clouds has been formulated on the basis that all internal horizontal inhomogeneity in the ice mixing ratio is due to variations in the cloud depth, which are assumed to be Gaussian. The use of such a model was justified by the observed relationship between the normalized variability of the ice water mixing ratio (and extinction) and the normalized variability of cloud depth. Using radar cloud depth data as input, the model reproduced well the in-cloud ice water mixing ratio histograms obtained from horizontal runs during the FIRE2 cirrus campaign. For totally overcast cases the histograms were almost Gaussian, but as cloud fraction decreased below 90% they changed to exponential distributions that peaked at the lowest nonzero ice value. Cloud fractions predicted by the model were always within 28% of the observed value. The predicted average ice water mixing ratios were within 34% of the observed values. This model could be used in a GCM to produce the ice mixing ratio probability distribution function and to estimate cloud fraction. It only requires basic meteorological parameters, the depth of the saturated layer and the standard deviation of cloud depth as input.

  6. TracerLPM (Version 1): An Excel® workbook for interpreting groundwater age distributions from environmental tracer data

    USGS Publications Warehouse

    Jurgens, Bryant C.; Böhlke, J.K.; Eberts, Sandra M.

    2012-01-01

    TracerLPM is an interactive Excel® (2007 or later) workbook program for evaluating groundwater age distributions from environmental tracer data by using lumped parameter models (LPMs). Lumped parameter models are mathematical models of transport based on simplified aquifer geometry and flow configurations that account for effects of hydrodynamic dispersion or mixing within the aquifer, well bore, or discharge area. Five primary LPMs are included in the workbook: piston-flow model (PFM), exponential mixing model (EMM), exponential piston-flow model (EPM), partial exponential model (PEM), and dispersion model (DM). Binary mixing models (BMM) can be created by combining primary LPMs in various combinations. Travel time through the unsaturated zone can be included as an additional parameter. TracerLPM also allows users to enter age distributions determined from other methods, such as particle tracking results from numerical groundwater-flow models or from other LPMs not included in this program. Tracers of both young groundwater (anthropogenic atmospheric gases and isotopic substances indicating post-1940s recharge) and much older groundwater (carbon-14 and helium-4) can be interpreted simultaneously so that estimates of the groundwater age distribution for samples with a wide range of ages can be constrained. TracerLPM is organized to permit a comprehensive interpretive approach consisting of hydrogeologic conceptualization, visual examination of data and models, and best-fit parameter estimation. Groundwater age distributions can be evaluated by comparing measured and modeled tracer concentrations in two ways: (1) multiple tracers analyzed simultaneously can be evaluated against each other for concordance with modeled concentrations (tracer-tracer application) or (2) tracer time-series data can be evaluated for concordance with modeled trends (tracer-time application). Groundwater-age estimates can also be obtained for samples with a single tracer measurement at one point in time; however, prior knowledge of an appropriate LPM is required because the mean age is often non-unique. LPM output concentrations depend on model parameters and sample date. All of the LPMs have a parameter for mean age. The EPM, PEM, and DM have an additional parameter that characterizes the degree of age mixing in the sample. BMMs have a parameter for the fraction of the first component in the mixture. An LPM, together with its parameter values, provides a description of the age distribution or the fractional contribution of water for every age of recharge contained within a sample. For the PFM, the age distribution is a unit pulse at one distinct age. For the other LPMs, the age distribution can be much broader and span decades, centuries, millennia, or more. For a sample with a mixture of groundwater ages, the reported interpretation of tracer data includes the LPM name, the mean age, and the values of any other independent model parameters. TracerLPM also can be used for simulating the responses of wells, springs, streams, or other groundwater discharge receptors to nonpoint-source contaminants that are introduced in recharge, such as nitrate. This is done by combining an LPM or user-defined age distribution with information on contaminant loading at the water table. 
Information on historic contaminant loading can be used to help evaluate a model's ability to match real world conditions and understand observed contaminant trends, while information on future contaminant loading scenarios can be used to forecast potential contaminant trends.

  7. Nonlinear dynamic evolution and control in CCFN with mixed attachment mechanisms

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Wang, Jianping; Han, Dun

    2017-01-01

    In recent years, wireless communication has played an important role in our lives. Cooperative communication, in which mobile stations with single antennas share their antennas to form a virtual MIMO antenna system, is expected to become a future trend in wireless communication because of its diversity gain. In this paper, a fitness model of an evolving network based on complex networks with mixed attachment mechanisms is devised in order to study an actual network, the CCFN (cooperative communication fitness network). Firstly, the evolution of the CCFN is given by four cases with different probabilities, and the rate equations of node degrees are presented to analyze the evolution of the CCFN. Secondly, the degree distribution is analyzed by solving the rate equation and by numerical simulation for four example fitness distributions: power law, uniform, exponential and Rayleigh. Finally, the robustness of the CCFN is studied by numerical simulation with the four fitness distributions under random attack and intentional attack, analyzing the effects of degree distribution, average path length and average degree. The results of this paper offer insights for building CCFN systems in order to program communication resources.

  8. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    Earthquake recurrence interval is one of the important ingredients towards probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models for this recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for some alternative sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull distribution. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To contemplate the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability reaches quite high values after about a decade, for an elapsed time of 17 years (i.e., 2012). Moreover, this study shows that the generalized exponential distribution fits the above data events more closely compared to the conventional models and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
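
    For readers who want to experiment with the exponentiated (generalized) exponential model, a minimal sketch is given below. It assumes the three-parameter form F(x) = (1 − exp(−λ(x − μ)))^α, uses a tiny hypothetical set of recurrence intervals, and is not the catalogue analysis of the study.

```python
import numpy as np
from scipy.optimize import minimize

def gexp_logpdf(x, alpha, lam, mu):
    """Log-density of the three-parameter generalized (exponentiated)
    exponential distribution: F(x) = (1 - exp(-lam*(x-mu)))**alpha, x > mu."""
    z = x - mu
    return (np.log(alpha) + np.log(lam) - lam * z
            + (alpha - 1.0) * np.log1p(-np.exp(-lam * z)))

def gexp_hazard(x, alpha, lam, mu):
    """Hazard h = f / (1 - F); closed form, no integer restriction on alpha."""
    z = x - mu
    F = (1.0 - np.exp(-lam * z)) ** alpha
    return np.exp(gexp_logpdf(x, alpha, lam, mu)) / (1.0 - F)

# Hypothetical recurrence intervals (years between large events), for illustration only
intervals = np.array([4.0, 6.5, 7.2, 9.8, 11.1, 13.4, 2.9, 5.6, 8.3, 10.7])

def nll(theta):
    alpha, lam, mu = theta
    if alpha <= 0 or lam <= 0 or mu >= intervals.min():
        return np.inf
    return -np.sum(gexp_logpdf(intervals, alpha, lam, mu))

fit = minimize(nll, x0=[1.5, 0.1, 0.0], method="Nelder-Mead")
print(fit.x)  # estimated (shape, rate, location)
```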

  9. Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters

    PubMed Central

    Landowne, David; Yuan, Bin; Magleby, Karl L.

    2013-01-01

    Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
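
    A stripped-down sketch of the idea of starting from many log-spaced time constants and letting maximum likelihood assign their areas is shown below. Unlike the published method, it keeps the time constants fixed and only prunes negligible areas, so it illustrates the principle rather than reproducing the authors' algorithm.

```python
import numpy as np

def fit_exp_sum(dwell_times, n_init=40, tol_area=1e-3, n_iter=500):
    """Start with many exponential components on a logarithmic grid of time
    constants, estimate their areas by maximum likelihood (EM-type updates
    with the time constants held fixed), then drop negligible components."""
    t = np.asarray(dwell_times, dtype=float)
    taus = np.logspace(np.log10(t.min()), np.log10(t.max()), n_init)
    areas = np.full(n_init, 1.0 / n_init)
    for _ in range(n_iter):
        dens = areas * np.exp(-t[:, None] / taus) / taus   # component densities
        resp = dens / dens.sum(axis=1, keepdims=True)      # responsibilities
        areas = resp.mean(axis=0)                          # ML update of the areas
    keep = areas > tol_area
    return taus[keep], areas[keep]

# Simulated dwell times from two exponentials (tau = 1 and 20, areas 0.7/0.3)
rng = np.random.default_rng(1)
data = np.where(rng.random(5000) < 0.7,
                rng.exponential(1.0, 5000), rng.exponential(20.0, 5000))
print(fit_exp_sum(data))
```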

  10. A Simulation of the ECSS Help Desk with the Erlang a Model

    DTIC Science & Technology

    2011-03-01

    …a popular distribution is the exponential distribution, as shown in Figure 3 [Figure 3: Exponential Distribution (Bourke, 2001)]. … System Sciences, Vol 8, 235. Bourke, P. (2001, January). Miscellaneous Functions. Retrieved January 22, 2011, from http://local.wasp.uwa.edu.au

  11. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
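
    The mimicry issue can be illustrated with a short sketch: bout durations drawn from a two-exponential mixture are fitted with a power law above a threshold, and the Kolmogorov-Smirnov distance is computed. The threshold and mixture parameters below are arbitrary toy values, not those of the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Bout durations from a two-exponential mixture (short and long time constants)
bouts = np.where(rng.random(4000) < 0.6,
                 rng.exponential(0.5, 4000), rng.exponential(10.0, 4000))

# Fit a power law (Pareto) above a threshold xmin by maximum likelihood
xmin = 0.5
tail = bouts[bouts >= xmin]
alpha_hat = 1.0 + tail.size / np.sum(np.log(tail / xmin))

# KS distance between the tail data and the fitted power law
ks = stats.kstest(tail, stats.pareto(b=alpha_hat - 1.0, scale=xmin).cdf)
print(alpha_hat, ks.statistic, ks.pvalue)
```

    A small KS distance here would mean the multi-exponential data could be mistaken for a power law, which is exactly the "zone of mimicry" concern raised above.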

  12. On the gap between an empirical distribution and an exponential distribution of waiting times for price changes in a financial market

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya

    2007-03-01

    We analyze waiting times for price changes in a foreign currency exchange rate. Recent empirical studies of high-frequency financial data support the finding that trades in financial markets do not follow a Poisson process and that the waiting times between trades are not exponentially distributed. Here we show that our data are well approximated by a Weibull distribution rather than an exponential distribution in the non-asymptotic regime. Moreover, we quantitatively evaluate how far the empirical data are from an exponential distribution using a Weibull fit. Finally, we discuss a transition between a Weibull law and a power law in the long-time asymptotic regime.

  13. Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.

    2018-04-01

    Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α -mixing (for local statistics) and exponential α -mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, component counts of random cubical complexes while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.

  14. Statistical steady states in turbulent droplet condensation

    NASA Astrophysics Data System (ADS)

    Bec, Jeremie; Krstulovic, Giorgio; Siewert, Christoph

    2017-11-01

    We investigate the general problem of turbulent condensation. Using direct numerical simulations we show that the fluctuations of the supersaturation field offer different conditions for the growth of droplets which evolve in time due to turbulent transport and mixing. This leads us to propose a Lagrangian stochastic model consisting of a set of integro-differential equations for the joint evolution of the squared radius and the supersaturation along droplet trajectories. The model has two parameters fixed by the total amount of water and the thermodynamic properties, as well as the Lagrangian integral timescale of the turbulent supersaturation. The model reproduces very well the droplet size distributions obtained from direct numerical simulations and their time evolution. A noticeable result is that, after a stage where the squared radius simply diffuses, the system converges exponentially fast to a statistical steady state independent of the initial conditions. The main mechanism involved in this convergence is a loss of memory induced by a significant number of droplets undergoing a complete evaporation before growing again. The statistical steady state is characterised by an exponential tail in the droplet mass distribution.

  15. On the performance of dual-hop mixed RF/FSO wireless communication system in urban area over aggregated exponentiated Weibull fading channels with pointing errors

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian

    2018-03-01

    The performance of decode-and-forward dual-hop mixed radio frequency / free-space optical system in urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link is described by the composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, the ABER results without pointing errors (PE) and those with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the average bit error rate (ABER) in RF link is derived with the help of hypergeometric function, and that in FSO link is obtained by Meijer's G and generalized Gauss-Laguerre quadrature functions. Then, the end-to-end ABERs with binary phase shift keying modulation are achieved on the basis of the computed ABER results of RF and FSO links. The end-to-end ABER performance is further analyzed with different Nakagami-m parameters, turbulence strengths, receiver aperture sizes and boresight displacements. The result shows that with ZBPE and NBPE considered, FSO link suffers a severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in urban area. However, aperture averaging can bring significant ABER improvement of this system. Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions.

  16. Auxiliary Parameter MCMC for Exponential Random Graph Models

    NASA Astrophysics Data System (ADS)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

    Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.

  17. Count distribution for mixture of two exponentials as renewal process duration with applications

    NASA Astrophysics Data System (ADS)

    Low, Yeh Ching; Ong, Seng Huat

    2016-06-01

    A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
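
    Where a closed-form count distribution is not needed, the construction can be illustrated by direct simulation of the renewal process. The sketch below uses arbitrary mixture parameters and simply counts renewals in a fixed observation window; it is not the authors' estimation procedure.

```python
import numpy as np

def simulate_counts(p, rate1, rate2, t_obs, n_rep=20000, rng=None):
    """Monte Carlo sketch of the count distribution for a renewal process
    whose durations are a two-component exponential mixture."""
    if rng is None:
        rng = np.random.default_rng(3)
    counts = np.empty(n_rep, dtype=int)
    for i in range(n_rep):
        t, n = 0.0, 0
        while True:
            rate = rate1 if rng.random() < p else rate2
            t += rng.exponential(1.0 / rate)
            if t > t_obs:
                break
            n += 1
        counts[i] = n
    return counts

c = simulate_counts(p=0.4, rate1=2.0, rate2=0.2, t_obs=10.0)
print(c.mean(), c.var())   # variance > mean indicates overdispersion relative to Poisson
```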

  18. Robust and efficient estimation with weighted composite quantile regression

    NASA Astrophysics Data System (ADS)

    Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng

    2016-09-01

    In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.

  19. Stability of Tsallis entropy and instabilities of Rényi and normalized Tsallis entropies: a basis for q-exponential distributions.

    PubMed

    Abe, Sumiyoshi

    2002-10-01

    The q-exponential distributions, which are generalizations of the Zipf-Mandelbrot power-law distribution, are frequently encountered in complex systems at their stationary states. From the viewpoint of the principle of maximum entropy, they can apparently be derived from three different generalized entropies: the Rényi entropy, the Tsallis entropy, and the normalized Tsallis entropy. Accordingly, mere fittings of observed data by the q-exponential distributions do not lead to identification of the correct physical entropy. Here, stabilities of these entropies, i.e., their behaviors under arbitrary small deformation of a distribution, are examined. It is shown that, among the three, the Tsallis entropy is stable and can provide an entropic basis for the q-exponential distributions, whereas the others are unstable and cannot represent any experimentally observable quantities.

  20. A Test of the Exponential Distribution for Stand Structure Definition in Uneven-aged Loblolly-Shortleaf Pine Stands

    Treesearch

    Paul A. Murphy; Robert M. Farrar

    1981-01-01

    In this study, 588 before-cut and 381 after-cut diameter distributions of uneven-aged loblolly-shortleaf pine stands were fitted to two different forms of the exponential probability density function. The left truncated and doubly truncated forms of the exponential were used.
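
    For the left-truncated case, the maximum likelihood estimate of the rate has a simple closed form, sketched below with hypothetical diameter data and an assumed 3.6-inch lower limit; the doubly truncated form used in the study requires a numerical solution and is not shown.

```python
import numpy as np

def fit_left_truncated_exponential(diameters, d_min):
    """MLE of the rate for a left-truncated exponential diameter distribution:
    f(d) = lam * exp(-lam * (d - d_min)), d >= d_min."""
    d = np.asarray(diameters, dtype=float)
    return 1.0 / (d.mean() - d_min)

# Hypothetical tree diameters (inches) above an assumed 3.6-inch lower limit
rng = np.random.default_rng(4)
dbh = 3.6 + rng.exponential(4.0, 500)
print(fit_left_truncated_exponential(dbh, d_min=3.6))   # ~0.25 per inch
```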

  1. Scale dependence of entrainment-mixing mechanisms in cumulus clouds

    DOE PAGES

    Lu, Chunsong; Liu, Yangang; Niu, Shengjie; ...

    2014-12-17

    This work empirically examines the dependence of entrainment-mixing mechanisms on the averaging scale in cumulus clouds using in situ aircraft observations during the Routine Atmospheric Radiation Measurement Aerial Facility Clouds with Low Optical Water Depths Optical Radiative Observations (RACORO) field campaign. A new measure of homogeneous mixing degree is defined that can encompass all types of mixing mechanisms. Analysis of the dependence of the homogeneous mixing degree on the averaging scale shows that, on average, the homogeneous mixing degree decreases with increasing averaging scales, suggesting that apparent mixing mechanisms gradually shift from homogeneous mixing to extreme inhomogeneous mixing with increasing scales. The scale dependence can be well quantified by an exponential function, providing a first attempt at developing a scale-dependent parameterization for the entrainment-mixing mechanism. The influences of three factors on the scale dependence are further examined: droplet-free filament properties (size and fraction), microphysical properties (mean volume radius and liquid water content of cloud droplet size distributions adjacent to droplet-free filaments), and relative humidity of entrained dry air. It is found that the decreasing rate of homogeneous mixing degree with increasing averaging scales becomes larger with larger droplet-free filament size and fraction, larger mean volume radius and liquid water content, or higher relative humidity. The results underscore the necessity and possibility of considering averaging scale in the representation of entrainment-mixing processes in atmospheric models.

  2. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing lifetimes in many cases and has a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is a special case of the Weibull family. In this paper our effort is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian analysis approach and to present its analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior function and the point, interval, hazard-function, and reliability estimates. The net probability of failure when only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.

  3. Does the Australian desert ant Melophorus bagoti approximate a Lévy search by an intrinsic bi-modal walk?

    PubMed

    Reynolds, Andy M; Schultheiss, Patrick; Cheng, Ken

    2014-01-07

    We suggest that the Australian desert ant Melophorus bagoti approximates a Lévy search pattern by using an intrinsic bi-exponential walk and does so when a Lévy search pattern is advantageous. When attempting to locate its nest, M. bagoti adopt a stereotypical search pattern. These searches begin at the location where the ant expects to find the nest, and comprise loops that start and end at this location, and are directed in different azimuthal directions. Loop lengths are exponentially distributed when searches are in visually familiar surroundings and are well described by a mixture of two exponentials when searches are in unfamiliar landscapes. The latter approximates a power-law distribution, the hallmark of a Lévy search. With the aid of a simple analytically tractable theory, we show that an exponential loop-length distribution is advantageous when the distance to the nest can be estimated with some certainty and that a bi-exponential distribution is advantageous when there is considerable uncertainty regarding the nest location. The best bi-exponential search patterns are shown to be those that come closest to approximating advantageous Lévy looping searches. The bi-exponential search patterns of M. bagoti are found to approximate advantageous Lévy search patterns. Copyright © 2013. Published by Elsevier Ltd.

  4. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.

    PubMed

    Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C

    2017-06-01

    The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution are illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), Akaike information criterion (AIC), Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistics show that the EETE distribution provides a more reasonable fit than the other competing distributions.

  5. Measurement of Zeta-Potential at Microchannel Wall by a Nanoscale Laser Induced Fluorescence Imaging

    NASA Astrophysics Data System (ADS)

    Kazoe, Yutaka; Sato, Yohei

    A nanoscale laser induced fluorescence imaging was proposed by using fluorescent dye and the evanescent wave with total internal reflection of a laser beam. The present study focused on the two-dimensional measurement of zeta-potential at the microchannel wall, which is an electrostatic potential at the wall surface and a dominant parameter of electroosmotic flow. The evanescent wave, which decays exponentially from the wall, was used as an excitation light of the fluorescent dye. The fluorescent intensity detected by a CCD camera is closely related to the zeta-potential. Two kinds of fluorescent dye solution at different ionic concentrations were injected into a T-shaped microchannel, and formed a mixing flow field in the junction area. The two-dimensional distribution of zeta-potential at the microchannel wall in the pressure-driven flow field was measured. The obtained zeta-potential distribution has a transverse gradient toward the mixing flow field and was changed by the difference in the averaged velocity of pressure-driven flow. To understand the ion motion in the mixing flow field, the three-dimensional flow structure was analyzed by the velocity measurement using micron-resolution particle image velocimetry and the numerical simulation. It is concluded that the two-dimensional distribution of zeta-potential at the microchannel wall was dependent on the ion motion in the flow field, which was governed by the convection and molecular diffusion.

  6. Essays on the statistical mechanics of the labor market and implications for the distribution of earned income

    NASA Astrophysics Data System (ADS)

    Schneider, Markus P. A.

    This dissertation contributes to two areas in economics: the understanding of the distribution of earned income and Bayesian analysis of distributional data. Recently, physicists claimed that the distribution of earned income is exponential (see Yakovenko, 2009). The first chapter explores the perspective that the economy is a statistical mechanical system and the implication for labor market outcomes is considered critically. The robustness of the empirical results that lead to the physicists' claims, the significance of the exponential distribution in statistical mechanics, and the case for a conservation law in economics are discussed. The conclusion reached is that physicists' conception of the economy is too narrow even within their chosen framework, but that their overall approach is insightful. The dual labor market theory of segmented labor markets is invoked to understand why the observed distribution may be a mixture of distributional components, corresponding to different generating mechanisms described in Reich et al. (1973). The application of informational entropy in chapter II connects this work to Bayesian analysis and maximum entropy econometrics. The analysis follows E. T. Jaynes's treatment of Wolf's dice data, but is applied to the distribution of earned income based on CPS data. The results are calibrated to account for rounded survey responses using a simple simulation, and answer the graphical analyses by physicists. The results indicate that neither the income distribution of all respondents nor of the subpopulation used by physicists appears to be exponential. The empirics do support the claim that a mixture with exponential and log-normal distributional components fits the data. In the final chapter, a log-linear model is used to fit the exponential to the earned income distribution. Separating the CPS data by gender and marital status reveals that the exponential is only an appropriate model for a limited number of subpopulations, namely the never married and women. The estimated parameter for never-married men's incomes is significantly different from the parameter estimated for never-married women, implying that either the combined distribution is not exponential or that the individual distributions are not exponential. However, it substantiates the existence of a persistent gender income gap among the never-married. References: Reich, M., D. M. Gordon, and R. C. Edwards (1973). A Theory of Labor Market Segmentation. Quarterly Journal of Economics 63, 359-365. Yakovenko, V. M. (2009). Econophysics, Statistical Mechanics Approach to. In R. A. Meyers (Ed.), Encyclopedia of Complexity and System Science. Springer.

  7. Dynamics of transit times and StorAge Selection functions in four forested catchments from stable isotope data

    NASA Astrophysics Data System (ADS)

    Rodriguez, Nicolas B.; McGuire, Kevin J.; Klaus, Julian

    2017-04-01

    Transit time distributions, residence time distributions and StorAge Selection functions are fundamental integrated descriptors of water storage, mixing, and release in catchments. In this contribution, we determined these time-variant functions in four neighboring forested catchments in the H.J. Andrews Experimental Forest, Oregon, USA by employing a two-year time series of 18O in precipitation and discharge. Previous studies in these catchments assumed stationary, exponentially distributed transit times, and complete mixing/random sampling to explore the influence of various catchment properties on the mean transit time. Here we relaxed such assumptions to relate transit time dynamics and the variability of StorAge Selection functions to catchment characteristics, catchment storage, and meteorological forcing seasonality. Conceptual models of the catchments, consisting of two reservoirs combined in series-parallel, were calibrated to discharge and stable isotope tracer data. We assumed randomly sampled/fully mixed conditions for each reservoir, which resulted in an incompletely mixed system overall. Based on the results we solved the Master Equation, which describes the dynamics of water ages in storage and in catchment outflows. Consistent across all catchments, we found that transit times were generally shorter during wet periods, indicating the contribution of shallow storage (soil, saprolite) to discharge. During extended dry periods, transit times increased significantly, indicating the contribution of deeper storage (bedrock) to discharge. Our work indicated that the strong seasonality of precipitation impacted transit times by leading to a dynamic selection of stored water ages, whereas catchment size was not a control on transit times. In general this work showed the usefulness of using time-variant transit times with conceptual models and confirmed the existence of the catchment age mixing behaviors emerging from other similar studies.

  8. Multiserver Queueing Model subject to Single Exponential Vacation

    NASA Astrophysics Data System (ADS)

    Vijayashree, K. V.; Janani, B.

    2018-04-01

    A multi-server queueing model subject to single exponential vacation is considered. Arrivals join the queue according to a Poisson process and service takes place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return after a fixed interval of time. The servers then start providing service if there are waiting customers; otherwise they wait for the busy period to begin. The vacation times are also assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Also, numerical illustrations are added to visualize the effect of various parameters.

  9. Geometry of the q-exponential distribution with dependent competing risks and accelerated life testing

    NASA Astrophysics Data System (ADS)

    Zhang, Fode; Shi, Yimin; Wang, Ruibing

    2017-02-01

    In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Noting that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered as a generalized Gumbel copula, is discussed to illustrate the structure of the dependent random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimators and the levels of association under different hybrid progressive censoring schemes (HPCSs).

  10. Decay of random correlation functions for unimodal maps

    NASA Astrophysics Data System (ADS)

    Baladi, Viviane; Benedicks, Michael; Maume-Deschamps, Véronique

    2000-10-01

    Since the pioneering results of Jakobson and subsequent work by Benedicks-Carleson and others, it is known that quadratic maps f_a(χ) = a − χ² admit a unique absolutely continuous invariant measure for a positive measure set of parameters a. For topologically mixing f_a, Young and Keller-Nowicki independently proved exponential decay of correlation functions for this a.c.i.m. and smooth observables. We consider random compositions of small perturbations f + ω_t, with f = f_a or another unimodal map satisfying certain nonuniform hyperbolicity axioms, and ω_t chosen independently and identically in [−ɛ, ɛ]. Baladi-Viana showed exponential mixing of the associated Markov chain, i.e., averaging over all random itineraries. We obtain stretched exponential bounds for the random correlation functions of Lipschitz observables for the sample measure μ_ω of almost every itinerary.

  11. A Nonequilibrium Rate Formula for Collective Motions of Complex Molecular Systems

    NASA Astrophysics Data System (ADS)

    Yanao, Tomohiro; Koon, Wang Sang; Marsden, Jerrold E.

    2010-09-01

    We propose a compact reaction rate formula that accounts for a non-equilibrium distribution of residence times of complex molecules, based on a detailed study of the coarse-grained phase space of a reaction coordinate. We take the structural transition dynamics of a six-atom Morse cluster between two isomers as a prototype of multi-dimensional molecular reactions. Residence time distribution of one of the isomers shows an exponential decay, while that of the other isomer deviates largely from the exponential form and has multiple peaks. Our rate formula explains such equilibrium and non-equilibrium distributions of residence times in terms of the rates of diffusions of energy and the phase of the oscillations of the reaction coordinate. Rapid diffusions of energy and the phase generally give rise to the exponential decay of residence time distribution, while slow diffusions give rise to a non-exponential decay with multiple peaks. We finally make a conjecture about a general relationship between the rates of the diffusions and the symmetry of molecular mass distributions.

  12. Persistence of exponential bed thickness distributions in the stratigraphic record: Experiments and theory

    NASA Astrophysics Data System (ADS)

    Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.

    2010-12-01

    Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by quantifying the probability distributions of processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale-parameter of the resulting exponential distribution of bed thicknesses.
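
    A stylized 1D sketch of this idea (not the authors' model) is given below: a surface evolves by symmetric, heavy-tailed elevation increments with a small assumed net aggradation rate, bounding surfaces are erosional levels that are never subsequently eroded, and the resulting bed-thickness distribution can then be inspected for an exponential form.

```python
import numpy as np

rng = np.random.default_rng(5)
# Symmetric, heavy-tailed elevation fluctuations (Student-t increments) plus
# a small net aggradation rate so that a stratigraphic column accumulates
steps = rng.standard_t(df=1.5, size=100000) + 0.05
eta = np.cumsum(steps)

# A level is preserved if the surface never later drops below it:
# running minimum of elevation taken from the right (future minimum)
future_min = np.minimum.accumulate(eta[::-1])[::-1]

# Preserved erosional bounding surfaces: erosional steps whose post-erosion
# elevation is never eroded again
erosion = np.diff(eta) < 0
preserved = erosion & (eta[1:] == future_min[1:])
levels = np.sort(eta[1:][preserved])

# Bed thicknesses = spacing between successive preserved bounding surfaces
beds = np.diff(levels)
beds = beds[beds > 0]
print(beds.size, beds.mean(), np.percentile(beds, [50, 90, 99]))
```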

  13. Chemical Continuous Time Random Walks

    NASA Astrophysics Data System (ADS)

    Aquino, T.; Dentz, M.

    2017-12-01

    Traditional methods for modeling solute transport through heterogeneous media employ Eulerian schemes to solve for solute concentration. More recently, Lagrangian methods have removed the need for spatial discretization through the use of Monte Carlo implementations of Langevin equations for solute particle motions. While there have been recent advances in modeling chemically reactive transport with recourse to Lagrangian methods, these remain less developed than their Eulerian counterparts, and many open problems such as efficient convergence and reconstruction of the concentration field remain. We explore a different avenue and consider the question: In heterogeneous chemically reactive systems, is it possible to describe the evolution of macroscopic reactant concentrations without explicitly resolving the spatial transport? Traditional Kinetic Monte Carlo methods, such as the Gillespie algorithm, model chemical reactions as random walks in particle number space, without the introduction of spatial coordinates. The inter-reaction times are exponentially distributed under the assumption that the system is well mixed. In real systems, transport limitations lead to incomplete mixing and decreased reaction efficiency. We introduce an arbitrary inter-reaction time distribution, which may account for the impact of incomplete mixing. This process defines an inhomogeneous continuous time random walk in particle number space, from which we derive a generalized chemical Master equation and formulate a generalized Gillespie algorithm. We then determine the modified chemical rate laws for different inter-reaction time distributions. We trace Michaelis-Menten-type kinetics back to finite-mean delay times, and predict time-nonlocal macroscopic reaction kinetics as a consequence of broadly distributed delays. Non-Markovian kinetics exhibit weak ergodicity breaking and show key features of reactions under local non-equilibrium.
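
    A toy version of a Gillespie-type simulation with a non-exponential inter-reaction time distribution is sketched below for a single bimolecular reaction. The gamma waiting-time choice and all parameter values are illustrative assumptions, not the generalized algorithm derived in the work.

```python
import numpy as np

def generalized_gillespie(x0, rate, t_max, waiting="gamma", shape=0.5, rng=None):
    """Sketch of a Gillespie-type simulation of A + B -> C in which the
    inter-reaction time is drawn from a non-exponential distribution whose
    mean still equals 1/(total propensity); a gamma with shape < 1 mimics
    broadly distributed delays such as those caused by incomplete mixing."""
    if rng is None:
        rng = np.random.default_rng(6)
    a, b, c = x0
    t, history = 0.0, [(0.0, x0)]
    while t < t_max and a > 0 and b > 0:
        propensity = rate * a * b
        mean_wait = 1.0 / propensity
        if waiting == "gamma":
            dt = rng.gamma(shape, mean_wait / shape)   # same mean, broader spread
        else:
            dt = rng.exponential(mean_wait)            # classical (Markovian) case
        t += dt
        a, b, c = a - 1, b - 1, c + 1
        history.append((t, (a, b, c)))
    return history

traj = generalized_gillespie((100, 80, 0), rate=0.01, t_max=50.0)
print(traj[-1])
```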

  14. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    NASA Astrophysics Data System (ADS)

    Baidillah, Marlin R.; Takei, Masahiro

    2017-06-01

    A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions has been developed. The exponential normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential fitting curve based on simulation, and a scaling function is added to adjust for the experimental system conditions. The exponential normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e., the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.

  15. Stretched exponential distributions in nature and economy: ``fat tails'' with characteristic scales

    NASA Astrophysics Data System (ADS)

    Laherrère, J.; Sornette, D.

    1998-04-01

    To account quantitatively for many reported "natural" fat tail distributions in Nature and Economy, we propose the stretched exponential family as a complement to the often used power law distributions. It has many advantages, among which to be economical with only two adjustable parameters with clear physical interpretation. Furthermore, it derives from a simple and generic mechanism in terms of multiplicative processes. We show that stretched exponentials describe very well the distributions of radio and light emissions from galaxies, of US GOM OCS oilfield reserve sizes, of World, US and French agglomeration sizes, of country population sizes, of daily Forex US-Mark and Franc-Mark price variations, of Vostok (near the south pole) temperature variations over the last 400 000 years, of the Raup-Sepkoski's kill curve and of citations of the most cited physicists in the world. We also discuss its potential for the distribution of earthquake sizes and fault displacements. We suggest physical interpretations of the parameters and provide a short toolkit of the statistical properties of the stretched exponentials. We also provide a comparison with other distributions, such as the shifted linear fractal, the log-normal and the recently introduced parabolic fractal distributions.
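
    A quick way to check a stretched-exponential (Weibull-type) survival law on empirical data is the linearization sketched below. The synthetic Weibull sample stands in for any of the data sets mentioned, and the plotting-position convention is one of several reasonable choices.

```python
import numpy as np

def fit_stretched_exponential(samples):
    """Fit the survival function S(x) = exp(-(x/x0)**c) by linear regression of
    log(-log(S)) against log(x), using the empirical complementary CDF."""
    x = np.sort(np.asarray(samples, dtype=float))
    s = 1.0 - np.arange(1, x.size + 1) / (x.size + 1.0)   # empirical survival
    y = np.log(-np.log(s))
    c, intercept = np.polyfit(np.log(x), y, 1)
    x0 = np.exp(-intercept / c)
    return c, x0

rng = np.random.default_rng(7)
data = rng.weibull(0.7, 10000) * 3.0     # Weibull samples have a stretched-exponential survival
print(fit_stretched_exponential(data))   # expect c ~ 0.7, x0 ~ 3.0
```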

  16. On the distributions of annual and seasonal daily rainfall extremes in central Arizona and their spatial variability

    NASA Astrophysics Data System (ADS)

    Mascaro, Giuseppe

    2018-04-01

    This study uses daily rainfall records of a dense network of 240 gauges in central Arizona to gain insights on (i) the variability of the seasonal distributions of rainfall extremes; (ii) how the seasonal distributions affect the shape of the annual distribution; and (iii) the presence of spatial patterns and orographic control for these distributions. For this aim, recent methodological advancements in peak-over-threshold analysis and application of the Generalized Pareto Distribution (GPD) were used to assess the suitability of the GPD hypothesis and improve the estimation of its parameters, while limiting the effect of short sample sizes. The distribution of daily rainfall extremes was found to be heavy-tailed (i.e., GPD shape parameter ξ > 0) during the summer season, dominated by convective monsoonal thunderstorms. The exponential distribution (a special case of GPD with ξ = 0) was instead showed to be appropriate for modeling wintertime daily rainfall extremes, mainly caused by cold fronts transported by westerly flow. The annual distribution exhibited a mixed behavior, with lighter upper tails than those found in summer. A hybrid model mixing the two seasonal distributions was demonstrated capable of reproducing the annual distribution. Organized spatial patterns, mainly controlled by elevation, were observed for the GPD scale parameter, while ξ did not show any clear control of location or orography. The quantiles returned by the GPD were found to be very similar to those provided by the National Oceanic and Atmospheric Administration (NOAA) Atlas 14, which used the Generalized Extreme Value (GEV) distribution. Results of this work are useful to improve statistical modeling of daily rainfall extremes at high spatial resolution and provide diagnostic tools for assessing the ability of climate models to simulate extreme events.
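
    A peak-over-threshold fit of the GPD along these lines can be sketched with scipy. The synthetic "daily rainfall" record, the 95% threshold choice, and the return-level formula below are illustrative assumptions, not the study's estimation procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
daily_rain = rng.gamma(0.4, 8.0, size=40 * 365)      # hypothetical 40-year daily record (mm)

threshold = np.quantile(daily_rain[daily_rain > 0], 0.95)
excesses = daily_rain[daily_rain > threshold] - threshold

# Fit the Generalized Pareto Distribution to the excesses (location fixed at 0)
xi, loc, sigma = stats.genpareto.fit(excesses, floc=0.0)

# xi > 0 suggests a heavy (sub-exponential) tail; xi ~ 0 is the exponential case
exceed_rate = excesses.size / 40.0                   # exceedances per year
q100 = threshold + stats.genpareto.ppf(1.0 - 1.0 / (100.0 * exceed_rate),
                                       xi, loc=0.0, scale=sigma)
print(xi, sigma, q100)   # shape, scale, approximate 100-year daily total
```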

  17. Effects of clustered transmission on epidemic growth Comment on "Mathematical models to characterize early epidemic growth: A review" by Gerardo Chowell et al.

    NASA Astrophysics Data System (ADS)

    Merler, Stefano

    2016-09-01

    Characterizing the early growth profile of an epidemic outbreak is key for predicting the likely trajectory of the number of cases and for designing adequate control measures. Epidemic profiles characterized by exponential growth have been widely observed in the past and a grounding theoretical framework for the analysis of infectious disease dynamics was provided by the pioneering work of Kermack and McKendrick [1]. In particular, exponential growth stems from the assumption that pathogens spread in homogeneous mixing populations; that is, individuals of the population mix uniformly and randomly with each other. However, this assumption was readily recognized as highly questionable [2], and sub-exponential profiles of epidemic growth have been observed in a number of epidemic outbreaks, including HIV/AIDS, foot-and-mouth disease, measles and, more recently, Ebola [3,4].

  18. A demographic study of the exponential distribution applied to uneven-aged forests

    Treesearch

    Jeffrey H. Gove

    2016-01-01

    A demographic approach based on a size-structured version of the McKendrick-Von Foerster equation is used to demonstrate a theoretical link between the population size distribution and the underlying vital rates (recruitment, mortality and diameter growth) for the population of individuals whose diameter distribution is negative exponential. This model supports the...

  19. Exponentiated power Lindley distribution.

    PubMed

    Ashour, Samir K; Eltehiwy, Mahmoud A

    2015-11-01

    A new generalization of the Lindley distribution was recently proposed by Ghitany et al. [1], called the power Lindley distribution. Another generalization was introduced by Nadarajah et al. [2], named the generalized Lindley distribution. This paper proposes a further generalization of the Lindley distribution that encompasses both. We refer to this new generalization as the exponentiated power Lindley distribution. The new distribution is important since it contains as special sub-models several well-known distributions in addition to the above two, such as the Lindley distribution among many others. It also provides more flexibility for analyzing complex real data sets. We study some statistical properties of the new distribution and discuss maximum likelihood estimation of its parameters. Least squares estimation is also used to evaluate the parameters. Three algorithms are proposed for generating random data from the proposed distribution. An application to a real data set shows that the exponentiated power Lindley distribution can be used quite effectively in analyzing real lifetime data.

  20. Mean Excess Function as a method of identifying sub-exponential tails: Application to extreme daily rainfall

    NASA Astrophysics Data System (ADS)

    Nerantzaki, Sofia; Papalexiou, Simon Michael

    2017-04-01

    Precisely identifying the distribution tail of a geophysical variable is difficult, or even impossible. First, the tail is the part of the distribution for which the least empirical information is available; second, a universally accepted definition of tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can be generally categorized into two major families, i.e., sub-exponentials (heavy-tailed) and hyper-exponentials (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool to assess which type of tail better describes empirical data. The MEF is based on the mean value of a variable above a threshold and results in a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. The validation of the method using Monte Carlo techniques on four theoretical distributions covering major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records were examined, from all over the world and with sample sizes over 100 years, revealing that heavy-tailed distributions describe rainfall extremes more accurately.
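
    A bare-bones version of the MEF diagnostic (without the sample-size-dependent confidence intervals developed in the study) might look like the following; the two synthetic samples illustrate the contrast between an exponential and a Pareto-type tail.

```python
import numpy as np

def mean_excess(samples, thresholds):
    """Mean Excess Function: average exceedance above each threshold.
    A roughly zero slope is consistent with an exponential tail; a clearly
    positive slope points to a heavier (sub-exponential) tail."""
    x = np.asarray(samples, dtype=float)
    return np.array([x[x > u].mean() - u for u in thresholds])

rng = np.random.default_rng(9)
exp_data = rng.exponential(10.0, 20000)
par_data = (rng.pareto(3.0, 20000) + 1.0) * 10.0      # Pareto-type (heavy) tail

u1 = np.quantile(exp_data, np.linspace(0.5, 0.99, 25))
slope_exp = np.polyfit(u1, mean_excess(exp_data, u1), 1)[0]
u2 = np.quantile(par_data, np.linspace(0.5, 0.99, 25))
slope_par = np.polyfit(u2, mean_excess(par_data, u2), 1)[0]
print(slope_exp, slope_par)   # ~0 for exponential, clearly positive for Pareto
```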

  1. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

    GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non-exponential settings, yielding more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a...

  2. New results on global exponential dissipativity analysis of memristive inertial neural networks with distributed time-varying delays.

    PubMed

    Zhang, Guodong; Zeng, Zhigang; Hu, Junhao

    2018-01-01

    This paper is concerned with the global exponential dissipativity of memristive inertial neural networks with discrete and distributed time-varying delays. By constructing appropriate Lyapunov-Krasovskii functionals, some new sufficient conditions ensuring global exponential dissipativity of memristive inertial neural networks are derived. Moreover, the globally exponential attractive sets and positive invariant sets are also presented here. In addition, the new proposed results here complement and extend the earlier publications on conventional or memristive neural network dynamical systems. Finally, numerical simulations are given to illustrate the effectiveness of obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Dynamic heterogeneity and conditional statistics of non-Gaussian temperature fluctuations in turbulent thermal convection

    NASA Astrophysics Data System (ADS)

    He, Xiaozhou; Wang, Yin; Tong, Penger

    2018-05-01

    Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and one does not understand why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ε. The conditional PDF G(δT|ε) of δT under a constant ε is found to be of Gaussian form, and its variance σ_T^2 for different values of ε follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism for the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
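
    The convolution argument above can be checked numerically with a generic sketch (synthetic numbers, not the authors' RBC data): drawing a variance from an exponential distribution and then a Gaussian fluctuation with that variance yields a marginal PDF with exponential tails (a Laplace-like distribution).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000

# Step 1: variance sigma_T^2 drawn from an exponential distribution (mean 1).
var = rng.exponential(scale=1.0, size=n)
# Step 2: conditional fluctuation delta_T ~ Gaussian(0, sigma_T) given that variance.
delta_T = rng.normal(loc=0.0, scale=np.sqrt(var))

# The tail of the marginal PDF should decay exponentially: log P(|dT|) ~ -c*|dT|.
hist, edges = np.histogram(np.abs(delta_T), bins=60, range=(0, 6), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = (centers > 1.5) & (hist > 0)
slope = np.polyfit(centers[mask], np.log(hist[mask]), 1)[0]
print("log-PDF tail slope ≈", round(slope, 2))   # roughly constant negative slope
```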

  4. A model of canopy photosynthesis incorporating protein distribution through the canopy and its acclimation to light, temperature and CO2

    PubMed Central

    Johnson, Ian R.; Thornley, John H. M.; Frantz, Jonathan M.; Bugbee, Bruce

    2010-01-01

    Background and Aims The distribution of photosynthetic enzymes, or nitrogen, through the canopy affects canopy photosynthesis, as well as plant quality and nitrogen demand. Most canopy photosynthesis models assume an exponential distribution of nitrogen, or protein, through the canopy, although this is rarely consistent with experimental observation. Previous optimization schemes to derive the nitrogen distribution through the canopy generally focus on the distribution of a fixed amount of total nitrogen, which fails to account for the variation in both the actual quantity of nitrogen in response to environmental conditions and the interaction of photosynthesis and respiration at similar levels of complexity. Model A model of canopy photosynthesis is presented for C3 and C4 canopies that considers a balanced approach between photosynthesis and respiration as well as plant carbon partitioning. Protein distribution is related to irradiance in the canopy by a flexible equation for which the exponential distribution is a special case. The model is designed to be simple to parameterize for crop, pasture and ecosystem studies. The amount and distribution of protein that maximizes canopy net photosynthesis is calculated. Key Results The optimum protein distribution is not exponential, but is quite linear near the top of the canopy, which is consistent with experimental observations. The overall concentration within the canopy is dependent on environmental conditions, including the distribution of direct and diffuse components of irradiance. Conclusions The widely used exponential distribution of nitrogen or protein through the canopy is generally inappropriate. The model derives the optimum distribution with characteristics that are consistent with observation, so overcoming limitations of using the exponential distribution. Although canopies may not always operate at an optimum, optimization analysis provides valuable insight into plant acclimation to environmental conditions. Protein distribution has implications for the prediction of carbon assimilation, plant quality and nitrogen demand. PMID:20861273

  5. Relationship between Item Responses of Negative Affect Items and the Distribution of the Sum of the Item Scores in the General Population

    PubMed Central

    Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    Background Several studies have shown that total depressive symptom scores in the general population approximate an exponential pattern, except for the lower end of the distribution. The Center for Epidemiologic Studies Depression Scale (CES-D) consists of 20 items, each of which may take on four scores: “rarely,” “some,” “occasionally,” and “most of the time.” Recently, we reported that the item responses for 16 negative affect items commonly exhibit exponential patterns, except for the level of “rarely,” leading us to hypothesize that the item responses at the level of “rarely” may be related to the non-exponential pattern typical of the lower end of the distribution. To verify this hypothesis, we investigated how the item responses contribute to the distribution of the sum of the item scores. Methods Data collected from 21,040 subjects who had completed the CES-D questionnaire as part of a Japanese national survey were analyzed. To assess the item responses of negative affect items, we used a parameter r, which denotes the ratio of “rarely” to “some” in each item response. The distributions of the sum of negative affect items in various combinations were analyzed using log-normal scales and curve fitting. Results The sum of the item scores approximated an exponential pattern regardless of the combination of items, whereas, at the lower end of the distributions, there was a clear divergence between the actual data and the predicted exponential pattern. At the lower end of the distributions, the sum of the item scores with high values of r exhibited higher scores compared to those predicted from the exponential pattern, whereas the sum of the item scores with low values of r exhibited lower scores compared to those predicted. Conclusions The distributional pattern of the sum of the item scores could be predicted from the item responses of such items. PMID:27806132

  6. Lump solutions and interaction phenomenon to the third-order nonlinear evolution equation

    NASA Astrophysics Data System (ADS)

    Kofane, T. C.; Fokou, M.; Mohamadou, A.; Yomba, E.

    2017-11-01

    In this work, the lump solution and the kink solitary wave solution of the (2+1)-dimensional third-order evolution equation are obtained through symbolic computation with Maple, using the Hirota bilinear method. We have assumed that the lump solution is centered at the origin when t = 0. By combining a positive quadratic function with an exponential function, as well as a positive quadratic function with a hyperbolic cosine function, interaction solutions such as lump-exponential and lump-hyperbolic-cosine are presented. A completely non-elastic interaction between a lump and a kink soliton is observed, showing that a lump solution can be swallowed by a kink soliton.

  7. Dynamic autoinoculation and the microbial ecology of a deep water hydrocarbon irruption

    PubMed Central

    Valentine, David L.; Mezić, Igor; Maćešić, Senka; Črnjarić-Žic, Nelida; Ivić, Stefan; Hogan, Patrick J.; Fonoberov, Vladimir A.; Loire, Sophie

    2012-01-01

    The irruption of gas and oil into the Gulf of Mexico during the Deepwater Horizon event fed a deep sea bacterial bloom that consumed hydrocarbons in the affected waters, formed a regional oxygen anomaly, and altered the microbiology of the region. In this work, we develop a coupled physical–metabolic model to assess the impact of mixing processes on these deep ocean bacterial communities and their capacity for hydrocarbon and oxygen use. We find that observed biodegradation patterns are well-described by exponential growth of bacteria from seed populations present at low abundance and that current oscillation and mixing processes played a critical role in distributing hydrocarbons and associated bacterial blooms within the northeast Gulf of Mexico. Mixing processes also accelerated hydrocarbon degradation through an autoinoculation effect, where water masses, in which the hydrocarbon irruption had caused blooms, later returned to the spill site with hydrocarbon-degrading bacteria persisting at elevated abundance. Interestingly, although the initial irruption of hydrocarbons fed successive blooms of different bacterial types, subsequent irruptions promoted consistency in the structure of the bacterial community. These results highlight an impact of mixing and circulation processes on biodegradation activity of bacteria during the Deepwater Horizon event and suggest an important role for mixing processes in the microbial ecology of deep ocean environments. PMID:22233808

  8. Anomalous yet Brownian.

    PubMed

    Wang, Bo; Anthony, Stephen M; Bae, Sung Chul; Granick, Steve

    2009-09-08

    We describe experiments using single-particle tracking in which mean-square displacement is simply proportional to time (Fickian), yet the distribution of displacement probability is not Gaussian as should be expected of a classical random walk but, instead, is decidedly exponential for large displacements, the decay length of the exponential being proportional to the square root of time. The first example is when colloidal beads diffuse along linear phospholipid bilayer tubes whose radius is the same as that of the beads. The second is when beads diffuse through entangled F-actin networks, bead radius being less than one-fifth of the actin network mesh size. We explore the relevance to dynamic heterogeneity in trajectory space, which has been extensively discussed regarding glassy systems. Data for the second system might suggest activated diffusion between pores in the entangled F-actin networks, in the same spirit as activated diffusion and exponential tails observed in glassy systems. But the first system shows exceptionally rapid diffusion, nearly as rapid as for identical colloids in free suspension, yet still displaying an exponential probability distribution as in the second system. Thus, although the exponential tail is reminiscent of glassy systems, in fact, these dynamics are exceptionally rapid. We also compare with particle trajectories that are at first subdiffusive but Fickian at the longest measurement times, finding that displacement probability distributions fall onto the same master curve in both regimes. The need is emphasized for experiments, theory, and computer simulation to allow definitive interpretation of this simple and clean exponential probability distribution.

  9. Lamination and mixing in laminar flows driven by Lorentz body forces

    NASA Astrophysics Data System (ADS)

    Rossi, L.; Doorly, D.; Kustrin, D.

    2012-01-01

    We present a new approach to the design of mixers. This approach relies on a sequence of tailored flows coupled with a new procedure to quantify the local degree of striation, called lamination. Lamination translates to the distance over which molecular diffusion needs to act to finalise mixing. A novel in situ mixing is achieved by the tailored sequence of flows, which is shown to have the property that material lines and lamination grow exponentially, according to processes akin to the well-known baker's map. The degree of mixing (stirring coefficient) likewise shows exponential growth before the stirring rate saturates. Such saturation happens when the typical striation thickness is smaller than the diffusion length scale. Moreover, without molecular diffusion, the predicted striation thickness would become smaller than the size of a hydrogen atom within 40 flow turnover times. In fact, we conclude that about 3 minutes, i.e. 15 turnover times, are sufficient to mix species with very low diffusivities, e.g. suspensions of viruses, bacteria, human cells, and DNA.

  10. Urban stormwater capture curve using three-parameter mixed exponential probability density function and NRCS runoff curve number method.

    PubMed

    Kim, Sangdan; Han, Suhee

    2010-01-01

    Most related literature on designing urban non-point-source management systems assumes that precipitation event depths follow the 1-parameter exponential probability density function, in order to reduce the mathematical complexity of the derivation process. However, the way rainfall is expressed is the most important factor in analyzing stormwater; thus, a better mathematical expression, which represents the probability distribution of rainfall depths, is suggested in this study. In addition, the rainfall-runoff calculation procedure required for deriving a stormwater-capture curve is modified to use the U.S. Natural Resources Conservation Service (Washington, D.C.) (NRCS) runoff curve number method, to consider the nonlinearity of the rainfall-runoff relation and, at the same time, obtain a more verifiable and representative curve for design when applying it to urban drainage areas with complicated land-use characteristics, such as occur in Korea. The result of developing the stormwater-capture curve from the rainfall data in Busan, Korea, confirms that the methodology suggested in this study provides a better solution than the pre-existing one.
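
    A compact sketch of the two ingredients described above, under assumed parameter values (not those calibrated for Busan): event depths drawn from a three-parameter mixed exponential density f(x) = (w/μ1)e^(-x/μ1) + ((1-w)/μ2)e^(-x/μ2), and event runoff computed with the standard NRCS curve number relation.

```python
import numpy as np

def sample_mixed_exponential(n, w, mu1, mu2, rng):
    """Event depths (mm) from a 3-parameter mixed exponential: weight w, means mu1, mu2."""
    pick = rng.uniform(size=n) < w
    return np.where(pick, rng.exponential(mu1, n), rng.exponential(mu2, n))

def nrcs_runoff(p_mm, cn):
    """NRCS curve number runoff (mm) with the usual initial abstraction Ia = 0.2*S."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    return np.where(p_mm > ia, (p_mm - ia) ** 2 / (p_mm - ia + s), 0.0)

rng = np.random.default_rng(7)
depths = sample_mixed_exponential(10000, w=0.7, mu1=5.0, mu2=25.0, rng=rng)  # assumed values
runoff = nrcs_runoff(depths, cn=85)

# Fraction of total event runoff captured by a storage able to hold the first d mm of runoff.
for d in (5.0, 10.0, 20.0):
    captured = np.minimum(runoff, d).sum() / runoff.sum()
    print(f"capture ratio for {d:4.1f} mm storage: {captured:.2f}")
```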

  11. Global exponential stability of octonion-valued neural networks with leakage delay and mixed delays.

    PubMed

    Popa, Călin-Adrian

    2018-06-08

    This paper discusses octonion-valued neural networks (OVNNs) with leakage delay, time-varying delays, and distributed delays, for which the states, weights, and activation functions belong to the normed division algebra of octonions. The octonion algebra is a nonassociative and noncommutative generalization of the complex and quaternion algebras, but does not belong to the category of Clifford algebras, which are associative. In order to avoid the nonassociativity of the octonion algebra and also the noncommutativity of the quaternion algebra, the Cayley-Dickson construction is used to decompose the OVNNs into 4 complex-valued systems. By using appropriate Lyapunov-Krasovskii functionals, with double and triple integral terms, the free weighting matrix method, and simple and double integral Jensen inequalities, delay-dependent criteria are established for the exponential stability of the considered OVNNs. The criteria are given in terms of complex-valued linear matrix inequalities, for two types of Lipschitz conditions which are assumed to be satisfied by the octonion-valued activation functions. Finally, two numerical examples illustrate the feasibility, effectiveness, and correctness of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.

    PubMed

    Hedeker, D; Gibbons, R D

    1996-05-01

    MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally-distributed response data including autocorrelated errors. This model can be used for analysis of unbalanced longitudinal data, where individuals may be measured at a different number of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. This model can also be used for analysis of clustered data, where the mixed-effects model assumes data within clusters are dependent. The degree of dependency is estimated jointly with estimates of the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating usage and features of MIXREG are provided.

  13. Pattern analysis of total item score and item response of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative sample of US adults

    PubMed Central

    Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Yutaka, Ono; Furukawa, Toshiaki A.

    2017-01-01

    Background Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except for the lower end of the distribution. Furthermore, we confirmed that the exponential pattern is present for the individual item responses on the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of such findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study. Methods Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored using a 5-point scale: “none of the time,” “a little of the time,” “some of the time,” “most of the time,” and “all of the time.” The pattern of total score distribution and item responses were analyzed using graphical analysis and exponential regression model. Results The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from “a little of the time” to “all of the time” on log-normal scales, while “none of the time” response was not related to this exponential pattern. Discussion The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales. PMID:28289560

  14. Investigation of non-Gaussian effects in the Brazilian option market

    NASA Astrophysics Data System (ADS)

    Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.

    2018-04-01

    An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power law distribution, the so-called q-Gaussian distribution or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. Among these cases, however, the exponential model performs better than the q-Gaussian model 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above ninety percent.

  15. Calculating Formulae of Proportion Factor and Mean Neutron Exposure in the Exponential Expression of Neutron Exposure Distribution

    NASA Astrophysics Data System (ADS)

    Feng-Hua, Zhang; Gui-De, Zhou; Kun, Ma; Wen-Juan, Ma; Wen-Yuan, Cui; Bo, Zhang

    2016-07-01

    Previous studies have shown that, for the three main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the neutron exposure distribution (DNE) in the nucleosynthesis region can always be considered as an exponential function, i.e., ρ_AGB(τ) = (C/τ_0) exp(-τ/τ_0), over an effective range of the neutron exposure values. However, the specific expressions of the proportion factor C and the mean neutron exposure τ_0 in the exponential distribution function for different models are not completely determined in the related literature. By dissecting the basic method of obtaining the exponential DNE, and systematically analyzing the solution procedures of neutron exposure distribution functions in different stellar models, the general formulae, as well as their auxiliary equations, for calculating C and τ_0 are derived. Given the discrete neutron exposure distribution P_k, the relationships of C and τ_0 with the model parameters can be determined. The result of this study effectively solves the problem of analytically calculating the DNE in the current low-mass AGB star s-process nucleosynthesis model of 13C-pocket radiative burning.

  16. Scaling in the distribution of intertrade durations of Chinese stocks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing

    2008-10-01

    The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. The distribution of intertrade durations is Weibull followed by a power-law tail with an asymptotic tail exponent close to 3.

  17. Echo Statistics of Aggregations of Scatterers in a Random Waveguide: Application to Biologic Sonar Clutter

    DTIC Science & Technology

    2012-09-01

    used in this paper to compare probability density functions, the Lilliefors test and the Kullback-Leibler distance. The Lilliefors test is a goodness... of interest in this study are the Rayleigh distribution and the exponential distribution. The Lilliefors test is used to test goodness-of-fit for... Lilliefors test for goodness of fit with an exponential distribution. These results suggest that...

  18. The size distribution of Pacific Seamounts

    NASA Astrophysics Data System (ADS)

    Smith, Deborah K.; Jordan, Thomas H.

    1987-11-01

    An analysis of wide-beam, Sea Beam and map-count data in the eastern and southern Pacific confirms the hypothesis that the average number of "ordinary" seamounts with summit heights h ≥ H can be approximated by the exponential frequency-size distribution ν(H) = ν_0 e^(-βH). The exponential model, characterized by the single scale parameter β^-1, is found to be superior to a power-law (self-similar) model. The exponential model provides a good first-order description of the summit-height distribution over a very broad spectrum of seamount sizes, from small cones (h < 300 m) to tall composite volcanoes (h > 3500 m). The distribution parameters obtained from 157,000 km of wide-beam profiles in the eastern and southern Pacific Ocean are ν_0 = (5.4 ± 0.65) × 10^-9 m^-2 and β = (3.5 ± 0.21) × 10^-3 m^-1, yielding an average of 5400 ± 650 seamounts per million square kilometers, of which 170 ± 17 are greater than one kilometer in height. The exponential distribution provides a reference for investigating the populations of not-so-ordinary seamounts, such as those on hotspot swells and near fracture zones, and seamounts in other ocean basins. If we assume that volcano height is determined by a hydraulic head proportional to the source depth of the magma column, then our observations imply an approximately exponential distribution of source depths. For reasonable values of magma and crustal densities, a volcano with the characteristic height β^-1 = 285 m has an apparent source depth on the order of the crustal thickness.
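
    The quoted densities follow directly from the exponential model ν(H) = ν_0 e^(-βH) with the stated parameters; the quick check below simply evaluates that formula.

```python
import math

nu0  = 5.4e-9   # seamounts per square metre (quoted value)
beta = 3.5e-3   # per metre (quoted value)

def seamounts_per_million_km2(h_metres):
    """Expected number of seamounts taller than h per 10^6 km^2 (= 10^12 m^2)."""
    return nu0 * math.exp(-beta * h_metres) * 1e12

print(seamounts_per_million_km2(0.0))     # ~5400 seamounts of any height
print(seamounts_per_million_km2(1000.0))  # ~160-170 seamounts taller than 1 km
```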

  19. A mechanism producing power law etc. distributions

    NASA Astrophysics Data System (ADS)

    Li, Heling; Shen, Hongjun; Yang, Bin

    2017-07-01

    Power-law distributions play an increasingly important role in the study of complex systems. Motivated by the intractability of complex systems, the idea of incomplete statistics is utilized and expanded: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy, and probability distribution functions of exponential form, power-law form, and the product form of a power function and an exponential function are derived from the Shannon entropy and the maximum entropy principle. It is thus shown that the maximum entropy principle can fully replace the equal-probability hypothesis. Since the power-law distribution and the distribution of product form between a power function and an exponential function, which cannot be derived via the equal-probability hypothesis, can be derived with the aid of the maximum entropy principle, it can also be concluded that the maximum entropy principle is a more basic principle, one that embodies concepts more broadly and reveals the laws governing the motion of objects more fundamentally. At the same time, this principle also reveals the intrinsic link between Nature and the different objects of human society, and the principles they all obey.

  20. Extended q -Gaussian and q -exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q -Gaussian and q -exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q -Gaussian and q -exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q -Gaussian and modified q -exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
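
    The abstract notes that these densities also arise from superstatistical models. A minimal sketch of that route (not the paper's two-gamma construction) is an exponential variable whose rate is itself gamma distributed, which yields a q-exponential (Lomax) marginal; the survival function P(X > x) = (1 + θx)^(-k) follows from integrating out the gamma-distributed rate.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

shape, scale = 3.0, 0.5             # gamma parameters for the fluctuating rate (assumed)
rates = rng.gamma(shape, scale, n)  # superstatistical rate lambda
x = rng.exponential(1.0 / rates)    # X | lambda ~ Exp(lambda)

# Marginal survival: P(X > x) = (1 + scale*x)^(-shape), i.e. a Lomax / q-exponential tail.
grid = np.linspace(0, 20, 11)
empirical = [(x > g).mean() for g in grid]
theory = (1.0 + scale * grid) ** (-shape)
for g, e, t in zip(grid, empirical, theory):
    print(f"x={g:5.1f}  empirical={e:.4f}  Lomax={t:.4f}")
```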

  1. Parametric resonant triad interactions in a free shear layer

    NASA Technical Reports Server (NTRS)

    Mallier, R.; Maslowe, S. A.

    1993-01-01

    We investigate the weakly nonlinear evolution of a triad of nearly-neutral modes superimposed on a mixing layer with velocity profile u bar equals Um + tanh y. The perturbation consists of a plane wave and a pair of oblique waves each inclined at approximately 60 degrees to the mean flow direction. Because the evolution occurs on a relatively fast time scale, the critical layer dynamics dominate the process and the amplitude evolution of the oblique waves is governed by an integro-differential equation. The long-time solution of this equation predicts very rapid (exponential of an exponential) amplification and we discuss the pertinence of this result to vortex pairing phenomena in mixing layers.

  2. Not all nonnormal distributions are created equal: Improved theoretical and measurement precision.

    PubMed

    Joo, Harry; Aguinis, Herman; Bradley, Kyle J

    2017-07-01

    We offer a four-category taxonomy of individual output distributions (i.e., distributions of cumulative results): (1) pure power law; (2) lognormal; (3) exponential tail (including exponential and power law with an exponential cutoff); and (4) symmetric or potentially symmetric (including normal, Poisson, and Weibull). The four categories are uniquely associated with mutually exclusive generative mechanisms: self-organized criticality, proportionate differentiation, incremental differentiation, and homogenization. We then introduce distribution pitting, a falsification-based method for comparing distributions to assess how well each one fits a given data set. In doing so, we also introduce decision rules to determine the likely dominant shape and generative mechanism among many that may operate concurrently. Next, we implement distribution pitting using 229 samples of individual output for several occupations (e.g., movie directors, writers, musicians, athletes, bank tellers, call center employees, grocery checkers, electrical fixture assemblers, and wirers). Results suggest that for 75% of our samples, exponential tail distributions and their generative mechanism (i.e., incremental differentiation) likely constitute the dominant distribution shape and explanation of nonnormally distributed individual output. This finding challenges past conclusions indicating the pervasiveness of other types of distributions and their generative mechanisms. Our results further contribute to theory by offering premises about the link between past and future individual output. For future research, our taxonomy and methodology can be used to pit distributions of other variables (e.g., organizational citizenship behaviors). Finally, we offer practical insights on how to increase overall individual output and produce more top performers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Rainfall continuous time stochastic simulation for a wet climate in the Cantabric Coast

    NASA Astrophysics Data System (ADS)

    Rebole, Juan P.; Lopez, Jose J.; Garcia-Guzman, Adela

    2010-05-01

    Rain is the result of a series of complex atmospheric processes influenced by numerous factors. This complexity makes its simulation from a physical basis practically unfeasible, suggesting the use of stochastic schemes instead. These schemes, which are based on observed characteristics (Todorovic and Woolhiser, 1975), allow the introduction of alternating renewal processes that account for the occurrence of rainfall over different time lapses (Markov chains are a particular case, where the lapses can be described by exponential distributions). Thus, a sequential rainfall process can be defined as a temporal series in which rainfall events (periods in which rainfall is recorded) alternate with non-rain events (periods in which no rainfall is recorded). The variables of a temporal rain sequence (duration of the rainfall event, duration of the non-rainfall event, average rain intensity in the rain event, and the temporal distribution of the amount of rain within the rain event) have been characterized for a wet climate such as that of the coastal area of Guipúzcoa. The study was performed on two series recorded at the meteorological stations of Igueldo-San Sebastián and Fuenterrabia/Airport (data every ten minutes and their hourly aggregation). As a result of this work, the variables were satisfactorily fitted by the following distribution functions: the duration of the rain event by an exponential function; the duration of the dry event by a truncated mixed exponential distribution; the average intensity by a Weibull distribution; and the distribution of the rain depth by a Beta distribution. The characterization was made for an hourly aggregation of the recorded ten-minute data. The parameters of the fitting functions were obtained better by the maximum likelihood method than by the method of moments. The parameters obtained from the characterization were used to develop a stochastic rainfall simulation model by means of a three-state Markov chain (Hutchinson, 1990), implemented on an hourly basis by García-Guzmán (1993) and Castro et al. (1997, 2005). Simulation results were valid in the hourly case for all four described variables, with a slightly better response in Fuenterrabia than in Igueldo; in summary, all the variables were better simulated in Fuenterrabia. The Fuenterrabia data series is shorter and has longer sequences without missing data than that of Igueldo, which shows a higher number of missing-data events, although their mean duration is longer in Fuenterrabia.
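
    A minimal sketch of the alternating-renewal idea described above (simplified relative to the three-state Markov chain actually used): dry and wet spell durations are drawn from exponential distributions, and each wet hour receives a Weibull-distributed intensity. The parameter values are illustrative, not those fitted for Igueldo or Fuenterrabia.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_rainfall(hours, mean_dry=30.0, mean_wet=6.0, weib_shape=0.8, weib_scale=1.5):
    """Alternating renewal simulation: exponential spell lengths, Weibull hourly intensity (mm/h)."""
    rain = np.zeros(hours)
    t, wet = 0, False
    while t < hours:
        length = max(1, int(round(rng.exponential(mean_wet if wet else mean_dry))))
        if wet:
            span = min(length, hours - t)
            rain[t:t + span] = rng.weibull(weib_shape, span) * weib_scale
        t += length
        wet = not wet
    return rain

series = simulate_rainfall(24 * 365)
print("wet-hour fraction:", (series > 0).mean())
print("annual total (mm):", series.sum())
```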

  4. The correlated k-distribution technique as applied to the AVHRR channels

    NASA Technical Reports Server (NTRS)

    Kratz, David P.

    1995-01-01

    Correlated k-distributions have been created to account for the molecular absorption found in the spectral ranges of the five Advanced Very High Resolution Radiometer (AVHRR) satellite channels. The production of the k-distributions was based upon an exponential-sum fitting of transmissions (ESFT) technique applied to reference line-by-line absorptance calculations. To account for the overlap of spectral features from different molecular species, the present routines make use of the multiplicative transmissivity property, which allows for considerable flexibility, especially when altering the relative mixing ratios of the various molecular species. To determine the accuracy of the correlated k-distribution technique as compared to the line-by-line procedure, atmospheric flux and heating rate calculations were run for a wide variety of atmospheric conditions. For the atmospheric conditions taken into consideration, the correlated k-distribution technique has yielded results within about 0.5%, both for the cases where the satellite spectral response functions were applied and where they were not. The correlated k-distribution's principal advantage is that it can be incorporated directly into multiple scattering routines that consider scattering as well as absorption by clouds and aerosol particles.
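
    The exponential-sum fitting of transmissions (ESFT) step can be illustrated with a generic sketch, using a synthetic transmission curve rather than AVHRR line-by-line output: the transmission T(u) along absorber amount u is approximated by a weighted sum of exponentials, Σ w_i exp(-k_i u), with non-negative weights summing to one.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic "line-by-line" transmission curve (assumed, for illustration only).
u = np.linspace(0.0, 5.0, 60)                       # absorber amount
t_ref = 0.6 * np.exp(-0.3 * u) + 0.4 * np.exp(-3.0 * u)

def esft_error(params, n_terms):
    """Sum-of-squares misfit of an n-term exponential sum with softmax weights."""
    logits, ks = params[:n_terms], np.exp(params[n_terms:])   # ensures k_i > 0
    w = np.exp(logits) / np.exp(logits).sum()                 # weights sum to 1
    t_fit = (w[:, None] * np.exp(-ks[:, None] * u[None, :])).sum(axis=0)
    return ((t_fit - t_ref) ** 2).sum()

n_terms = 2
x0 = np.concatenate([np.zeros(n_terms), np.log([0.1, 1.0])])
res = minimize(esft_error, x0, args=(n_terms,), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-10, "fatol": 1e-12})
logits, ks = res.x[:n_terms], np.exp(res.x[n_terms:])
w = np.exp(logits) / np.exp(logits).sum()
print("weights:", np.round(w, 3), "k values:", np.round(ks, 3), "misfit:", res.fun)
```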

  5. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  6. Universality in stochastic exponential growth.

    PubMed

    Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R

    2014-07-11

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.

  7. Universality in Stochastic Exponential Growth

    NASA Astrophysics Data System (ADS)

    Iyer-Biswas, Srividya; Crooks, Gavin E.; Scherer, Norbert F.; Dinner, Aaron R.

    2014-07-01

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
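
    A rough two-species sketch of such an autocatalytic cycle (a simplification of the SHC, with assumed rate constants), simulated with the Gillespie algorithm, shows the total copy number growing exponentially on average.

```python
import numpy as np

def gillespie_cycle(a0=10, b0=10, k1=1.0, k2=1.0, t_end=5.0, seed=0):
    """Two-species autocatalytic cycle: A catalyses B production, B catalyses A production."""
    rng = np.random.default_rng(seed)
    t, a, b = 0.0, a0, b0
    times, totals = [0.0], [a0 + b0]
    while t < t_end:
        r1, r2 = k1 * a, k2 * b          # propensities: A -> A + B, B -> B + A
        total = r1 + r2
        t += rng.exponential(1.0 / total)
        if rng.uniform() < r1 / total:
            b += 1
        else:
            a += 1
        times.append(t)
        totals.append(a + b)
    return np.array(times), np.array(totals)

times, totals = gillespie_cycle()
# Exponential growth: log(total) should increase roughly linearly in time.
rate = np.polyfit(times, np.log(totals), 1)[0]
print("estimated growth rate ≈", round(rate, 2))   # close to sqrt(k1*k2) = 1 here
```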

  8. Conditional optimal spacing in exponential distribution.

    PubMed

    Park, Sangun

    2006-12-01

    In this paper, we propose the conditional optimal spacing defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take an example of exponential distribution, and provide a simple method of finding the conditional optimal spacing.

  9. Reliability and sensitivity analysis of a system with multiple unreliable service stations and standby switching failures

    NASA Astrophysics Data System (ADS)

    Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung

    2007-07-01

    This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for system reliability, RY(t), and mean time to system failure, MTTF are derived. Sensitivity analysis, relative sensitivity analysis of the system reliability and the mean time to failure, with respect to system parameters are also investigated.

  10. Emergence of power-law in a market with mixed models

    NASA Astrophysics Data System (ADS)

    Ali Saif, M.; Gade, Prashant M.

    2007-10-01

    We investigate the problem of wealth distribution from the viewpoint of asset exchange. The robust nature of Pareto's law across economies, ideologies and nations suggests that this could be an outcome of trading strategies. However, simple asset exchange models fail to reproduce this feature. A yard-sale (YS) model, in which the amount put on the bet is a fraction of the minimum of the two players' wealth, leads to condensation of wealth in the hands of a single agent, while a theft-and-fraud (TF) model, in which the amount to be exchanged is a fraction of the loser's wealth, leads to an exponential distribution of wealth. We show that if we allow a few agents to follow a different model than the others, i.e., some agents follow the TF model while the rest follow the YS model, the dynamics lead to a distribution with power-law tails. A similar effect is observed when one carries out transactions for a fraction of one's wealth using the TF model and for the rest uses the YS model. We also observe a power-law tail in the wealth distribution if we allow the agents to follow either of the models with some probability.
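
    The two exchange rules described above are easy to simulate as a sketch (parameter values assumed): in each trade the amount at stake is a fraction of the poorer agent's wealth (YS) or of the loser's wealth (TF), and letting a small minority of agents trade by the TF rule while the rest follow YS changes the shape of the wealth distribution.

```python
import numpy as np

def simulate(n_agents=1000, n_trades=200_000, frac=0.1, tf_share=0.05, seed=0):
    """Pairwise exchange: a fraction tf_share of agents follow the TF rule, the rest YS."""
    rng = np.random.default_rng(seed)
    wealth = np.ones(n_agents)
    uses_tf = rng.uniform(size=n_agents) < tf_share
    for _ in range(n_trades):
        i, j = rng.choice(n_agents, size=2, replace=False)
        winner, loser = (i, j) if rng.uniform() < 0.5 else (j, i)
        if uses_tf[winner] or uses_tf[loser]:
            stake = frac * wealth[loser]                       # theft-and-fraud rule
        else:
            stake = frac * min(wealth[i], wealth[j])           # yard-sale rule
        wealth[winner] += stake
        wealth[loser] -= stake
    return wealth

w = simulate()
print("top 1% share of total wealth:", np.sort(w)[-10:].sum() / w.sum())
```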

  11. Smooth centile curves for skew and kurtotic data modelled using the Box-Cox power exponential distribution.

    PubMed

    Rigby, Robert A; Stasinopoulos, D Mikis

    2004-10-15

    The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^ν having a shifted and scaled (truncated) standard power exponential distribution with parameter τ. The distribution has four parameters and is denoted BCPE(μ, σ, ν, τ). The parameters μ, σ, ν and τ may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood with respect to μ, σ, ν and τ, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation provides a generalization of the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution, and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.

  12. Power law incidence rate in epidemic models. Comment on: "Mathematical models to characterize early epidemic growth: A review" by Gerardo Chowell et al.

    NASA Astrophysics Data System (ADS)

    Allen, Linda J. S.

    2016-09-01

    Dr. Chowell and colleagues emphasize the importance of considering a variety of modeling approaches to characterize the growth of an epidemic during the early stages [1]. A fit of data from the 2009 H1N1 influenza pandemic and the 2014-2015 Ebola outbreak to models indicates sub-exponential growth, in contrast to the classic, homogeneous-mixing SIR model with exponential growth. With incidence rate βSI / N and S approximately equal to the total population size N, the number of new infections in an SIR epidemic model grows exponentially as in the differential equation,
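
    The record above is truncated before the equation itself; the contrast between exponential and sub-exponential early growth can nonetheless be illustrated with the generalized-growth form C'(t) = r C(t)^p considered in that literature, where p = 1 recovers exponential growth and p < 1 gives sub-exponential growth. The sketch below integrates both cases numerically with assumed parameter values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def generalized_growth(t, c, r, p):
    """C'(t) = r * C^p: p = 1 is exponential, p < 1 is sub-exponential growth."""
    return r * c**p

t_eval = np.linspace(0, 30, 7)
for p in (1.0, 0.7):
    sol = solve_ivp(generalized_growth, (0, 30), [1.0], args=(0.2, p), t_eval=t_eval)
    print(f"p={p}: cumulative cases ≈", np.round(sol.y[0], 1))
```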

  13. Global exponential stability of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays.

    PubMed

    Huang, Haiying; Du, Qiaosheng; Kang, Xibing

    2013-11-01

    In this paper, a class of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays is investigated. The jumping parameters are modeled as a continuous-time finite-state Markov chain. At first, the existence of equilibrium point for the addressed neural networks is studied. By utilizing the Lyapunov stability theory, stochastic analysis theory and linear matrix inequality (LMI) technique, new delay-dependent stability criteria are presented in terms of linear matrix inequalities to guarantee the neural networks to be globally exponentially stable in the mean square. Numerical simulations are carried out to illustrate the main results. © 2013 ISA. Published by ISA. All rights reserved.

  14. The shock waves in decaying supersonic turbulence

    NASA Astrophysics Data System (ADS)

    Smith, M. D.; Mac Low, M.-M.; Zuev, J. M.

    2000-04-01

    We here analyse numerical simulations of supersonic, hypersonic and magnetohydrodynamic turbulence that is free to decay. Our goals are to understand the dynamics of the decay and the characteristic properties of the shock waves produced. This will be useful for interpretation of observations of both motions in molecular clouds and sources of non-thermal radiation. We find that decaying hypersonic turbulence possesses an exponential tail of fast shocks and an exponential decay in time, i.e. the number of shocks is proportional to t exp (-ktv) for shock velocity jump v and mean initial wavenumber k. In contrast to the velocity gradients, the velocity Probability Distribution Function remains Gaussian with a more complex decay law. The energy is dissipated not by fast shocks but by a large number of low Mach number shocks. The power loss peaks near a low-speed turn-over in an exponential distribution. An analytical extension of the mapping closure technique is able to predict the basic decay features. Our analytic description of the distribution of shock strengths should prove useful for direct modeling of observable emission. We note that an exponential distribution of shocks such as we find will, in general, generate very low excitation shock signatures.

  15. Universal patterns of inequality

    NASA Astrophysics Data System (ADS)

    Banerjee, Anand; Yakovenko, Victor M.

    2010-07-01

    Probability distributions of money, income and energy consumption per capita are studied for ensembles of economic agents. The principle of entropy maximization for partitioning of a limited resource gives exponential distributions for the investigated variables. A non-equilibrium difference of money temperatures between different systems generates net fluxes of money and population. To describe income distribution, a stochastic process with additive and multiplicative components is introduced. The resultant distribution interpolates between exponential at the low end and power law at the high end, in agreement with the empirical data for the USA. We show that the increase in income inequality in the USA originates primarily from the increase in the income fraction going to the upper tail, which now exceeds 20% of the total income. Analyzing the data from the World Resources Institute, we find that the distribution of energy consumption per capita around the world can be approximately described by the exponential function. Comparing the data for 1990, 2000 and 2005, we discuss the effect of globalization on the inequality of energy consumption.

  16. Study on probability distributions for evolution in modified extremal optimization

    NASA Astrophysics Data System (ADS)

    Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian

    2010-05-01

    It is widely believed that the power law is a proper probability distribution for driving evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to some NP-hard problems, e.g., graph partitioning, graph coloring, spin glasses, etc. In this study, we find that exponential distributions or hybrid ones (e.g., power laws with exponential cutoff), which are popular in network science research, may replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods, such as simulated annealing, τ-EO and SOA, etc., according to experimental results on random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results appear to demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions may be other choices.
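
    The difference discussed above comes down to the probability placed on fitness ranks when choosing which component to mutate; a hedged sketch (not the paper's SOA implementation) compares power-law and exponential rank-selection probabilities for n ranked components.

```python
import numpy as np

def rank_probabilities(n, kind="power", tau=1.4, lam=0.1):
    """Selection probability over fitness ranks k = 1..n (rank 1 = worst component)."""
    k = np.arange(1, n + 1, dtype=float)
    weights = k**(-tau) if kind == "power" else np.exp(-lam * k)
    return weights / weights.sum()

def pick_rank(n, rng, **kwargs):
    """Sample the rank of the component to be mutated, EO-style."""
    p = rank_probabilities(n, **kwargs)
    return rng.choice(np.arange(1, n + 1), p=p)

rng = np.random.default_rng(5)
n = 100
for kind in ("power", "exponential"):
    picks = [pick_rank(n, rng, kind=kind) for _ in range(10_000)]
    print(f"{kind:11s}: mean selected rank = {np.mean(picks):.1f}")
```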

  17. A fractal process of hydrogen diffusion in a-Si:H with exponential energy distribution

    NASA Astrophysics Data System (ADS)

    Hikita, Harumi; Ishikawa, Hirohisa; Morigaki, Kazuo

    2017-04-01

    Hydrogen diffusion in a-Si:H with an exponential distribution of the states in energy exhibits a fractal structure. It is shown that the probability P(t) of the pausing time t has the form t^α (α: fractal dimension). It is shown that the fractal dimension α = T_r/T_0 (T_r: hydrogen temperature, T_0: a temperature corresponding to the width of the exponential distribution of the states in energy) is in agreement with the Hausdorff dimension. The fractal graph for the case of α ≤ 1 is like the Cantor set. The fractal graph for the case of α > 1 is like the Koch curves. At α = ∞, hydrogen migration exhibits Brownian motion. Hydrogen diffusion in a-Si:H should therefore be a fractal process.

  18. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

    Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
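
    The setup lends itself to a Monte Carlo check (a sketch, not the paper's closed-form expressions): the sampling-window start time is drawn uniformly over an assumed range, the integrated intensity of the exponentially decaying pulse over the window gives the Poisson mean, and the resulting counts are tallied.

```python
import numpy as np

rng = np.random.default_rng(9)

I0, tau, T = 5.0, 1.0, 0.5       # peak rate, decay time, counting window (assumed values)
start_max = 5.0 * tau            # assumed range for the uniformly distributed start time

n_trials = 200_000
s = rng.uniform(0.0, start_max, n_trials)
# Integrated intensity over [s, s+T] for I(t) = I0 * exp(-t/tau).
w = I0 * tau * (np.exp(-s / tau) - np.exp(-(s + T) / tau))
counts = rng.poisson(w)

values, freq = np.unique(counts, return_counts=True)
for v, f in zip(values[:6], freq[:6]):
    print(f"P(n = {v}) ≈ {f / n_trials:.4f}")
print("mean =", counts.mean(), " variance =", counts.var())
```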

  19. Numerical and machine learning simulation of parametric distributions of groundwater residence time in streams and wells

    NASA Astrophysics Data System (ADS)

    Starn, J. J.; Belitz, K.; Carlson, C.

    2017-12-01

    Groundwater residence-time distributions (RTDs) are critical for assessing susceptibility of water resources to contamination. This novel approach for estimating regional RTDs was to first simulate groundwater flow using existing regional digital data sets in 13 intermediate size watersheds (each an average of 7,000 square kilometers) that are representative of a wide range of glacial systems. RTDs were simulated with particle tracking. We refer to these models as "general models" because they are based on regional, as opposed to site-specific, digital data. Parametric RTDs were created from particle RTDs by fitting 1- and 2-component Weibull, gamma, and inverse Gaussian distributions, thus reducing a large number of particle travel times to 3 to 7 parameters (shape, location, and scale for each component plus a mixing fraction) for each modeled area. The scale parameter of these distributions is related to the mean exponential age; the shape parameter controls departure from the ideal exponential distribution and is partly a function of interaction with bedrock and with drainage density. Given the flexible shape and mathematical similarity of these distributions, any of them are potentially a good fit to particle RTDs. The 1-component gamma distribution provided a good fit to basin-wide particle RTDs. RTDs at monitoring wells and streams often have more complicated shapes than basin-wide RTDs, caused in part by heterogeneity in the model, and generally require 2-component distributions. A machine learning model was trained on the RTD parameters using features derived from regionally available watershed characteristics such as recharge rate, material thickness, and stream density. RTDs appeared to vary systematically across the landscape in relation to watershed features. This relation was used to produce maps of useful metrics with respect to risk-based thresholds, such as the time to first exceedance, time to maximum concentration, time above the threshold (exposure time), and the time until last exceedance; thus, the parameters of groundwater residence time are measures of the intrinsic susceptibility of groundwater to contamination.
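
    The parameter-reduction step described above can be sketched generically: given a set of particle travel times (synthetic here, not output from the general models), a one-component gamma RTD is fitted by maximum likelihood, and its parameters then support simple risk-based metrics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic particle travel times (years); a stand-in for particle-tracking output.
ages = rng.gamma(shape=0.8, scale=25.0, size=5000)

# Fit a gamma RTD with location fixed at zero (returns shape, loc, scale).
shape, loc, scale = stats.gamma.fit(ages, floc=0.0)
print(f"fitted shape = {shape:.2f}, scale = {scale:.1f}, mean age = {shape * scale:.1f} years")

# Example risk-based metric: fraction of flow younger than a threshold age.
threshold = 10.0
print("fraction younger than 10 years:", stats.gamma.cdf(threshold, shape, loc=0.0, scale=scale))
```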

  20. Three-Dimensional Flow of Nanofluid Induced by an Exponentially Stretching Sheet: An Application to Solar Energy

    PubMed Central

    Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.

    2015-01-01

    This work deals with the three-dimensional flow of nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as Keller-box method. The results are compared with the existing studies in some limiting cases and found in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills for temperature distribution corresponding to some range of parametric values. PMID:25785857

  1. Velocity distributions of granular gases with drag and with long-range interactions.

    PubMed

    Kohlstedt, K; Snezhko, A; Sapozhnikov, M V; Aranson, I S; Olafsen, J S; Ben-Naim, E

    2005-08-05

    We study velocity statistics of electrostatically driven granular gases. For two different experiments, (i) nonmagnetic particles in a viscous fluid and (ii) magnetic particles in air, the velocity distribution is non-Maxwellian, and its high-energy tail is exponential, P(upsilon) approximately exp(-/upsilon/). This behavior is consistent with the kinetic theory of driven dissipative particles. For particles immersed in a fluid, viscous damping is responsible for the exponential tail, while for magnetic particles, long-range interactions cause the exponential tail. We conclude that velocity statistics of dissipative gases are sensitive to the fluid environment and to the form of the particle interaction.

  2. Exponential order statistic models of software reliability growth

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1985-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples as well. Various characterizations, properties and examples of this class of models are developed and presented.
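
    The class described above is easy to simulate as a sketch: draw independent exponential detection times with (possibly different) rates, sort them, and the sorted values are the successive failure times; identical rates correspond to a Jelinski-Moranda-like special case.

```python
import numpy as np

def eos_failure_times(rates, rng):
    """Failure times as order statistics of independent exponential detection times."""
    detection_times = rng.exponential(1.0 / np.asarray(rates, dtype=float))
    return np.sort(detection_times)

rng = np.random.default_rng(4)

# Jelinski-Moranda-like case: N = 20 faults, identical rates.
print(eos_failure_times(np.full(20, 0.05), rng)[:5])

# Non-identical rates (e.g., some faults much easier to trigger than others).
rates = np.linspace(0.01, 0.5, 20)
print(eos_failure_times(rates, rng)[:5])
```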

  3. Water repellent properties of dispersed metals containing low-dimensional forms of ammonium compounds on the surface

    NASA Astrophysics Data System (ADS)

    Syrkov, A. G.; Kabirov, V. R.; Silivanov, M. O.

    2017-07-01

    For the first time, the change of the water repellent properties of dispersed copper modified using quaternary ammonium compounds (QAC) was studied on a 24 h time scale in saturated water vapours. Exponential time dependences of the water repellent properties of dispersed copper with adsorbed QAC were derived and characterized. It was established that the samples modified in mixed and consistent modes by both modifiers reach the saturation state faster than others, due to the small number of hydrophilic centers on the surface of the metals. The last conclusion was confirmed by the distribution spectra of adsorption centers, which were obtained by the adsorption of acid-base indicators for more dispersed samples based on aluminum powder.

  4. A monitoring tool for performance improvement in plastic surgery at the individual level.

    PubMed

    Maruthappu, Mahiben; Duclos, Antoine; Orgill, Dennis; Carty, Matthew J

    2013-05-01

    The assessment of performance in surgery is expanding significantly. Application of relevant frameworks to plastic surgery, however, has been limited. In this article, the authors present two robust graphic tools commonly used in other industries that may serve to monitor individual surgeon operative time while factoring in patient- and surgeon-specific elements. The authors reviewed performance data from all bilateral reduction mammaplasties performed at their institution by eight surgeons between 1995 and 2010. Operative time was used as a proxy for performance. Cumulative sum charts and exponentially weighted moving average charts were generated using a train-test analytic approach, and used to monitor surgical performance. Charts mapped crude, patient case-mix-adjusted, and case-mix and surgical-experience-adjusted performance. Operative time was found to decline from 182 minutes to 118 minutes with surgical experience (p < 0.001). Cumulative sum and exponentially weighted moving average charts were generated using 1995 to 2007 data (1053 procedures) and tested on 2008 to 2010 data (246 procedures). The sensitivity and accuracy of these charts were significantly improved by adjustment for case mix and surgeon experience. The consideration of patient- and surgeon-specific factors is essential for correct interpretation of performance in plastic surgery at the individual surgeon level. Cumulative sum and exponentially weighted moving average charts represent accurate methods of monitoring operative time to control and potentially improve surgeon performance over the course of a career.
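
    For readers unfamiliar with the charting technique, the sketch below computes an exponentially weighted moving average chart for a hypothetical series of operative times; the smoothing constant, control-limit width, and data are illustrative, not the authors' values.

```python
# Sketch: exponentially weighted moving average (EWMA) chart for operative times.
import numpy as np

def ewma_chart(x, lam=0.1, target=None, sigma=None, L=3.0):
    """Return EWMA values and lower/upper control limits for a 1-D series."""
    x = np.asarray(x, dtype=float)
    target = x.mean() if target is None else target
    sigma = x.std(ddof=1) if sigma is None else sigma
    z = np.empty_like(x)
    z_prev = target
    for i, xi in enumerate(x):
        z_prev = lam * xi + (1.0 - lam) * z_prev   # EWMA recursion
        z[i] = z_prev
    n = np.arange(1, x.size + 1)
    half_width = L * sigma * np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * n)))
    return z, target - half_width, target + half_width

op_times = [182, 175, 160, 158, 150, 149, 140, 133, 125, 118]   # hypothetical minutes
z, lcl, ucl = ewma_chart(op_times)
print("EWMA:", np.round(z, 1))
print("out-of-control points:", np.flatnonzero((z < lcl) | (z > ucl)))
```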

  5. A spatial scan statistic for survival data based on Weibull distribution.

    PubMed

    Bhatt, Vijaya; Tiwari, Neeraj

    2014-05-20

    The spatial scan statistic has been developed as a geographical cluster detection analysis tool for different types of data sets such as Bernoulli, Poisson, ordinal, normal and exponential. We propose a scan statistic for survival data based on Weibull distribution. It may also be used for other survival distributions, such as exponential, gamma, and log normal. The proposed method is applied on the survival data of tuberculosis patients for the years 2004-2005 in Nainital district of Uttarakhand, India. Simulation studies reveal that the proposed method performs well for different survival distribution functions. Copyright © 2013 John Wiley & Sons, Ltd.

  6. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868
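
    A minimal sketch of the continuous-time rescaling check that this paper starts from is given below (assumptions: a constant-rate toy model, 5 ms bins, and the scipy KS test). With bins this coarse the test tends to reject even though the spikes were simulated from the very model being tested, which is the finite-resolution bias the two proposed corrections address.

```python
# Sketch: continuous-time rescaling goodness-of-fit check applied to a binned spike train.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
dt = 0.005                                    # 5 ms bins
rate = 20.0 * np.ones(200_000)                # toy model: constant 20 Hz conditional intensity
p = 1.0 - np.exp(-rate * dt)                  # per-bin spike probability under the model
spikes = rng.random(p.size) < p               # simulate spikes from that same model

# Rescale ISIs by integrating the conditional intensity between successive spikes.
cum = np.concatenate(([0.0], np.cumsum(rate * dt)))
spike_bins = np.flatnonzero(spikes)
taus = np.diff(cum[spike_bins])               # ~Exp(1) in the continuous-time limit

ks_stat, p_value = stats.kstest(taus, "expon")
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.2e}")
```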

  7. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.

  8. Discrete Deterministic and Stochastic Petri Nets

    NASA Technical Reports Server (NTRS)

    Zijal, Robert; Ciardo, Gianfranco

    1996-01-01

    Petri nets augmented with timing specifications gained a wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
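
    The approximation mentioned above can be checked numerically; the following sketch (illustrative rate and step size, not from the paper) matches an exponentially distributed firing time with a geometric distribution on a discrete time step.

```python
# Sketch: approximate an exponential firing time by a geometric distribution on step delta.
import numpy as np

lam, delta = 2.0, 0.01                        # firing rate and discretization step (illustrative)
p = 1.0 - np.exp(-lam * delta)                # probability of firing within one step

rng = np.random.default_rng(3)
geometric_times = delta * rng.geometric(p, size=100_000)      # k * delta, k = 1, 2, ...
exponential_times = rng.exponential(1.0 / lam, size=100_000)

print("mean (geometric approximation):", geometric_times.mean())   # ~ delta / p -> 1/lam as delta -> 0
print("mean (exponential)            :", exponential_times.mean()) # ~ 1/lam = 0.5
```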

  9. Mixed boundary-value problem for an orthotropic rectangular strip with variable coefficients of elasticity

    NASA Astrophysics Data System (ADS)

    Sargsyan, M. Z.; Poghosyan, H. M.

    2018-04-01

    A dynamical problem for a rectangular strip with variable coefficients of elasticity is solved by an asymptotic method. It is assumed that the strip is orthotropic, the elasticity coefficients are exponential functions of y, and mixed boundary conditions are posed. The solution of the inner problem is obtained using Bessel functions.

  10. Phase mixing of Alfvén waves in axisymmetric non-reflective magnetic plasma configurations

    NASA Astrophysics Data System (ADS)

    Petrukhin, N. S.; Ruderman, M. S.; Shurgalina, E. G.

    2018-02-01

    We study damping of phase-mixed Alfvén waves propagating in non-reflective axisymmetric magnetic plasma configurations. We derive the general equation describing the attenuation of the Alfvén wave amplitude. We then apply the general theory to a particular case with exponentially divergent magnetic field lines. The condition that the configuration is non-reflective determines the variation of the plasma density along the magnetic field lines. Density profiles that decrease exponentially with height are not among the non-reflective density profiles. However, we managed to find non-reflective profiles that fairly well approximate an exponentially decreasing density. We calculate the variation of the total wave energy flux with height for various values of shear viscosity. We find that to have a substantial amount of wave energy dissipated in the lower corona, one needs to increase the shear viscosity by seven orders of magnitude in comparison with the value given by classical plasma theory. An important result is that the efficiency of the wave damping strongly depends on the density variation with height: the stronger the density decrease, the weaker the wave damping. On the basis of this result, we suggest a physical explanation of the phenomenon of enhanced wave damping in equilibrium configurations with exponentially diverging magnetic field lines.

  11. Exponential blocking-temperature distribution in ferritin extracted from magnetization measurements

    NASA Astrophysics Data System (ADS)

    Lee, T. H.; Choi, K.-Y.; Kim, G.-H.; Suh, B. J.; Jang, Z. H.

    2014-11-01

    We developed a direct method to extract the zero-field zero-temperature anisotropy energy barrier distribution of magnetic particles in the form of a blocking-temperature distribution. The key idea is to modify measurement procedures slightly to make nonequilibrium magnetization calculations (including the time evolution of magnetization) easier. We applied this method to the biomagnetic molecule ferritin and successfully reproduced field-cool magnetization by using the extracted distribution. We find that the resulting distribution is more like an exponential type and that the distribution cannot be correlated simply to the widely known log-normal particle-size distribution. The method also allows us to determine the values of the zero-temperature coercivity and Bloch coefficient, which are in good agreement with those determined from other techniques.

  12. AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Sanjib; Bland-Hawthorn, Joss

    2013-08-20

    An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.

  13. Exponential Boundary Observers for Pressurized Water Pipe

    NASA Astrophysics Data System (ADS)

    Hermine Som, Idellette Judith; Cocquempot, Vincent; Aitouche, Abdel

    2015-11-01

    This paper deals with state estimation on a pressurized water pipe modeled by nonlinear coupled distributed hyperbolic equations for non-conservative laws with three known boundary measures. Our objective is to estimate the fourth boundary variable, which will be useful for leakage detection. Two approaches are studied. Firstly, the distributed hyperbolic equations are discretized through a finite-difference scheme. By using the Lipschitz property of the nonlinear term and a Lyapunov function, the exponential stability of the estimation error is proven by solving Linear Matrix Inequalities (LMIs). Secondly, the distributed hyperbolic system is preserved for state estimation. After state transformations, a Luenberger-like PDE boundary observer based on backstepping mathematical tools is proposed. An exponential Lyapunov function is used to prove the stability of the resulting estimation error. The performance of the two observers is shown on a simulated water pipe prototype example.

  14. Parameter estimation for the exponential-normal convolution model for background correction of affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
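
    A minimal sketch of the background-correction step under this convolution model is given below, assuming the usual closed-form conditional expectation for an exponential signal plus normal noise; the parameter values (mu, sigma, alpha) and PM intensities are invented, and in practice they would come from one of the estimators the paper compares.

```python
# Sketch: background correction under the exponential (signal) + normal (noise) convolution model.
import numpy as np
from scipy.stats import norm

def background_correct(o, mu, sigma, alpha):
    """E[signal | observed PM = o] for signal ~ Exp(rate=alpha) and noise ~ N(mu, sigma^2)."""
    a = o - mu - sigma**2 * alpha
    b = sigma
    num = norm.pdf(a / b) - norm.pdf((o - a) / b)
    den = norm.cdf(a / b) + norm.cdf((o - a) / b) - 1.0
    return a + b * num / den

pm = np.array([80.0, 120.0, 300.0, 1500.0])   # hypothetical PM intensities
print(background_correct(pm, mu=90.0, sigma=25.0, alpha=0.01))
```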

  15. Phenomenology of stochastic exponential growth

    NASA Astrophysics Data System (ADS)

    Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya

    2017-06-01

    Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
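
    A hedged sketch of the model class discussed here follows: an Euler-Maruyama simulation of dX = mu*X dt + sigma*X^gamma dW, where gamma = 1 recovers geometric Brownian motion and a fractional gamma gives power-law multiplicative noise; all parameter values are illustrative.

```python
# Sketch: exponential growth with power-law multiplicative noise, dX = mu*X dt + sigma*X**gamma dW.
import numpy as np

def simulate(mu=0.02, sigma=0.1, gamma=0.5, x0=1.0, dt=0.01, n_steps=2000, n_traj=5000, seed=4):
    rng = np.random.default_rng(seed)
    x = np.full(n_traj, x0)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_traj)
        x = x + mu * x * dt + sigma * x**gamma * dw   # Euler-Maruyama step
        x = np.maximum(x, 1e-12)                      # keep the crude scheme positive
    return x

x_end = simulate()
rescaled = x_end / x_end.mean()                       # mean-rescaled sizes at the final time
print("coefficient of variation of mean-rescaled sizes:", round(float(rescaled.std()), 3))
```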

  16. Use of chemical and isotopic tracers to assess nitrate contamination and ground-water age, Woodville Karst Plain, USA

    USGS Publications Warehouse

    Katz, B.G.; Chelette, A.R.; Pratt, T.R.

    2004-01-01

    Concerns regarding ground-water contamination in the Woodville Karst Plain have arisen due to a steady increase in nitrate-N concentrations (0.25-0.90 mg/l) during the past 30 years in Wakulla Springs, a large regional discharge point for water (9.6 m3/s) from the Upper Floridan aquifer (UFA). Multiple isotopic and chemical tracers were used with geochemical and lumped-parameter models (exponential mixing (EM), dispersion, and combined exponential piston flow) to assess: (1) the sources and extent of nitrate contamination of ground water and springs, and (2) mean transit times (ages) of ground water. δ15N-NO3 values (1.7-13.8‰) indicated that nitrate in ground water originated from localized sources of inorganic fertilizer and human/animal wastes. Nitrate in spring waters (δ15N-NO3 = 5.3-8.9‰) originated from both inorganic and organic N sources. Nitrate-N concentrations (1.0 mg/l) were associated with shallow wells (open intervals less than 15 m below land surface); elevated nitrate concentrations in deeper wells are consistent with mixtures of water from shallow and deep zones in the UFA as indicated from geochemical mixing models and the distribution of mean transit times (5-90 years) estimated using lumped-parameter flow models. Ground water with mean transit times of 10 years or less tended to have higher dissolved organic carbon concentrations, lower dissolved solids, and lower calcite saturation indices than older waters, indicating mixing with nearby surface water that directly recharges the aquifer through sinkholes. Significantly higher values of pH, magnesium, dolomite saturation index, and phosphate in springs and deep water (>45 m) relative to a shallow zone (<45 m) were associated with longer ground-water transit times (50-90 years). Chemical differences with depth in the aquifer result from deep regional flow of water recharged through low permeability sediments (clays and clayey sands of the Hawthorn Formation) that overlie the UFA upgradient from the karst plain.

  17. 1/f oscillations in a model of moth populations oriented by diffusive pheromones

    NASA Astrophysics Data System (ADS)

    Barbosa, L. A.; Martins, M. L.; Lima, E. R.

    2005-01-01

    An individual-based model for the population dynamics of Spodoptera frugiperda in a homogeneous environment is proposed. The model involves moths feeding on plants, mating through an anemotaxis search (i.e., oriented by odor dispersed in a current of air), and dying due to resource competition or at a maximum age. As observed in the laboratory, the females release pheromones at exponentially distributed time intervals, and it is assumed that the ranges of the male flights follow a power-law distribution. Computer simulations of the model reveal the central role of the anemotaxis search for the persistence of the moth population. Such stationary populations are exponentially distributed in age, exhibit random temporal fluctuations with a 1/f spectrum, and self-organize into disordered spatial patterns with long-range correlations. In addition, the model results demonstrate that pest control through pheromone mass trapping is effective only if the amounts of pheromone released by the traps decay much more slowly than the exponential distribution for calling females.

  18. Posterior propriety for hierarchical models with log-likelihoods that have norm bounds

    DOE PAGES

    Michalak, Sarah E.; Morris, Carl N.

    2015-07-17

    Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs, which is often verifiable via log-concavity and MLE finiteness, our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).

  19. Exploring conservative islands using correlated and uncorrelated noise

    NASA Astrophysics Data System (ADS)

    da Silva, Rafael M.; Manchein, Cesar; Beims, Marcus W.

    2018-02-01

    In this work, noise is used to analyze the penetration of regular islands in conservative dynamical systems. For this purpose we use the standard map choosing nonlinearity parameters for which a mixed phase space is present. The random variable which simulates noise assumes three distributions, namely equally distributed, normal or Gaussian, and power law (obtained from the same standard map but for other parameters). To investigate the penetration process and explore distinct dynamical behaviors which may occur, we use recurrence time statistics (RTS), Lyapunov exponents and the occupation rate of the phase space. Our main findings are as follows: (i) the standard deviations of the distributions are the most relevant quantity to induce the penetration; (ii) the penetration of islands induce power-law decays in the RTS as a consequence of enhanced trapping; (iii) for the power-law correlated noise an algebraic decay of the RTS is observed, even though sticky motion is absent; and (iv) although strong noise intensities induce an ergodic-like behavior with exponential decays of RTS, the largest Lyapunov exponent is reminiscent of the regular islands.

  20. Asteroid taxonomy and the distribution of the compositional types

    NASA Technical Reports Server (NTRS)

    Zellner, B.

    1979-01-01

    Physical observations of minor planets documented in the TRIAD computer file are used to classify 752 objects into the broad compositional types C, S, M, E, R, and U (unclassifiable) according to the prescriptions adopted by Bowell et al. (1978). Diameters are computed from the photometric magnitude using radiometric and/or polarimetric data where available, or else from albedos characteristic of the indicated type. An analysis of the observational selection effects leads to tabulation of the actual number of asteroids, as a function of type and diameter, in each of 15 orbital element zones. For the whole main belt the population is 75% of type C, 15% of type S, and 10% of other types, with no belt-wide dependence of the mixing ratios on diameter. In some zones the logarithmic diameter-frequency relations are decidedly nonlinear. The relative frequency of S-type objects decreases smoothly outward through the main belt, with exponential scale length 0.5 AU. The rarer types show a more chaotic, but generally flatter, distribution over distance. Characteristic type distributions, contrasting with the background population, are found for the Eos, Koronis, Nysa and Themis families.

  1. A study on some urban bus transport networks

    NASA Astrophysics Data System (ADS)

    Chen, Yong-Zhou; Li, Nan; He, Da-Ren

    2007-03-01

    In this paper, we present the results of an empirical investigation of the urban bus transport networks (BTNs) of four major cities in China. In a BTN, nodes are bus stops, and two nodes are connected by an edge when the stops are serviced by a common bus route. The empirical results show that the degree distributions of BTNs take exponential functional forms. Two other statistical properties of BTNs are also considered, namely the distributions of the so-called “number of stops in a bus route” (represented by S) and “number of bus routes a stop joins” (by R). The distributions of R also show exponential forms, while the distributions of S follow asymmetric, unimodal functions. To explain these empirical results and simulate a possible evolution process of a BTN, we introduce a model whose analytic and numerical results agree well with the empirical facts. Finally, we also discuss some other possible evolution cases, where the degree distribution shows a power law or an interpolation between the power law and the exponential decay.
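
    The network construction used here is easy to reproduce on toy data; the sketch below (hypothetical routes, not the authors' data) connects any two stops that share a route and tabulates the resulting degree distribution.

```python
# Sketch: build a bus-transport network in which stops sharing a route are connected,
# then tabulate its degree distribution.
from itertools import combinations
from collections import Counter, defaultdict

routes = [                                      # hypothetical bus routes (lists of stop names)
    ["A", "B", "C", "D"],
    ["C", "D", "E", "F", "G"],
    ["A", "G", "H"],
]

neighbors = defaultdict(set)
for route in routes:
    for u, v in combinations(set(route), 2):    # every pair of stops on a common route
        neighbors[u].add(v)
        neighbors[v].add(u)

degree_counts = Counter(len(adj) for adj in neighbors.values())
for k in sorted(degree_counts):
    print(f"degree {k}: {degree_counts[k]} stops")
```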

  2. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    NASA Astrophysics Data System (ADS)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both the theoretical derivation and the data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for this distribution form and discussed how the constraints affect the distribution function. It is speculated that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, the distribution cannot be a pure power law but must carry an exponential cutoff, which may have been ignored in previous studies.

  3. Compatible estimators of the components of change for a rotating panel forest inventory design

    Treesearch

    Francis A. Roesch

    2007-01-01

    This article presents two approaches for estimating the components of forest change utilizing data from a rotating panel sample design. One approach uses a variant of the exponentially weighted moving average estimator and the other approach uses mixed estimation. Three general transition models were each combined with a single compatibility model for the mixed...

  4. Fermion masses and mixing in general warped extra dimensional models

    NASA Astrophysics Data System (ADS)

    Frank, Mariana; Hamzaoui, Cherif; Pourtolami, Nima; Toharia, Manuel

    2015-06-01

    We analyze fermion masses and mixing in a general warped extra dimensional model, where all the Standard Model (SM) fields, including the Higgs, are allowed to propagate in the bulk. In this context, a slightly broken flavor symmetry imposed universally on all fermion fields, without distinction, can generate the full flavor structure of the SM, including quarks, charged leptons and neutrinos. For quarks and charged leptons, the exponential sensitivity of their wave functions to small flavor breaking effects yields hierarchical masses and mixing, as is usual in warped models with fermions in the bulk. In the neutrino sector, the exponential wave-function factors can be flavor blind and thus insensitive to the small flavor symmetry breaking effects, directly linking their masses and mixing angles to the flavor symmetric structure of the five-dimensional neutrino Yukawa couplings. The Higgs must be localized in the bulk, and the model is more successful in generalized warped scenarios where the metric background solution differs from five-dimensional anti-de Sitter (AdS5). We study these features in two simple frameworks, flavor complementarity and flavor democracy, which provide specific predictions and correlations between quarks and leptons, testable as more precise data in the neutrino sector becomes available.

  5. Analysis of the Chinese air route network as a complex network

    NASA Astrophysics Data System (ADS)

    Cai, Kai-Quan; Zhang, Jun; Du, Wen-Bo; Cao, Xian-Bin

    2012-02-01

    The air route network, which supports all the flight activities of civil aviation, is the most fundamental infrastructure of the air traffic management system. In this paper, we study the Chinese air route network (CARN) within the framework of complex networks. We find that CARN is a geographical network possessing an exponential degree distribution, low clustering coefficient, large shortest path length and an exponential spatial distance distribution, which is clearly different from that of the Chinese airport network (CAN). Besides, by investigating the flight data from 2002 to 2010, we demonstrate that the topological structure of CARN is homogeneous, whereas the distribution of flight flow on CARN is rather heterogeneous. In addition, the traffic on CARN keeps growing in an exponential form, and the growth in west China is remarkably faster than that in east China. Our work will be helpful to better understand Chinese air traffic systems.

  6. Colloquium: Statistical mechanics of money, wealth, and income

    NASA Astrophysics Data System (ADS)

    Yakovenko, Victor M.; Rosser, J. Barkley, Jr.

    2009-10-01

    This Colloquium reviews statistical models for money, wealth, and income distributions developed in the econophysics literature since the late 1990s. By analogy with the Boltzmann-Gibbs distribution of energy in physics, it is shown that the probability distribution of money is exponential for certain classes of models with interacting economic agents. Alternative scenarios are also reviewed. Data analysis of the empirical distributions of wealth and income reveals a two-class distribution. The majority of the population belongs to the lower class, characterized by the exponential (“thermal”) distribution, whereas a small fraction of the population in the upper class is characterized by the power-law (“superthermal”) distribution. The lower part is very stable, stationary in time, whereas the upper part is highly dynamical and out of equilibrium.

  7. Evidence for a scale-limited low-frequency earthquake source process

    NASA Astrophysics Data System (ADS)

    Chestler, S. R.; Creager, K. C.

    2017-04-01

    We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10^10 to 1.9 × 10^12 N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit increase in Mw), we find that for large LFEs the b value is 6, whereas for small LFEs it is <1. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10^11 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.

  8. Characterization of continuously distributed cortical water diffusion rates with a stretched-exponential model.

    PubMed

    Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S

    2003-10-01

    Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm(2) in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
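
    As an illustration of the signal model only (synthetic data, not the authors' acquisition or fitting code), the sketch below fits S(b) = S0·exp(−(b·DDC)^α) to a simulated decay curve over the b-value range quoted in the abstract.

```python
# Sketch: fit the stretched-exponential diffusion model S(b) = S0 * exp(-(b*DDC)**alpha).
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(b, s0, ddc, alpha):
    return s0 * np.exp(-(b * ddc) ** alpha)

b_values = np.array([0, 500, 1500, 2500, 3500, 4500, 5500, 6500], dtype=float)   # s/mm^2
rng = np.random.default_rng(5)
signal = stretched_exp(b_values, 1.0, 0.8e-3, 0.7) + rng.normal(0, 0.005, b_values.size)

popt, _ = curve_fit(stretched_exp, b_values, signal, p0=[1.0, 1e-3, 0.9],
                    bounds=([0.0, 1e-5, 0.1], [2.0, 1e-2, 1.5]))
s0_hat, ddc_hat, alpha_hat = popt
print(f"DDC = {ddc_hat:.2e} mm^2/s, heterogeneity index alpha = {alpha_hat:.2f}")
```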

  9. The dynamics of photoinduced defect creation in amorphous chalcogenides: The origin of the stretched exponential function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freitas, R. J.; Shimakawa, K.; Department of Electrical and Electronic Engineering, Gifu University, Gifu 501-1193

    The article discusses the dynamics of photoinduced defect creation (PDC) in amorphous chalcogenides, which is described by the stretched exponential function (SEF), while the well-known photodarkening (PD) and photoinduced volume expansion (PVE) are governed only by the exponential function. It is shown that the exponential distribution of the thermal activation barrier produces the SEF in PDC, suggesting that thermal energy, as well as photon energy, is incorporated in PDC mechanisms. The differences in dynamics among the three major photoinduced effects (PD, PVE, and PDC) in amorphous chalcogenides are now well understood.

  10. Global exponential stability of bidirectional associative memory neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Song, Qiankun; Cao, Jinde

    2007-05-01

    A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional, and employing the homeomorphism theory, M-matrix theory and an elementary inequality (with a ≥ 0, b_k ≥ 0, q_k > 0, and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential converging velocity index is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.

  11. Income inequality in Romania: The exponential-Pareto distribution

    NASA Astrophysics Data System (ADS)

    Oancea, Bogdan; Andrei, Tudorel; Pirjol, Dan

    2017-03-01

    We present a study of the distribution of the gross personal income and income inequality in Romania, using individual tax income data, and both non-parametric and parametric methods. Comparing with official results based on household budget surveys (the Family Budgets Survey and the EU-SILC data), we find that the latter underestimate the income share of the high income region, and the overall income inequality. A parametric study shows that the income distribution is well described by an exponential distribution in the low and middle incomes region, and by a Pareto distribution in the high income region with Pareto coefficient α = 2.53. We note an anomaly in the distribution in the low incomes region (∼9,250 RON), and present a model which explains it in terms of partial income reporting.

  12. On the q-type distributions

    NASA Astrophysics Data System (ADS)

    Nadarajah, Saralees; Kotz, Samuel

    2007-04-01

    Various q-type distributions have appeared in the physics literature in recent years, see e.g. L.C. Malacarne, R.S. Mendes, E.K. Lenzi, q-exponential distribution in urban agglomeration, Phys. Rev. E 65 (2002) 017106; S.M.D. Queiros, On a possible dynamical scenario leading to a generalised Gamma distribution, xxx.lanl.gov-physics/0411111; U.M.S. Costa, V.N. Freire, L.C. Malacarne, R.S. Mendes, S. Picoli Jr., E.A. de Vasconcelos, E.F. da Silva Jr., An improved description of the dielectric breakdown in oxides based on a generalized Weibull distribution, Physica A 361 (2006) 215; S. Picoli Jr., R.S. Mendes, L.C. Malacarne, q-exponential, Weibull, and q-Weibull distributions: an empirical analysis, Physica A 324 (2003) 678-688; A.M.C. de Souza, C. Tsallis, Student's t- and r-distributions: unified derivation from an entropic variational principle, Physica A 236 (1997) 52-57. It is pointed out in the paper that many of these are the same as, or particular cases of, distributions that have long been known in the statistics literature. Several of these statistical distributions are discussed and references provided. We feel that this paper could be of assistance for modeling problems of the type considered in the works cited above and others.

  13. Intercomparison of Large-Eddy Simulations of Arctic Mixed-Phase Clouds: Importance of Ice Size Distribution Assumptions

    NASA Technical Reports Server (NTRS)

    Ovchinnikov, Mikhail; Ackerman, Andrew S.; Avramov, Alexander; Cheng, Anning; Fan, Jiwen; Fridlind, Ann M.; Ghan, Steven; Harrington, Jerry; Hoose, Corinna; Korolev, Alexei

    2014-01-01

    Large-eddy simulations of mixed-phase Arctic clouds by 11 different models are analyzed with the goal of improving understanding and model representation of processes controlling the evolution of these clouds. In a case based on observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC), it is found that ice number concentration, Ni, exerts significant influence on the cloud structure. Increasing Ni leads to a substantial reduction in liquid water path (LWP), in agreement with earlier studies. In contrast to previous intercomparison studies, all models here use the same ice particle properties (i.e., mass-size, mass-fall speed, and mass-capacitance relationships) and a common radiation parameterization. The constrained setup exposes the importance of ice particle size distributions (PSDs) in influencing cloud evolution. A clear separation in LWP and IWP predicted by models with bin and bulk microphysical treatments is documented and attributed primarily to the assumed shape of ice PSD used in bulk schemes. Compared to the bin schemes that explicitly predict the PSD, schemes assuming exponential ice PSD underestimate ice growth by vapor deposition and overestimate mass-weighted fall speed leading to an underprediction of IWP by a factor of two in the considered case. Sensitivity tests indicate LWP and IWP are much closer to the bin model simulations when a modified shape factor which is similar to that predicted by bin model simulation is used in bulk scheme. These results demonstrate the importance of representation of ice PSD in determining the partitioning of liquid and ice and the longevity of mixed-phase clouds.

  14. An Oil-Stream Photomicrographic Aeroscope for Obtaining Cloud Liquid-Water Content and Droplet Size Distributions in Flight

    NASA Technical Reports Server (NTRS)

    Hacker, Paul T.

    1956-01-01

    An airborne cloud aeroscope by which droplet size, size distribution, and liquid-water content of clouds can be determined has been developed and tested in flight and in wind tunnels with water sprays. In this aeroscope the cloud droplets are continuously captured in a stream of oil, which is then photographed by a photomicrographic camera. The droplet size and size distribution can be determined directly from the photographs. With the droplet size distribution known, the liquid-water content of the cloud can be computed from the geometry of the aeroscope, the airspeed, and the oil-flow rate. The aeroscope has the following features: Data are obtained semi-automatically, and permanent data are taken in the form of photographs. A single picture usually contains a sufficient number of droplets to establish the droplet size distribution. Cloud droplets are continuously captured in the stream of oil, but pictures are taken at intervals. The aeroscope can be operated in icing and non-icing conditions. Because of mixing of oil in the instrument, the droplet-distribution patterns and liquid-water content values from a single picture are exponentially weighted average values over a path length of about 3/4 mile at 150 miles per hour. The liquid-water contents, volume-median diameters, and distribution patterns obtained on test flights and in the Lewis icing tunnel are similar to previously published data.

  15. Reproducibility of the exponential rise technique of CO(2) rebreathing for measuring P(v)CO(2) and C(v)CO(2) to non-invasively estimate cardiac output during incremental, maximal treadmill exercise.

    PubMed

    Cade, W Todd; Nabar, Sharmila R; Keyser, Randall E

    2004-05-01

    The purpose of this study was to determine the reproducibility of the indirect Fick method for the measurement of mixed venous carbon dioxide partial pressure (P(v)CO(2)) and venous carbon dioxide content (C(v)CO(2)) for estimation of cardiac output (Q(c)), using the exponential rise method of carbon dioxide rebreathing, during non-steady-state treadmill exercise. Ten healthy participants (eight female and two male) performed three incremental, maximal exercise treadmill tests to exhaustion within 1 week. Non-invasive Q(c) measurements were evaluated at rest, during each 3-min stage, and at peak exercise, across three identical treadmill tests, using the exponential rise technique for measuring mixed venous PCO(2) and CCO(2) and estimating the venous-arterial carbon dioxide content difference (C(v-a)CO(2)). Measurements were divided into measured or estimated variables [heart rate (HR), oxygen consumption (VO(2)), volume of expired carbon dioxide (VCO(2)), end-tidal carbon dioxide (P(ET)CO(2)), arterial carbon dioxide partial pressure (P(a)CO(2)), venous carbon dioxide partial pressure (P(v)CO(2)), and C(v-a)CO(2)] and cardiorespiratory variables derived from the measured variables [Q(c), stroke volume (V(s)), and arteriovenous oxygen difference (C(a-v)O(2))]. In general, the derived cardiorespiratory variables demonstrated acceptable (R=0.61) to high (R>0.80) reproducibility, especially at higher intensities and peak exercise. Measured variables, excluding P(a)CO(2) and C(v-a)CO(2), also demonstrated acceptable (R=0.6 to 0.79) to high reliability. The current study demonstrated acceptable to high reproducibility of the exponential rise indirect Fick method in measurement of mixed venous PCO(2) and CCO(2) for estimation of Q(c) during incremental treadmill exercise testing, especially at high-intensity and peak exercise.

  16. The Supermarket Model with Bounded Queue Lengths in Equilibrium

    NASA Astrophysics Data System (ADS)

    Brightwell, Graham; Fairthorne, Marianne; Luczak, Malwina J.

    2018-04-01

    In the supermarket model, there are n queues, each with a single server. Customers arrive in a Poisson process with arrival rate λn, where λ = λ(n) ∈ (0, 1). Upon arrival, a customer selects d = d(n) servers uniformly at random, and joins the queue of a least-loaded server amongst those chosen. Service times are independent exponentially distributed random variables with mean 1. In this paper, we analyse the behaviour of the supermarket model in the regime where λ(n) = 1 - n^(-α) and d(n) = ⌊n^β⌋, where α and β are fixed numbers in (0, 1]. For suitable pairs (α, β), our results imply that, in equilibrium, with probability tending to 1 as n → ∞, the proportion of queues with length equal to k = ⌈α/β⌉ is at least 1 - 2n^(-α+(k-1)β), and there are no longer queues. We further show that the process is rapidly mixing when started in a good state, and give bounds on the speed of mixing for more general initial conditions.
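
    A small simulation sketch of this model (with modest, fixed n and d rather than the paper's n^β scaling regime) is given below: Poisson arrivals at rate λn, unit-mean exponential service, and each arrival joining the shortest of d sampled queues. Because service is memoryless, a simple event loop that picks arrivals and departures in proportion to their rates suffices.

```python
# Sketch: supermarket model -- each arrival joins the shortest of d randomly sampled queues.
import numpy as np

def supermarket(n=200, lam=0.9, d=2, n_events=400_000, seed=6):
    rng = np.random.default_rng(seed)
    q = np.zeros(n, dtype=int)                        # current queue lengths
    for _ in range(n_events):
        busy = np.flatnonzero(q)
        total_rate = lam * n + busy.size              # arrival rate lam*n, departure rate = #busy servers
        if rng.random() < lam * n / total_rate:       # arrival: join a least-loaded sampled queue
            choices = rng.choice(n, size=d, replace=False)
            q[choices[np.argmin(q[choices])]] += 1
        elif busy.size:                               # departure from a uniformly chosen busy server
            q[busy[rng.integers(busy.size)]] -= 1
    return q

q = supermarket()
print({k: round(float(np.mean(q >= k)), 3) for k in range(1, 5)})   # tail of the queue-length distribution
```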

  17. Role of the locus coeruleus in the emergence of power law wake bouts in a model of the brainstem sleep-wake system through early infancy.

    PubMed

    Patel, Mainak; Rangan, Aaditya

    2017-08-07

    Infant rats randomly cycle between the sleeping and waking states, which are tightly correlated with the activity of mutually inhibitory brainstem sleep and wake populations. Bouts of sleep and wakefulness are random; from P2-P10, sleep and wake bout lengths are exponentially distributed with increasing means, while during P10-P21, the sleep bout distribution remains exponential while the distribution of wake bouts gradually transforms to power law. The locus coeruleus (LC), via an undeciphered interaction with sleep and wake populations, has been shown experimentally to be responsible for the exponential to power law transition. Concurrently during P10-P21, the LC undergoes striking physiological changes: the LC exhibits strong global 0.3 Hz oscillations up to P10, but the oscillation frequency gradually rises and synchrony diminishes from P10-P21, with oscillations and synchrony vanishing at P21 and beyond. In this work, we construct a biologically plausible Wilson-Cowan-style model consisting of the LC along with sleep and wake populations. We show that external noise and strong reciprocal inhibition can lead to switching between sleep and wake populations and exponentially distributed sleep and wake bout durations as during P2-P10, with the parameters of inhibition between the sleep and wake populations controlling mean bout lengths. Furthermore, we show that the changing physiology of the LC from P10-P21, coupled with reciprocal excitation between the LC and the wake population, can explain the shift from exponential to power law of the wake bout distribution. To our knowledge, this is the first study that proposes a plausible biological mechanism, which incorporates the known changing physiology of the LC, for tying the developing sleep-wake circuit and its interaction with the LC to the transformation of sleep and wake bout dynamics from P2-P21. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Statistical mechanics of money and income

    NASA Astrophysics Data System (ADS)

    Dragulescu, Adrian; Yakovenko, Victor

    2001-03-01

    Money: In a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money will assume the exponential Boltzmann-Gibbs form characterized by an effective temperature. We demonstrate how the Boltzmann-Gibbs distribution emerges in computer simulations of economic models. We discuss thermal machines, the role of debt, and models with broken time-reversal symmetry for which the Boltzmann-Gibbs law does not hold. Reference: A. Dragulescu and V. M. Yakovenko, "Statistical mechanics of money", Eur. Phys. J. B 17, 723-729 (2000), [cond-mat/0001432]. Income: Using tax and census data, we demonstrate that the distribution of individual income in the United States is exponential. Our calculated Lorenz curve without fitting parameters and Gini coefficient 1/2 agree well with the data. We derive the distribution function of income for families with two earners and show that it also agrees well with the data. The family data for the period 1947-1994 fit the Lorenz curve and Gini coefficient 3/8=0.375 calculated for two-earners families. Reference: A. Dragulescu and V. M. Yakovenko, "Evidence for the exponential distribution of income in the USA", cond-mat/0008305.

  19. Heavy tailed bacterial motor switching statistics define macroscopic transport properties during upstream contamination by E. coli

    NASA Astrophysics Data System (ADS)

    Figueroa-Morales, N.; Rivera, A.; Altshuler, E.; Darnige, T.; Douarche, C.; Soto, R.; Lindner, A.; Clément, E.

    The motility of E. coli bacteria is described as a run and tumble process. Changes of direction correspond to a switch in the flagellar motor rotation. The run time distribution is usually described as an exponential decay with a characteristic time close to 1 s. Remarkably, it has been demonstrated that the generic response for the distribution of run times is not exponential, but a heavy-tailed power law decay, which is at odds with the classical motility description. We investigate the consequences of the motor statistics for macroscopic bacterial transport. During upstream contamination processes in very confined channels, we have identified very long contamination tongues. Using a stochastic model considering bacterial dwelling times on the surfaces related to the run times, we are able to reproduce qualitatively and quantitatively the evolution of the contamination profiles when considering the power law run time distribution. However, the model fails to reproduce the qualitative dynamics when the classical exponential run and tumble distribution is considered. Moreover, we have corroborated the existence of a power law run time distribution by means of 3D Lagrangian tracking. We then argue that the macroscopic transport of bacteria is essentially determined by the motor rotation statistics.

  20. Redshift data and statistical inference

    NASA Technical Reports Server (NTRS)

    Newman, William I.; Haynes, Martha P.; Terzian, Yervant

    1994-01-01

    Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals or bins than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimate of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.

  1. Voter model with non-Poissonian interevent intervals

    NASA Astrophysics Data System (ADS)

    Takaguchi, Taro; Masuda, Naoki

    2011-09-01

    Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.

  2. Non-Poissonian Distribution of Tsunami Waiting Times

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2007-12-01

    Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from an exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog γ=0.63-0.67 [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo in comparison to the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution that is the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicate that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source. Triggered seismogenic tsunamis do not necessarily originate from the same fault zone, however. For example, subduction-thrust and outer-rise earthquake pairs are evident, such as the November 2006 and January 2007 Kuril Islands tsunamigenic pair. Because of variations in tsunami source parameters, such as water depth above the source, triggered tsunami events with short waiting times are not systematically smaller than the primary tsunami.
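
    The gamma-versus-exponential comparison in model (1) can be sketched as follows on synthetic waiting times (not the tsunami catalog); a fitted shape parameter below one signals the over-abundance of short waiting times described above.

```python
# Sketch: fit a gamma distribution to event waiting times; shape < 1 indicates clustering.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
waiting_times = rng.gamma(shape=0.65, scale=120.0, size=2000)   # synthetic waiting times, in days

shape, loc, scale = stats.gamma.fit(waiting_times, floc=0.0)
print(f"gamma shape = {shape:.2f} (a Poisson process would give shape = 1)")

# Log-likelihood gain of the gamma model over the exponential special case.
ll_gamma = np.sum(stats.gamma.logpdf(waiting_times, shape, loc=0.0, scale=scale))
ll_expon = np.sum(stats.expon.logpdf(waiting_times, scale=waiting_times.mean()))
print("log-likelihood gain over exponential:", round(float(ll_gamma - ll_expon), 1))
```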

  3. Multilevel mixed effects parametric survival models using adaptive Gauss-Hermite quadrature with application to recurrent events and individual participant data meta-analysis.

    PubMed

    Crowther, Michael J; Look, Maxime P; Riley, Richard D

    2014-09-28

    Multilevel mixed effects survival models are used in the analysis of clustered survival data, such as repeated events, multicenter clinical trials, and individual participant data (IPD) meta-analyses, to investigate heterogeneity in baseline risk and covariate effects. In this paper, we extend parametric frailty models including the exponential, Weibull and Gompertz proportional hazards (PH) models and the log logistic, log normal, and generalized gamma accelerated failure time models to allow any number of normally distributed random effects. Furthermore, we extend the flexible parametric survival model of Royston and Parmar, modeled on the log-cumulative hazard scale using restricted cubic splines, to include random effects while also allowing for non-PH (time-dependent effects). Maximum likelihood is used to estimate the models utilizing adaptive or nonadaptive Gauss-Hermite quadrature. The methods are evaluated through simulation studies representing clinically plausible scenarios of a multicenter trial and IPD meta-analysis, showing good performance of the estimation method. The flexible parametric mixed effects model is illustrated using a dataset of patients with kidney disease and repeated times to infection and an IPD meta-analysis of prognostic factor studies in patients with breast cancer. User-friendly Stata software is provided to implement the methods. Copyright © 2014 John Wiley & Sons, Ltd.

  4. Periodic bidirectional associative memory neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Chen, Anping; Huang, Lihong; Liu, Zhigang; Cao, Jinde

    2006-05-01

    Some sufficient conditions are obtained for the existence and global exponential stability of a periodic solution to the general bidirectional associative memory (BAM) neural networks with distributed delays by using the continuation theorem of Mawhin's coincidence degree theory, the Lyapunov functional method, and Young's inequality technique. These results are helpful for designing a globally exponentially stable and periodic oscillatory BAM neural network, and the conditions can be easily verified and applied in practice. An example is also given to illustrate our results.

  5. Global exponential stability of positive periodic solution of the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays.

    PubMed

    Zhao, Kaihong

    2018-12-01

    In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of positive periodic solution is proved by employing the fixed point theorem on cones. By constructing appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.

  6. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a super-position of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
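
    The characteristic-function-based fitting idea can be illustrated with a toy example: below, the parameters of a gamma distribution (used here only as a convenient stand-in for the filtered-Poisson model of the abstract) are recovered from synthetic data by least-squares matching of the empirical characteristic function. The frequency grid, sample size, and optimizer are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.5, scale=1.2, size=5000)   # synthetic "intermittent" amplitudes

t = np.linspace(0.05, 3.0, 40)                   # frequencies at which the ECF is evaluated
ecf = np.array([np.mean(np.exp(1j * ti * x)) for ti in t])

def model_cf(t, k, theta):
    # characteristic function of a gamma(shape k, scale theta) distribution
    return (1 - 1j * theta * t) ** (-k)

def loss(p):
    k, theta = p
    if k <= 0 or theta <= 0:
        return np.inf
    return np.sum(np.abs(ecf - model_cf(t, k, theta)) ** 2)

res = minimize(loss, x0=[1.0, 1.0], method="Nelder-Mead")
print("estimated shape, scale:", res.x)
```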

  7. A mathematical model for evolution and SETI.

    PubMed

    Maccone, Claudio

    2011-12-01

    Darwinian evolution theory may be regarded as a part of SETI theory in that the factor f(l) in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we firstly provide a statistical generalization of the Drake equation where the factor f(l) is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of Statistics, stating that the product of a number of independent random variables, whose probability densities are unknown and independent of each other, approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions (b-lognormals) constrained between the time axis and the exponential growth curve. Finally, since each b-lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian Evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.
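
    The statistical argument above — that a product of many independent positive random factors tends to a lognormal — can be checked numerically. The sketch below takes products of 200 independent uniform factors (the factor distribution and counts are arbitrary) and inspects the near-normality of the log-product.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# product of many independent, positive random variables
n_factors, n_samples = 200, 10000
factors = rng.uniform(0.5, 1.5, size=(n_samples, n_factors))
products = factors.prod(axis=1)

# the log of the product is a sum of i.i.d. terms, so the CLT makes it nearly normal
log_products = np.log(products)
print("skewness, excess kurtosis of log-product:",
      stats.skew(log_products), stats.kurtosis(log_products))
print("lognormal fit (shape, loc, scale):", stats.lognorm.fit(products, floc=0))
```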

  8. Modelling Evolution and SETI Mathematically

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2012-05-01

    Darwinian evolution theory may be regarded as a part of SETI theory in that the factor fl in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we firstly provide a statistical generalization of the Drake equation where the factor fl is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of Statistics, stating that the product of a number of independent random variables, whose probability densities are unknown and independent of each other, approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions constrained between the time axis and the exponential growth curve. Finally, since each lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian Evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.

  9. A Mathematical Model for Evolution and SETI

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-12-01

    Darwinian evolution theory may be regarded as a part of SETI theory in that the factor fl in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we firstly provide a statistical generalization of the Drake equation where the factor fl is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of Statistics, stating that the product of a number of independent random variables, whose probability densities are unknown and independent of each other, approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions (b-lognormals) constrained between the time axis and the exponential growth curve. Finally, since each b-lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian Evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.

  10. Accounting for inherent variability of growth in microbial risk assessment.

    PubMed

    Marks, H M; Coleman, M E

    2005-04-15

    Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks that occur when there are small numbers of cells, stochastic models of growth are needed that would capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial, small number of cells assumed to be transforming from a stationary to an exponential phase. Two basic microbial sets of assumptions are considered: serial, where it is assumed that cells transform through a lag phase before entering the exponential phase of growth; and parallel, where it is assumed that lag and exponential phases develop in parallel. The model is based on first determining the distribution of the time when growth commences, and then modelling the conditional distribution of the number of cells. For the latter distribution, it is found that a Weibull distribution provides a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.

  11. Graphical analysis for gel morphology II. New mathematical approach for stretched exponential function with β>1

    NASA Astrophysics Data System (ADS)

    Hashimoto, Chihiro; Panizza, Pascal; Rouch, Jacques; Ushiki, Hideharu

    2005-10-01

    A new analytical concept is applied to the kinetics of the shrinking process of poly(N-isopropylacrylamide) (PNIPA) gels. When PNIPA gels are put into hot water above the critical temperature, two-step shrinking is observed and the secondary shrinking of the gels is fitted well by a stretched exponential function. The exponent β characterizing the stretched exponential is always higher than one, although there are few analytical concepts for the stretched exponential function with β>1. As a new interpretation of this function, we propose a superposition of step (Heaviside) functions, and a new distribution function of the characteristic time is deduced.
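
    A fit of the kind described above can be reproduced in a few lines; the sketch below generates a synthetic shrinking curve with β > 1 and recovers the parameters with scipy's curve_fit (the amplitude, time constant, noise level, and β value are arbitrary choices, not the PNIPA data).

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, A, tau, beta):
    # shrinking curve modelled as A * exp(-(t/tau)**beta); beta > 1 is the "compressed" case
    return A * np.exp(-(t / tau) ** beta)

rng = np.random.default_rng(3)
t = np.linspace(0.1, 10, 80)
y = stretched_exp(t, 1.0, 4.0, 1.8) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(stretched_exp, t, y, p0=[1.0, 1.0, 1.0])
print("A, tau, beta =", popt)
```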

  12. Statistics of Advective Stretching in Three-dimensional Incompressible Flows

    NASA Astrophysics Data System (ADS)

    Subramanian, Natarajan; Kellogg, Louise H.; Turcotte, Donald L.

    2009-09-01

    We present a method to quantify kinematic stretching in incompressible, unsteady, isoviscous, three-dimensional flows. We extend the method of Kellogg and Turcotte (J. Geophys. Res. 95:421-432, 1990) to compute the axial stretching/thinning experienced by infinitesimal ellipsoidal strain markers in arbitrary three-dimensional incompressible flows and discuss the differences between our method and the computation of Finite Time Lyapunov Exponent (FTLE). We use the cellular flow model developed in Solomon and Mezic (Nature 425:376-380, 2003) to study the statistics of stretching in a three-dimensional unsteady cellular flow. We find that the probability density function of the logarithm of normalised cumulative stretching (log S) for a globally chaotic flow, with spatially heterogeneous stretching behavior, is not Gaussian and that the coefficient of variation of the Gaussian distribution does not decrease with time as t^{-1/2}. However, it is observed that stretching becomes exponential, log S ~ t, and the probability density function of log S becomes Gaussian when the time dependence of the flow and its three-dimensionality are increased to make the stretching behaviour of the flow more spatially uniform. We term these behaviors weak and strong chaotic mixing respectively. We find that for strongly chaotic mixing, the coefficient of variation of the Gaussian distribution decreases with time as t^{-1/2}. This behavior is consistent with a random multiplicative stretching process.

  13. Turbulent Mixing in Exponential Transverse Jets

    DTIC Science & Technology

    1990-09-30

    parameter. The flame length of the jets is a direct measurement of the molecular scale mixing rate. ACCOMPLISHMENTS From observations of the trajectory... and cross-sectional size of the vortices, as well as the flame length, our measurements reveal the following: i) Under acceleration, the roll up and... flame lengths are a weak maximum when the acceleration parameter α is about unity. For large α, flame lengths slowly decline with increasing α, in

  14. Min and Max Exponential Extreme Interval Values and Statistics

    ERIC Educational Resources Information Center

    Jance, Marsha; Thomopoulos, Nick

    2009-01-01

    The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g_a is defined as a…

  15. A review of the matrix-exponential formalism in radiative transfer

    NASA Astrophysics Data System (ADS)

    Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian

    2017-07-01

    This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
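
    For optically thin layers, the abstract mentions Taylor and Padé approximations to the matrix exponential. The sketch below compares a truncated Taylor series against scipy's expm (which uses a scaling-and-squaring Padé algorithm) on a small random matrix standing in for a thin-layer operator; the matrix values and truncation order are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
A = -0.1 * rng.random((4, 4))           # stand-in for a thin-layer transfer operator

def expm_taylor(A, order=12):
    # truncated Taylor series exp(A) ~ sum_k A^k / k!
    E, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, order + 1):
        term = term @ A / k
        E += term
    return E

E_pade = expm(A)                        # scipy's scaling-and-squaring Pade scheme
E_taylor = expm_taylor(A)
print("max |Pade - Taylor| =", np.abs(E_pade - E_taylor).max())
```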

  16. Heterogeneous characters modeling of instant message services users’ online behavior

    PubMed Central

    Fang, Yajun; Horn, Berthold

    2018-01-01

    Research on temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, finance, etc. Existing studies show that the time intervals between two consecutive events present different non-Poisson characteristics, such as power-law, Pareto, bimodal power-law, exponential, and piecewise power-law distributions. With the occurrence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to the QQ and WeChat services, the two most popular instant messaging services in China, and present a new finding that when the value of the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law, indicating the heterogeneous character of IM services users' online behavior on different time scales. We infer that the heterogeneous character is related to the communication mechanism of IM and the habits of users. Then we develop a combined model of an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service is different in two cities, which is correlated with the popularity of the services. Our research is useful for the application of information diffusion, prediction of economic development of cities, and so on. PMID:29734327

  17. Heterogeneous characters modeling of instant message services users' online behavior.

    PubMed

    Cui, Hongyan; Li, Ruibing; Fang, Yajun; Horn, Berthold; Welsch, Roy E

    2018-01-01

    Research on temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, finance, etc. Existing studies show that the time intervals between two consecutive events present different non-Poisson characteristics, such as power-law, Pareto, bimodal power-law, exponential, and piecewise power-law distributions. With the occurrence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to the QQ and WeChat services, the two most popular instant messaging services in China, and present a new finding that when the value of the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law, indicating the heterogeneous character of IM services users' online behavior on different time scales. We infer that the heterogeneous character is related to the communication mechanism of IM and the habits of users. Then we develop a combined model of an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service is different in two cities, which is correlated with the popularity of the services. Our research is useful for the application of information diffusion, prediction of economic development of cities, and so on.

  18. Impact of oxide thickness on the density distribution of near-interface traps in 4H-SiC MOS capacitors

    NASA Astrophysics Data System (ADS)

    Zhang, Xufang; Okamoto, Dai; Hatakeyama, Tetsuo; Sometani, Mitsuru; Harada, Shinsuke; Iwamuro, Noriyuki; Yano, Hiroshi

    2018-06-01

    The impact of oxide thickness on the density distribution of near-interface traps (NITs) in SiO2/4H-SiC structure was investigated. We used the distributed circuit model that had successfully explained the frequency-dependent characteristics of both capacitance and conductance under strong accumulation conditions for SiO2/4H-SiC MOS capacitors with thick oxides by assuming an exponentially decaying distribution of NITs. In this work, it was found that the exponentially decaying distribution is the most plausible approximation of the true NIT distribution because it successfully explained the frequency dependences of capacitance and conductance under strong accumulation conditions for various oxide thicknesses. The thickness dependence of the NIT density distribution was also characterized. It was found that the NIT density increases with increasing oxide thickness, and a possible physical reason was discussed.

  19. Scaling behavior of sleep-wake transitions across species

    NASA Astrophysics Data System (ADS)

    Lo, Chung-Chuan; Chou, Thomas; Ivanov, Plamen Ch.; Penzel, Thomas; Mochizuki, Takatoshi; Scammell, Thomas; Saper, Clifford B.; Stanley, H. Eugene

    2003-03-01

    Uncovering the mechanisms controlling sleep is a fascinating scientific challenge. It can be viewed as transitions of states of a very complex system, the brain. We study the time dynamics of short awakenings during sleep for three species: humans, rats and mice. We find, for all three species, that wake durations follow a power-law distribution, and sleep durations follow exponential distributions. Surprisingly, all three species have the same power-law exponent for the distribution of wake durations, but the exponential time scale of the distributions of sleep durations varies across species. We suggest that the dynamics of short awakenings are related to species-independent fluctuations of the system, while the dynamics of sleep is related to system-dependent mechanisms which change with species.

  20. Stretching of passive tracers and implications for mantle mixing

    NASA Astrophysics Data System (ADS)

    Conjeepuram, N.; Kellogg, L. H.

    2007-12-01

    Mid-ocean ridge basalts (MORB) and ocean island basalts (OIB) have fundamentally different geochemical signatures. Understanding this difference requires a fundamental knowledge of the mixing processes that led to their formation. Quantitative methods used to assess mixing include examining the distribution of passive tracers, attaching time-evolution information to simulate decay of radioactive isotopes, and, for chaotic flows, calculating the Lyapunov exponent, which characterizes whether two nearby particles diverge at an exponential rate. Although effective, these methods are indirect measures of the two fundamental processes associated with mixing, namely stretching and folding. Building on work done by Kellogg and Turcotte, we present a method to compute the stretching and thinning of a passive, ellipsoidal tracer in three orthogonal directions in isoviscous, incompressible three-dimensional flows. We also compute the Lyapunov exponents associated with the given system based on the quantitative measures of stretching and thinning. We test our method with two analytical and three numerical flow fields which exhibit Lagrangian turbulence. The ABC and STF analytical flows are three- and two-parameter classes of flows, respectively, and have been well studied for fast dynamo action. Since they generate both periodic and chaotic particle paths depending either on the starting point or on the choice of the parameters, they provide a good foundation to understand mixing. The numerical flow fields are similar to the geometries used by Ferrachat and Ricard (1998) and emulate a ridge-transform system. We also compute the stable and unstable manifolds associated with the numerical flow fields to illustrate the directions of rapid and slow mixing. We find that stretching in chaotic flow fields is significantly more effective than in regular or periodic flow fields. Consequently, chaotic mixing is far more efficient than regular mixing. We also find that in the numerical flow field, there is a fundamental topological difference in the regions exhibiting slow or regular mixing for different model geometries.

  1. Linear prediction and single-channel recording.

    PubMed

    Carter, A A; Oswald, R E

    1995-08-01

    The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
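
    The linear-prediction idea can be sketched with a minimal Prony-type example on a noise-free, uniformly sampled sum of two decaying exponentials. In the paper's LP/SVD approach the number of components is read off the singular-value spectrum of a data (Hankel) matrix; here it is simply assumed, and the sampling grid and rates are arbitrary.

```python
import numpy as np

# synthetic dwell-time density sampled on a uniform grid: sum of two decaying exponentials
dt = 0.05
t = np.arange(0, 10, dt)
y = 0.7 * np.exp(-1.5 * t) + 0.3 * np.exp(-0.2 * t)

p = 2  # assumed number of exponential components (in practice chosen from the SVD spectrum)
# linear-prediction (Prony) step: y[n] = -a1*y[n-1] - ... - ap*y[n-p]
A = np.column_stack([y[p - 1 - k:len(y) - 1 - k] for k in range(p)])
b = y[p:]
a = np.linalg.lstsq(A, -b, rcond=None)[0]

# roots of the prediction polynomial give z_k = exp(-lambda_k * dt)
z = np.roots(np.concatenate(([1.0], a)))
rates = -np.log(z.real) / dt
print("recovered decay rates:", np.sort(rates))
```

    With noisy data one would instead form the prediction equations from the dominant singular subspace of the Hankel matrix, which is where the singular value decomposition enters.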

  2. Global synchronization of memristive neural networks subject to random disturbances via distributed pinning control.

    PubMed

    Guo, Zhenyuan; Yang, Shaofu; Wang, Jun

    2016-12-01

    This paper presents theoretical results on global exponential synchronization of multiple memristive neural networks in the presence of external noise by means of two types of distributed pinning control. The multiple memristive neural networks are coupled in a general structure via a nonlinear function, which consists of a linear diffusive term and a discontinuous sign term. A pinning impulsive control law is introduced in the coupled system to synchronize all neural networks. Sufficient conditions are derived for ascertaining global exponential synchronization in mean square. In addition, a pinning adaptive control law is developed to achieve global exponential synchronization in mean square. Both pinning control laws utilize only partial state information received from the neighborhood of the controlled neural network. Simulation results are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Design and implementation of the NaI(Tl)/CsI(Na) detectors output signal generator

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Cong-Zhan; Zhao, Jian-Ling; Zhang, Fei; Zhang, Yi-Fei; Li, Zheng-Wei; Zhang, Shuo; Li, Xu-Fang; Lu, Xue-Feng; Xu, Zhen-Ling; Lu, Fang-Jun

    2014-02-01

    We designed and implemented a signal generator that can simulate the output of the NaI(Tl)/CsI(Na) detectors' pre-amplifier onboard the Hard X-ray Modulation Telescope (HXMT). Using FPGA (Field Programmable Gate Array) development with the VHDL language and adding a random component, we produced a double-exponential random pulse signal generator. The statistical distribution of the signal amplitude is programmable. The time intervals between adjacent signals statistically follow a negative exponential distribution.
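
    In software, the essential behaviour of such a generator — pulses with a programmable amplitude distribution arriving at exponentially distributed intervals, each with a double-exponential shape — can be mocked up as below. All time constants, rates, and the Gaussian amplitude choice are illustrative assumptions, not the HXMT electronics parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

def double_exp_pulse(t, amp, tau_rise=0.05e-6, tau_fall=1.0e-6):
    # assumed pulse shape: difference of two exponentials, as from a charge-sensitive preamplifier
    return amp * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise)) * (t >= 0)

fs = 50e6                                   # sampling rate, Hz (assumed)
n = 200000
trace = np.zeros(n)
t_axis = np.arange(n) / fs

# event times: adjacent intervals are exponentially distributed (Poissonian arrivals)
rate = 2e4                                  # mean event rate, Hz (assumed)
arrivals = np.cumsum(rng.exponential(1.0 / rate, size=200))
for t0 in arrivals[arrivals < t_axis[-1]]:
    amp = rng.normal(1.0, 0.2)              # programmable amplitude distribution (assumed Gaussian here)
    idx = t_axis >= t0
    trace[idx] += double_exp_pulse(t_axis[idx] - t0, amp)

print("simulated samples:", trace[:5])
```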

  4. Bonus-Malus System with the Claim Frequency Distribution is Geometric and the Severity Distribution is Truncated Weibull

    NASA Astrophysics Data System (ADS)

    Santi, D. N.; Purnaba, I. G. P.; Mangku, I. W.

    2016-01-01

    A Bonus-Malus system is said to be optimal if it is financially balanced for insurance companies and fair for policyholders. Previous research on the Bonus-Malus system concerned the determination of the risk premium applied to all of the severity guaranteed by the insurance company. In fact, not all of the severity proposed by the policyholder may be covered by the insurance company. When the insurance company sets a maximum bound on the severity incurred, it is necessary to modify the model of the severity distribution into a bounded severity distribution. In this paper, an optimal Bonus-Malus system whose claim frequency component has a geometric distribution and whose severity component has a truncated Weibull distribution is discussed. The number of claims is considered to follow a Poisson distribution, and the expected number λ is exponentially distributed, so the number of claims has a geometric distribution. The severity with a given parameter θ is considered to have a truncated exponential distribution, and θ is modelled using the Levy distribution, so the severity has a truncated Weibull distribution.
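
    The claim-frequency construction — a Poisson count whose mean λ is itself exponentially distributed, yielding a geometric marginal — is easy to verify by simulation, as in the following sketch (the mixing mean θ is an arbitrary choice).

```python
import numpy as np

rng = np.random.default_rng(6)

# claim frequency: Poisson(lambda) with lambda itself exponentially distributed
theta = 0.8                                  # mean of the exponential mixing distribution (assumed)
lam = rng.exponential(theta, size=200000)
claims = rng.poisson(lam)

# the resulting marginal is geometric with success probability p = 1/(1 + theta)
p = 1.0 / (1.0 + theta)
emp = np.bincount(claims, minlength=6)[:6] / claims.size
geo = [(1 - p) ** k * p for k in range(6)]
print("empirical:", np.round(emp, 4))
print("geometric:", np.round(geo, 4))
```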

  5. Exponential model for option prices: Application to the Brazilian market

    NASA Astrophysics Data System (ADS)

    Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.

    2016-03-01

    In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.

  6. Particle size distribution properties in mixed-phase monsoon clouds from in situ measurements during CAIPEEX

    NASA Astrophysics Data System (ADS)

    Patade, Sachin; Prabha, T. V.; Axisa, D.; Gayatri, K.; Heymsfield, A.

    2015-10-01

    A comprehensive analysis of particle size distributions measured in situ with airborne instrumentation during the Cloud Aerosol Interaction and Precipitation Enhancement Experiment (CAIPEEX) is presented. In situ airborne observations in the developing stage of continental convective clouds during premonsoon (PRE), transition, and monsoon (MON) period at temperatures from 25 to -22°C are used in the study. The PRE clouds have narrow drop size and particle size distributions compared to monsoon clouds and showed less development of size spectra with decrease in temperature. Overall, the PRE cases had much lower values of particle number concentrations and ice water content compared to MON cases, indicating large differences in the ice initiation and growth processes between these cloud regimes. This study provided compelling evidence that in addition to dynamics, aerosol and moisture are important for modulating ice microphysical processes in PRE and MON clouds through impacts on cloud drop size distribution. Significant differences are observed in the relationship of the slope and intercept parameters of the fitted particle size distributions (PSDs) with temperature in PRE and MON clouds. The intercept values are higher in MON clouds than PRE for exponential distribution which can be attributed to higher cloud particle number concentrations and ice water content in MON clouds. The PRE clouds tend to have larger values of dispersion of gamma size distributions than MON clouds, signifying narrower spectra. The relationships between PSDs parameters are presented and compared with previous observations.

  7. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of the exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data and background corrections derived from this model may lead to wrong estimation. We propose a more flexible modeling based on a gamma distributed signal and a normal distributed background noise and develop the associated background correction, implemented in the R-package NormalGamma. Our model proves to be markedly more accurate to model Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models are compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling makes way for future investigations, in particular to examine the characteristics of pre-processing strategies.
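
    For reference, the normal-exponential background correction that the paper takes as its starting point reduces to a closed-form conditional expectation. The sketch below applies that textbook formula to simulated intensities; the parameter values are assumed known here, whereas in practice they are estimated from the negative-control features.

```python
import numpy as np
from scipy.stats import norm

def normexp_correct(x, mu, sigma, alpha):
    # conditional expectation E[S | X = x] for X = S + B,
    # with S ~ Exponential(mean alpha) and B ~ Normal(mu, sigma^2)
    mu_sx = x - mu - sigma**2 / alpha
    return mu_sx + sigma * norm.pdf(mu_sx / sigma) / norm.cdf(mu_sx / sigma)

rng = np.random.default_rng(7)
signal = rng.exponential(100.0, size=10000)
noise = rng.normal(50.0, 10.0, size=10000)
observed = signal + noise

corrected = normexp_correct(observed, mu=50.0, sigma=10.0, alpha=100.0)
print("mean true signal:", signal.mean(), " mean corrected:", corrected.mean())
```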

  8. Surface-water radon-222 distribution along the west-central Florida shelf

    USGS Publications Warehouse

    Smith, C.G.; Robbins, L.L.

    2012-01-01

    In February 2009 and August 2009, the spatial distribution of radon-222 in surface water was mapped along the west-central Florida shelf as collaboration between the Response of Florida Shelf Ecosystems to Climate Change project and a U.S. Geological Survey Mendenhall Research Fellowship project. This report summarizes the surface distribution of radon-222 from two cruises and evaluates potential physical controls on radon-222 fluxes. Radon-222 is an inert gas produced overwhelmingly in sediment and has a short half-life of 3.8 days; activities in surface water ranged between 30 and 170 becquerels per cubic meter. Overall, radon-222 activities were enriched in nearshore surface waters relative to offshore waters. Dilution in offshore waters is expected to be the cause of the low offshore activities. While thermal stratification of the water column during the August survey may explain higher radon-222 activities relative to the February survey, radon-222 activity and integrated surface-water inventories decreased exponentially from the shoreline during both cruises. By estimating radon-222 evasion by wind from nearby buoy data and accounting for internal production from dissolved radium-226, its radiogenic long-lived parent, a simple one-dimensional model was implemented to determine the role that offshore mixing, benthic influx, and decay have on the distribution of excess radon-222 inventories along the west Florida shelf. For multiple statistically based boundary condition scenarios (first quartile, median, third quartile, and maximum radon-222 inshore of 5 kilometers), the cross-shelf mixing rates and average nearshore submarine groundwater discharge (SGD) rates varied from 100.38 to 10-3.4 square kilometers per day and 0.00 to 1.70 centimeters per day, respectively. This dataset and modeling provide the first attempt to assess cross-shelf mixing and SGD on such a large spatial scale. Such estimates help scale up SGD rates that are often made at 1- to 10-meter resolution to a coarser but more regionally applicable scale of 1- to 10-kilometer resolution. More stringent analyses and model evaluation are required, but results and analyses presented in this report provide the foundation for conducting a more rigorous statistical assessment.

  9. Analysis of risk factors in severity of rural truck crashes.

    DOT National Transportation Integrated Search

    2016-04-01

    Trucks are a vital part of the logistics system in North Dakota. Recent energy developments have generated exponential growth in the demand for truck services. With increased density of trucks in the traffic mix, it is reasonable to expect some i...

  10. Compiling probabilistic, bio-inspired circuits on a field programmable analog array

    PubMed Central

    Marr, Bo; Hasler, Jennifer

    2014-01-01

    A field programmable analog array (FPAA) is presented as an energy and computational efficiency engine: a mixed-mode processor for which functions can be compiled at significantly less energy cost using probabilistic computing circuits. More specifically, it will be shown that the core computation of any dynamical system can be computed on the FPAA at significantly less energy per operation than a digital implementation. A stochastic system that is dynamically controllable via voltage-controlled amplifier and comparator thresholds is implemented, which computes Bernoulli random variables. From Bernoulli variables it is shown that exponentially distributed random variables, and random variables of an arbitrary distribution, can be computed. The Gillespie algorithm is simulated to show the utility of this system by calculating the trajectory of a biological system computed stochastically with this probabilistic hardware, where over a 127X performance improvement over current software approaches is shown. The relevance of this approach is extended to any dynamical system. The initial circuits and ideas for this work were generated at the 2008 Telluride Neuromorphic Workshop. PMID:24847199
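
    The chain from Bernoulli variables to exponentially distributed ones can be mimicked in software: counting Bernoulli(p) trials until the first success gives a geometric variable, which for small p approximates an exponential after rescaling. The sketch below uses numpy's geometric sampler as a stand-in for counting the underlying Bernoulli stream; the rate and p are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)

def exponential_from_bernoulli(rate, p=0.01, n=100000):
    # trials until the first success of a Bernoulli(p) stream -> geometric;
    # scaled by the trial duration p/rate this approximates Exponential(rate)
    trials = rng.geometric(p, size=n)
    return trials * (p / rate)

samples = exponential_from_bernoulli(rate=2.0)
print("mean (target 0.5):", samples.mean())
print("std  (target 0.5):", samples.std())
```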

  11. Analysis of two production inventory systems with buffer, retrials and different production rates

    NASA Astrophysics Data System (ADS)

    Jose, K. P.; Nair, Salini S.

    2017-09-01

    This paper considers the comparison of two (s, S) production inventory systems with retrials of unsatisfied customers. The time for producing and adding each item to the inventory is exponentially distributed with rate β. However, a production rate αβ higher than β is used at the beginning of production. The higher production rate will reduce customers' loss when the inventory level approaches zero. The demand from customers follows a Poisson process. Service times are exponentially distributed. Upon arrival, the customers enter a buffer of finite capacity. An arriving customer who finds the buffer full moves to an orbit. They can retry from there, and inter-retrial times are exponentially distributed. The two models differ in the capacity of the buffer. The aim is to find the minimum value of the total cost by varying different parameters and to compare the efficiency of the models. The optimum value of α corresponding to the minimum total cost is an important evaluation. The matrix analytic method is used to find an algorithmic solution to the problem. We also provide several numerical and graphical illustrations.

  12. A non-Boltzmannian behavior of the energy distribution for quasi-stationary regimes of the Fermi–Pasta–Ulam β system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leo, Mario, E-mail: mario.leo@le.infn.it; Leo, Rosario Antonio, E-mail: leora@le.infn.it; Tempesta, Piergiulio, E-mail: p.tempesta@fis.ucm.es

    2013-06-15

    In a recent paper [M. Leo, R.A. Leo, P. Tempesta, C. Tsallis, Phys. Rev. E 85 (2012) 031149], the existence of quasi-stationary states for the Fermi–Pasta–Ulam β system has been shown numerically, by analyzing the stability properties of the N/4-mode exact nonlinear solution. Here we study the energy distribution of the modes N/4, N/3 and N/2, when they are unstable, as a function of N and of the initial excitation energy. We observe that the classical Boltzmann weight is replaced by a different weight, expressed by a q-exponential function. -- Highlights: ► New statistical properties of the Fermi–Pasta–Ulam beta system are found. ► The energy distributions of specific observables are studied: a deviation from the standard Boltzmann behavior is found. ► A q-exponential weight should be used instead. ► The classical exponential weight is restored in the large particle limit (mesoscopic nature of the phenomenon).

  13. Statistical analyses support power law distributions found in neuronal avalanches.

    PubMed

    Klaus, Andreas; Yu, Shan; Plenz, Dietmar

    2011-01-01

    The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
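
    Two of the ingredients used above — the maximum-likelihood power-law exponent and a log-likelihood ratio against an exponential alternative — are compact enough to sketch directly for continuous data (synthetic here, with an arbitrary exponent and x_min; the full analysis in the paper additionally handles KS-based fitting, finite-size scaling, and further alternative distributions).

```python
import numpy as np

rng = np.random.default_rng(9)

# synthetic "avalanche sizes": continuous power law with exponent 2.5 above x_min = 1
n, alpha_true, xmin = 5000, 2.5, 1.0
x = xmin * (1 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))   # inverse-transform sampling

# maximum-likelihood exponent for continuous power-law data
alpha_hat = 1.0 + n / np.sum(np.log(x / xmin))

# log-likelihood ratio against an exponential fitted to the same (shifted) data
lam_hat = 1.0 / np.mean(x - xmin)
ll_pl = n * np.log(alpha_hat - 1.0) - n * np.log(xmin) - alpha_hat * np.sum(np.log(x / xmin))
ll_exp = n * np.log(lam_hat) - lam_hat * np.sum(x - xmin)
print("alpha_hat =", alpha_hat, " LLR (power law vs exponential) =", ll_pl - ll_exp)
```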

  14. Spatial analysis of soil organic carbon in Zhifanggou catchment of the Loess Plateau.

    PubMed

    Li, Mingming; Zhang, Xingchang; Zhen, Qing; Han, Fengpeng

    2013-01-01

    Soil organic carbon (SOC) reflects soil quality and plays a critical role in soil protection, food safety, and global climate changes. This study involved grid sampling at different depths (6 layers) between 0 and 100 cm in a catchment. A total of 1282 soil samples were collected from 215 plots over 8.27 km². A combination of conventional analytical methods and geostatistical methods were used to analyze the data for spatial variability and soil carbon content patterns. The mean SOC content in the 1282 samples from the study field was 3.08 g·kg⁻¹. The SOC content of each layer decreased with increasing soil depth by a power function relationship. The SOC content of each layer was moderately variable and followed a lognormal distribution. The semi-variograms of the SOC contents of the six different layers were fit with the following models: exponential, spherical, exponential, Gaussian, exponential, and exponential, respectively. A moderate spatial dependence was observed in the 0-10 and 10-20 cm layers, which resulted from stochastic and structural factors. The spatial distribution of SOC content in the four layers between 20 and 100 cm was mainly restricted by structural factors. Correlations within each layer were observed between 234 and 562 m. A classical Kriging interpolation was used to directly visualize the spatial distribution of SOC in the catchment. The variability in spatial distribution was related to topography, land use type, and human activity. Finally, the SOC content decreased with depth. Our results suggest that the ordinary Kriging interpolation can directly reveal the spatial distribution of SOC and that the sampling distance used in this study is sufficient for interpolation or plotting. More research is needed, however, to clarify the spatial variability at larger scales and better understand the factors controlling the spatial variability of soil carbon in the Loess Plateau region.

  15. Non-extensive quantum statistics with particle-hole symmetry

    NASA Astrophysics Data System (ADS)

    Biró, T. S.; Shen, K. M.; Zhang, B. W.

    2015-06-01

    Based on the Tsallis entropy (1988) and the corresponding deformed exponential function, generalized distribution functions for bosons and fermions have been in use for a while (Teweldeberhan et al., 2003; Silva et al., 2010). However, aiming at a non-extensive quantum statistics, further requirements arise from the symmetric handling of particles and holes (excitations above and below the Fermi level). Naive replacements of the exponential function or "cut and paste" solutions fail to satisfy this symmetry and to be smooth at the Fermi level at the same time. We solve this problem by a general ansatz dividing the deformed exponential into odd and even terms and demonstrate how earlier suggestions, like the κ- and q-exponential, behave in this respect.

  16. Wealth distribution, Pareto law, and stretched exponential decay of money: Computer simulations analysis of agent-based models

    NASA Astrophysics Data System (ADS)

    Aydiner, Ekrem; Cherstvy, Andrey G.; Metzler, Ralf

    2018-01-01

    We study by Monte Carlo simulations a kinetic exchange trading model for both fixed and distributed saving propensities of the agents and rationalize the person and wealth distributions. We show that the newly introduced wealth distribution - that may be more amenable in certain situations - features a different power-law exponent, particularly for distributed saving propensities of the agents. For open agent-based systems, we analyze the person and wealth distributions and find that the presence of trap agents alters their amplitude, leaving however the scaling exponents nearly unaffected. For an open system, we show that the total wealth - for different trap agent densities and saving propensities of the agents - decreases in time according to the classical Kohlrausch-Williams-Watts stretched exponential law. Interestingly, this decay does not depend on the trap agent density, but rather on saving propensities. The system relaxation for fixed and distributed saving schemes are found to be different.
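
    A minimal version of a kinetic exchange model with distributed saving propensities reads as follows (agent number, step count, and the uniform saving-propensity choice are arbitrary; the trap agents and open-system ingredients of the paper are omitted).

```python
import numpy as np

rng = np.random.default_rng(10)

N, steps = 1000, 200000
wealth = np.ones(N)                           # everyone starts with unit wealth
lam = rng.random(N)                           # distributed saving propensities, lambda_i in [0, 1)

for _ in range(steps):
    i, j = rng.integers(0, N, size=2)
    if i == j:
        continue
    eps = rng.random()
    pool = (1 - lam[i]) * wealth[i] + (1 - lam[j]) * wealth[j]   # non-saved wealth put on the table
    wealth[i] = lam[i] * wealth[i] + eps * pool
    wealth[j] = lam[j] * wealth[j] + (1 - eps) * pool

# the tail of the wealth distribution develops a Pareto-like power law
print("top 1% wealth share:", np.sort(wealth)[-N // 100:].sum() / wealth.sum())
```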

  17. Crack problem in superconducting cylinder with exponential distribution of critical-current density

    NASA Astrophysics Data System (ADS)

    Zhao, Yufeng; Xu, Chi; Shi, Liang

    2018-04-01

    The general problem of a center crack in a long cylindrical superconductor with inhomogeneous critical-current distribution is studied based on the extended Bean model for zero-field cooling (ZFC) and field cooling (FC) magnetization processes, in which the inhomogeneity parameter η is introduced for characterizing the critical-current density distribution in an inhomogeneous superconductor. The effect of the parameter η on both the magnetic field distribution and the variations of the normalized stress intensity factors is also obtained based on the plane strain approach and J-integral theory. The numerical results indicate that the exponential distribution of critical-current density leads to a larger trapped field inside the inhomogeneous superconductor and causes the center of the cylinder to fracture more easily. In addition, it is worth pointing out that the nonlinear field distribution is unique to the Bean model, as seen by comparing the curve shapes of the magnetization loop with homogeneous and inhomogeneous critical-current distributions.

  18. How extreme are extremes?

    NASA Astrophysics Data System (ADS)

    Cucchi, Marco; Petitta, Marcello; Calmanti, Sandro

    2016-04-01

    High temperatures have an impact on the energy balance of any living organism and on the operational capabilities of critical infrastructures. Heat-wave indicators have been mainly developed with the aim of capturing the potential impacts on specific sectors (agriculture, health, wildfires, transport, power generation and distribution). However, the ability to capture the occurrence of extreme temperature events is an essential property of a multi-hazard extreme climate indicator. The aim of this study is to develop a standardized heat-wave indicator that can be combined with other indices in order to describe multiple hazards in a single indicator. The proposed approach can be used to obtain a quantified indicator of the strength of a given extreme. As a matter of fact, extremes are usually distributed according to exponential or exponential-exponential functions, and it is difficult to quickly assess how strong an extreme event was by considering only its magnitude. The proposed approach simplifies the quantitative and qualitative communication of extreme magnitudes.

  19. Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function

    NASA Astrophysics Data System (ADS)

    Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.

    2017-06-01

    This paper elaborates on a study of cancer patients after receiving treatment, with censored data, using Bayesian estimation under the Linex loss function for a survival model which is assumed to be exponentially distributed. Taking a gamma distribution as the prior, the likelihood function produces a gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL by using the Linex approximation. After obtaining λ̂_BL, the estimators of the hazard function ĥ_BL and the survival function Ŝ_BL can be found. Finally, we compare the results of Maximum Likelihood Estimation (MLE) and the Linex approximation to find the best method for this observation by finding the smaller MSE. The results show that the MSEs of the hazard and survival functions under MLE are 2.91728E-07 and 0.000309004, while under Bayesian Linex they are 2.8727E-07 and 0.000304131, respectively. It is concluded that the Bayesian Linex estimator is better than the MLE.
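
    For an uncensored exponential sample with a gamma prior on the rate, the Bayes estimator under the Linex loss has a simple closed form, which the following sketch evaluates next to the MLE. This is a simplified stand-in for the paper's censored survival setting; the prior, loss parameter, and data are illustrative assumptions.

```python
import numpy as np

# Exponential likelihood with a Gamma(a0, b0) prior on the rate lambda (values assumed)
rng = np.random.default_rng(11)
data = rng.exponential(scale=1.0 / 0.5, size=30)     # synthetic sample, true rate 0.5
a0, b0 = 2.0, 1.0                                     # prior shape and rate
a_post, b_post = a0 + data.size, b0 + data.sum()      # gamma posterior (shape, rate)

# Linex loss L(d, lambda) = exp(c (d - lambda)) - c (d - lambda) - 1
# Bayes estimator: d* = -(1/c) * ln E[exp(-c lambda) | data]; for a gamma posterior
# E[exp(-c lambda)] = (b_post / (b_post + c))**a_post, giving the closed form below.
c = 1.0
lambda_linex = (a_post / c) * np.log(1.0 + c / b_post)
lambda_mle = data.size / data.sum()
print("Linex Bayes estimate:", lambda_linex, " MLE:", lambda_mle)
```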

  20. Effect of the state of internal boundaries on granite fracture nature under quasi-static compression

    NASA Astrophysics Data System (ADS)

    Damaskinskaya, E. E.; Panteleev, I. A.; Kadomtsev, A. G.; Naimark, O. B.

    2017-05-01

    Based on an analysis of the spatial distribution of hypocenters of acoustic emission signal sources and an analysis of the energy distributions of acoustic emission signals, the effect of the liquid phase and of a weak electric field on the spatiotemporal nature of granite sample fracture is studied. Experiments on uniaxial compression of granite samples of natural moisture showed that the damage accumulation process is two-stage: dispersed accumulation of damage is followed by localized accumulation of damage in the region of the formed macrofracture nucleus. In the energy distributions of acoustic emission signals, this transition is accompanied by a change in the distribution shape from exponential to power-law. Granite water saturation qualitatively changes the nature of damage accumulation: the process is delocalized until macrofracture, with an exponential energy distribution of acoustic emission signals. Exposure to a weak electric field results in a selective change in the nature of damage accumulation in the sample volume.

  1. Turbulence hierarchy in a random fibre laser

    PubMed Central

    González, Iván R. Roa; Lima, Bismarck C.; Pincheira, Pablo I. R.; Brum, Arthur A.; Macêdo, Antônio M. S.; Vasconcelos, Giovani L.; de S. Menezes, Leonardo; Raposo, Ernesto P.; Gomes, Anderson S. L.; Kashyap, Raman

    2017-01-01

    Turbulence is a challenging feature common to a wide range of complex phenomena. Random fibre lasers are a special class of lasers in which the feedback arises from multiple scattering in a one-dimensional disordered cavity-less medium. Here we report on statistical signatures of turbulence in the distribution of intensity fluctuations in a continuous-wave-pumped erbium-based random fibre laser, with random Bragg grating scatterers. The distribution of intensity fluctuations in an extensive data set exhibits three qualitatively distinct behaviours: a Gaussian regime below threshold, a mixture of two distributions with exponentially decaying tails near the threshold and a mixture of distributions with stretched-exponential tails above threshold. All distributions are well described by a hierarchical stochastic model that incorporates Kolmogorov’s theory of turbulence, which includes energy cascade and the intermittence phenomenon. Our findings have implications for explaining the remarkably challenging turbulent behaviour in photonics, using a random fibre laser as the experimental platform. PMID:28561064

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    Rank distributions are collections of positive sizes ordered either increasingly or decreasingly. Many decreasing rank distributions, formed by the collective collaboration of human actions, follow an inverse power-law relation between ranks and sizes. This remarkable empirical fact is termed Zipf’s law, and one of its quintessential manifestations is the demography of human settlements — which exhibits a harmonic relation between ranks and sizes. In this paper we present a comprehensive statistical-physics analysis of rank distributions, establish that power-law and exponential rank distributions stand out as optimal in various entropy-based senses, and unveil the special role of the harmonic relation between ranks and sizes. Our results extend the contemporary entropy-maximization view of Zipf’s law to a broader, panoramic, Gibbsian perspective of increasing and decreasing power-law and exponential rank distributions — of which Zipf’s law is one out of four pillars.

  3. Global exponential stability analysis on impulsive BAM neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Li, Yao-Tang; Yang, Chang-Bo

    2006-12-01

    Using the M-matrix and topological degree tools, sufficient conditions are obtained for the existence, uniqueness and global exponential stability of the equilibrium point of bidirectional associative memory (BAM) neural networks with distributed delays and subjected to impulsive state displacements at fixed instants of time, by constructing a suitable Lyapunov functional. The results remove the usual assumptions of boundedness, monotonicity, and differentiability of the activation functions. It is shown that in some cases the stability criteria can be easily checked. Finally, an illustrative example is given to show the effectiveness of the presented criteria.

  4. Existence and global exponential stability of periodic solution to BAM neural networks with periodic coefficients and continuously distributed delays

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Chen, A.; Zhou, Y.

    2005-08-01

    By using the continuation theorem of coincidence degree theory and Liapunov function, we obtain some sufficient criteria to ensure the existence and global exponential stability of periodic solution to the bidirectional associative memory (BAM) neural networks with periodic coefficients and continuously distributed delays. These results improve and generalize the works of papers [J. Cao, L. Wang, Phys. Rev. E 61 (2000) 1825] and [Z. Liu, A. Chen, J. Cao, L. Huang, IEEE Trans. Circuits Systems I 50 (2003) 1162]. An example is given to illustrate that the criteria are feasible.

  5. On the minimum of independent geometrically distributed random variables

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David

    1994-01-01

    The expectations E(X_1), E(Z_1), and E(Y_1) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X_1)/E(Y_1) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in their minima.
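
    The tie-counting identity quoted above is easy to check by simulation: with matched means, the ratio of the expected minima equals the expected number of geometric variables tied at the minimum. The sketch below uses arbitrary n and p (the shifted-geometric part of the abstract is not covered here).

```python
import numpy as np

rng = np.random.default_rng(12)
n, reps, p = 5, 200000, 0.2
mean = 1.0 / p                                   # matching expectations for both families

geo = rng.geometric(p, size=(reps, n))           # geometric on support 1, 2, ...
expo = rng.exponential(mean, size=(reps, n))

print("E[min] geometric  :", geo.min(axis=1).mean())
print("E[min] exponential:", expo.min(axis=1).mean())
# expected number of ties at the minimum for the geometric sample
ties = (geo == geo.min(axis=1, keepdims=True)).sum(axis=1).mean()
print("E[ties at min]    :", ties,
      "  ratio of the two minima:", geo.min(axis=1).mean() / expo.min(axis=1).mean())
```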

  6. Estimation for coefficient of variation of an extension of the exponential distribution under type-II censoring scheme

    NASA Astrophysics Data System (ADS)

    Bakoban, Rana A.

    2017-08-01

    The coefficient of variation [CV] has several applications in applied statistics, so in this paper we adopt Bayesian and non-Bayesian approaches to the estimation of the CV under type-II censored data from the extension of the exponential distribution [EED]. Point and interval estimates of the CV are obtained by maximum likelihood and parametric bootstrap techniques, and a Bayesian approach based on the MCMC method is also presented. A real data set is presented and analyzed, and the results are used to assess the theoretical findings.

  7. Time-delayed behaviors of transient four-wave mixing signal intensity in inverted semiconductor with carrier-injection pumping

    NASA Astrophysics Data System (ADS)

    Hu, Zhenhua; Gao, Shen; Xiang, Bowen

    2016-01-01

    An analytical expression for transient four-wave mixing (TFWM) in an inverted semiconductor with carrier-injection pumping was derived from the density matrix equation and the complex stochastic stationary statistical method of incoherent light. Numerical analysis showed that the TFWM decay tends towards the limits of extreme homogeneous and inhomogeneous broadening in atoms: for low and for high carrier-density injection the signal obeys a usual exponential decay with a decay time inversely proportional to the one-half power of the net carrier density, while for moderate carrier-density injection it obeys an unusual exponential decay with a decay time inversely proportional to the one-third power of the net carrier density. The results can be applied to studying ultrafast carrier dephasing in inverted semiconductors such as semiconductor laser amplifiers and semiconductor optical amplifiers.

  8. Investigation of stickiness influence in the anomalous transport and diffusion for a non-dissipative Fermi-Ulam model

    NASA Astrophysics Data System (ADS)

    Livorati, André L. P.; Palmero, Matheus S.; Díaz-I, Gabriel; Dettmann, Carl P.; Caldas, Iberê L.; Leonel, Edson D.

    2018-02-01

    We study the dynamics of an ensemble of non-interacting particles constrained by two infinitely heavy walls, one of which moves periodically in time while the other is fixed. The system presents mixed dynamics, where the accessible region in which a particle can diffuse chaotically is bordered by an invariant spanning curve. Statistical analysis of the root mean square velocity, considering high- and low-velocity ensembles, shows that the dynamics reaches the same steady-state plateau at long times. A transport investigation of the dynamics via escape basins reveals that, depending on the initial velocity ensemble, the decay of the survival probability presents different shapes and bumps, in a mix of exponential, power-law and stretched-exponential decays. After an analysis of step-size averages, we found that the stable manifolds play the role of a preferential path for faster escape, being responsible for the bumps and different shapes of the survival probability.

  9. Numerical solution of mixed convection flow of an MHD Jeffery fluid over an exponentially stretching sheet in the presence of thermal radiation and chemical reaction

    NASA Astrophysics Data System (ADS)

    Shateyi, Stanford; Marewo, Gerald T.

    2018-05-01

    We numerically investigate a mixed convection model for a magnetohydrodynamic (MHD) Jeffery fluid flowing over an exponentially stretching sheet. The influence of thermal radiation and chemical reaction is also considered in this study. The governing non-linear coupled partial differential equations are reduced to a set of coupled non-linear ordinary differential equations by using similarity functions. This new set of ordinary differential equations is solved numerically using the Spectral Quasi-Linearization Method. A parametric study of the physical parameters involved is carried out and the results are displayed in tabular and graphical forms. It is observed that the velocity is enhanced with increasing values of the Deborah number, buoyancy and thermal radiation parameters. Furthermore, the temperature and species concentration are decreasing functions of the Deborah number. The skin friction coefficient increases with increasing values of the magnetic parameter and relaxation time. Heat and mass transfer rates increase with increasing values of the Deborah number and buoyancy parameters.

  10. Large deviations and mixing for dissipative PDEs with unbounded random kicks

    NASA Astrophysics Data System (ADS)

    Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.

    2018-02-01

    We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer’s criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroups, and a coupling argument. Combined, these tools constitute a new approach to the LDP for infinite-dimensional processes without the strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.

  11. A multigrid solver for the semiconductor equations

    NASA Technical Reports Server (NTRS)

    Bachmann, Bernhard

    1993-01-01

    We present a multigrid solver for the exponential fitting method. The solver is applied to the current continuity equations of semiconductor device simulation in two dimensions. The exponential fitting method is based on a mixed finite element discretization using the lowest-order Raviart-Thomas triangular element. This discretization method yields a good approximation of front layers and guarantees current conservation. The corresponding stiffness matrix is an M-matrix. 'Standard' multigrid solvers, however, cannot be applied to the resulting system, as this is dominated by an unsymmetric part, which is due to the presence of strong convection in part of the domain. To overcome this difficulty, we explore the connection between Raviart-Thomas mixed methods and the nonconforming Crouzeix-Raviart finite element discretization. In this way we can construct nonstandard prolongation and restriction operators using easily computable weighted L^2-projections based on suitable quadrature rules and the upwind effects of the discretization. The resulting multigrid algorithm shows very good results, even for real-world problems and for locally refined grids.

  12. Vacuum statistics and stability in axionic landscapes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masoumi, Ali; Vilenkin, Alexander, E-mail: ali@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu

    2016-03-01

    We investigate vacuum statistics and stability in random axionic landscapes. For this purpose we developed an algorithm for a quick evaluation of the tunneling action, which in most cases is accurate within 10%. We find that the stability of a vacuum is strongly correlated with its energy density, with the lifetime rapidly growing as the energy density is decreased. On the other hand, the probability P(B) for a vacuum to have a tunneling action B greater than a given value declines as a slow power law in B. This is in sharp contrast with studies of random quartic potentials, which found a fast exponential decline of P(B). Our results suggest that the total number of relatively stable vacua (say, with B>100) grows exponentially with the number of fields N and can get extremely large for N ≳ 100. The problem with this kind of model is that the stable vacua are concentrated near the absolute minimum of the potential, so the observed value of the cosmological constant cannot be explained without fine-tuning. To address this difficulty, we consider a modification of the model, where the axions acquire a quadratic mass term due to their mixing with 4-form fields. This results in a larger landscape with a much broader distribution of vacuum energies. The number of relatively stable vacua in such models can still be extremely large.

  13. Deuteron spin-lattice relaxation in the presence of an activation energy distribution: application to methanols in zeolite NaX.

    PubMed

    Stoch, G; Ylinen, E E; Birczynski, A; Lalowicz, Z T; Góra-Marek, K; Punkkinen, M

    2013-02-01

    A new method is introduced for analyzing deuteron spin-lattice relaxation in molecular systems with a broad distribution of activation energies and correlation times. In such samples the magnetization recovery is strongly non-exponential but can be fitted quite accurately by three exponentials. The considered system may consist of molecular groups with different mobility. For each group a Gaussian distribution of the activation energy is introduced. By assuming three parameters for every subsystem (the mean activation energy E0, the distribution width σ, and the pre-exponential factor τ0 of the Arrhenius equation defining the correlation time), the relaxation rate is calculated for every part of the distribution. Experiment-based limiting values allow the grouping of the rates into three classes. For each class the relaxation rate and weight are calculated and compared with experiment. The parameters E0, σ and τ0 are determined iteratively by repeating the whole cycle many times. The temperature dependence of the deuteron relaxation was observed in three samples containing CD3OH (200% and 100% loading) and CD3OD (200%) in NaX zeolite and analyzed by the described method between 20 K and 170 K. The obtained parameters, equal for all three samples, characterize the methyl and hydroxyl mobilities of the methanol molecules at two different locations. Copyright © 2012 Elsevier Inc. All rights reserved.
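
    The rate averaging described above can be sketched numerically. The snippet below is a schematic illustration, not the authors' code: it assumes a BPP-like spectral density and invented values for E0, σ, τ0 and the Larmor frequency, and shows how each slice of the Gaussian energy distribution contributes its own exponential recovery rate.

        import numpy as np

        kB = 8.617e-5              # Boltzmann constant, eV/K
        omega = 2 * np.pi * 46e6   # assumed deuteron Larmor frequency (rad/s), illustrative only

        def rate_distribution(T, E0=0.10, sigma=0.02, tau0=1e-13, nE=2001):
            """Relaxation rates (arb. units) and Gaussian weights over activation energies."""
            E = np.linspace(E0 - 5 * sigma, E0 + 5 * sigma, nE)
            w = np.exp(-0.5 * ((E - E0) / sigma) ** 2)
            w /= w.sum()
            tau = tau0 * np.exp(E / (kB * T))   # Arrhenius correlation times
            # BPP-like spectral-density shape (unnormalized), assumed for illustration.
            R = tau / (1 + (omega * tau) ** 2) + 4 * tau / (1 + (2 * omega * tau) ** 2)
            return R, w

        # The recovery is a weighted superposition of exp(-R*t); grouping the rates into a few
        # classes approximates the three-exponential fit used above.
        R, w = rate_distribution(T=80.0)
        print("weighted mean rate (arb. units):", np.sum(w * R))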

  14. Race, gender and the econophysics of income distribution in the USA

    NASA Astrophysics Data System (ADS)

    Shaikh, Anwar; Papanikolaou, Nikolaos; Wiener, Noe

    2014-12-01

    The econophysics “two-class” theory of Yakovenko and his co-authors shows that the distribution of labor incomes is roughly exponential. This paper extends this result to US subgroups categorized by gender and race. It is well known that males have higher average incomes than females, and whites have higher average incomes than African-Americans. It is also evident that social policies can affect these income gaps. Our surprising finding is that, nonetheless, the intra-group distributions of pre-tax labor incomes are remarkably similar and remain close to exponential. This suggests that income inequality can be usefully addressed by taxation policies, and that overall income inequality can be modified by also shifting the balance between labor and property incomes.

  15. Diversity of individual mobility patterns and emergence of aggregated scaling laws

    PubMed Central

    Yan, Xiao-Yong; Han, Xiao-Pu; Wang, Bing-Hong; Zhou, Tao

    2013-01-01

    Uncovering human mobility patterns is of fundamental importance to the understanding of epidemic spreading, urban transportation and other socioeconomic dynamics embodying spatiality and human travel. Using the direct travel diaries of volunteers, we show the absence of scaling properties in the displacement distribution at the individual level, while the aggregated displacement distribution follows a power law with an exponential cutoff. Given the constraint on total travelling cost, this aggregated scaling law can be analytically predicted by the mixture nature of human travel under the principle of maximum entropy. A direct corollary of this theory is that the displacement distribution of a single mode of transportation should follow an exponential law, which is also supported by evidence in known data. We thus conclude that the travelling cost shapes the displacement distribution at the aggregated level. PMID:24045416
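
    A minimal sketch of the mixture-of-modes argument: each hypothetical transport mode below has an exponential displacement distribution, and the aggregate over modes has a much heavier tail than a single exponential with the same overall mean. The characteristic lengths and mode weights are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical transport modes, each with an exponential displacement distribution.
        means = np.array([1.0, 5.0, 25.0, 125.0])   # characteristic lengths (km), illustrative
        weights = np.array([0.4, 0.3, 0.2, 0.1])    # fraction of trips per mode

        mode = rng.choice(len(means), size=500_000, p=weights)
        displacement = rng.exponential(means[mode])

        # Compare the aggregate tail with a single exponential of the same mean.
        m = displacement.mean()
        for x in (5, 25, 125, 250):
            print(x, (displacement > x).mean(), np.exp(-x / m))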

  16. Autoregressive processes with exponentially decaying probability distribution functions: applications to daily variations of a stock market index.

    PubMed

    Porto, Markus; Roman, H Eduardo

    2002-04-01

    We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially, P(y) ~ exp(-α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. We find stretched exponential decay for 1 < q < 2 and stretched Gaussian behavior for 0 < q < 1. As an application, we consider the case q = 1 as our starting scheme for modeling the PDF of daily (logarithmic) variations of the Dow Jones stock market index. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretched exponent β = 2/3, in much better agreement with the empirical data.
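
    A minimal simulation sketch of the linear-variance process described above, with σ²(y) = a + b|y| and Gaussian innovations (parameter values are illustrative): the log tail probability of |y| should fall roughly linearly, consistent with the exponential decay P(y) ~ exp(-2|y|/b).

        import numpy as np

        rng = np.random.default_rng(2)
        a, b, n = 1.0, 1.0, 300_000

        y = np.empty(n)
        y[0] = 0.0
        eps = rng.standard_normal(n)
        for t in range(1, n):
            sigma2 = a + b * abs(y[t - 1])   # linear (rather than quadratic) ARCH variance
            y[t] = np.sqrt(sigma2) * eps[t]

        # Exponential-tail check: log P(|y| > x) should decrease roughly linearly in x.
        ys = np.abs(y[1000:])
        for x in (1, 2, 3, 4):
            print(x, np.log((ys > x).mean()))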

  17. Statistical modeling of storm-level Kp occurrences

    USGS Publications Warehouse

    Remick, K.J.; Love, J.J.

    2006-01-01

    We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events are statistically distributed with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short-duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fits are performed on wait-time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding these Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
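
    The wait-time test can be sketched as follows. The event times here are synthetic, drawn with the fitted 7.12-day mean, and stand in for declustered storm onset times; with real Kp data they would be the times when Kp first exceeds the threshold, with same-storm repeats removed.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Synthetic storm occurrence times (days), standing in for declustered Kp >= 5 events.
        event_times = np.cumsum(rng.exponential(scale=7.12, size=500))

        waits = np.diff(event_times)
        mean_wait = waits.mean()   # maximum-likelihood estimate of the exponential scale
        ks = stats.kstest(waits, "expon", args=(0, mean_wait))
        print(f"mean wait ~ {mean_wait:.2f} d, KS statistic {ks.statistic:.3f}, p-value {ks.pvalue:.3f}")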

  18. An efficient and accurate technique to compute the absorption, emission, and transmission of radiation by the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.

    1990-01-01

    CO2 comprises 95% of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content. Dust particles vary from less than 0.2 to greater than 3.0. CO2 is an active absorber and emitter at near-IR and IR wavelengths; the near-IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15 micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult. Aerosol radiative transfer requires a multiple-scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment used in most models, which causes inaccuracies, a treatment called the exponential-sum or k-distribution approximation was developed. The chief advantage of the exponential-sum approach is that the integration over k space of f(k) can be computed more quickly than the integration of k_ν over frequency. The exponential-sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential-sum approach to Martian conditions.
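
    The essence of the exponential-sum (k-distribution) approach is that the band-averaged transmission is written as a short weighted sum of exponentials in the absorber amount instead of a line-by-line frequency integral. The weights and absorption coefficients below are hypothetical placeholders, not the Martian CO2 values.

        import numpy as np

        # Hypothetical k-distribution for one spectral band: weights w_i (summing to 1)
        # and representative absorption coefficients k_i.
        w = np.array([0.40, 0.30, 0.20, 0.08, 0.02])
        k = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0])

        def band_transmission(u):
            """Band-averaged transmission for absorber amount u as an exponential sum."""
            return np.sum(w * np.exp(-k * np.asarray(u)[..., None]), axis=-1)

        print(band_transmission(np.array([0.01, 0.1, 1.0, 10.0])))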

  19. Analytically-derived sensitivities in one-dimensional models of solute transport in porous media

    USGS Publications Warehouse

    Knopman, D.S.

    1987-01-01

    Analytically-derived sensitivities are presented for parameters in one-dimensional models of solute transport in porous media. Sensitivities were derived by direct differentiation of closed-form solutions for each of the models, and by a time-integral method for two of the models. The models are based on the advection-dispersion equation and include adsorption and first-order chemical decay. Boundary conditions considered are: a constant step input of solute, a constant flux input of solute, and an exponentially decaying input of solute at the upstream boundary. A zero flux is assumed at the downstream boundary. Initial conditions include a constant and a spatially varying distribution of solute. One model simulates the mixing of solute in an observation well from individual layers in a multilayer aquifer system. Computer programs produce output files compatible with graphics software, in which sensitivities are plotted as a function of either time or space. (USGS)

  20. Scale Dependence of Spatiotemporal Intermittence of Rain

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Siddani, Ravi K.

    2011-01-01

    It is a common experience that rainfall is intermittent in space and time. This is reflected by the fact that the statistics of area- and/or time-averaged rain rate is described by a mixed distribution with a nonzero probability of having a sharp value zero. In this paper we have explored the dependence of the probability of zero rain on the averaging space and time scales in large multiyear data sets based on radar and rain gauge observations. A stretched exponential formula fits the observed scale dependence of the zero-rain probability. The proposed formula makes it apparent that the space-time support of the rain field is not quite a set of measure zero as is sometimes supposed. We also give an explanation of the observed behavior in terms of a simple probabilistic model based on the premise that the rainfall process has an intrinsic memory.

  1. Agitation, Mixing, and Transfers Induced by Bubbles

    NASA Astrophysics Data System (ADS)

    Risso, Frédéric

    2018-01-01

    Bubbly flows involve bubbles randomly distributed within a liquid. At large Reynolds number, they experience an agitation that can combine shear-induced turbulence (SIT), large-scale buoyancy-driven flows, and bubble-induced agitation (BIA). The properties of BIA strongly differ from those of SIT. They have been determined from studies of homogeneous swarms of rising bubbles. Regarding the bubbles, agitation is mainly caused by the wake-induced path instability. Regarding the liquid, two contributions must be distinguished. The first one corresponds to the anisotropic flow disturbances generated near the bubbles, principally in the vertical direction. The second one is the almost isotropic turbulence induced by the flow instability through a population of bubbles, which turns out to be the main cause of horizontal fluctuations. Both contributions generate a k^-3 spectral subrange and exponential probability density functions. The subsequent issue will be to understand how BIA interacts with SIT.

  2. Stochastic Model of Vesicular Sorting in Cellular Organelles

    NASA Astrophysics Data System (ADS)

    Vagne, Quentin; Sens, Pierre

    2018-02-01

    The proper sorting of membrane components by regulated exchange between cellular organelles is crucial to intracellular organization. This process relies on the budding and fusion of transport vesicles, and should be strongly influenced by stochastic fluctuations, considering the relatively small size of many organelles. We identify the perfect sorting of two membrane components initially mixed in a single compartment as a first passage process, and we show that the mean sorting time exhibits two distinct regimes as a function of the ratio of vesicle fusion to budding rates. Low ratio values lead to fast sorting but result in a broad size distribution of sorted compartments dominated by small entities. High ratio values result in two well-defined sorted compartments but sorting is exponentially slow. Our results suggest an optimal balance between vesicle budding and fusion for the rapid and efficient sorting of membrane components and highlight the importance of stochastic effects for the steady-state organization of intracellular compartments.

  3. Designing the optimal shutter sequences for the flutter shutter imaging method

    NASA Astrophysics Data System (ADS)

    Jelinek, Jan

    2010-04-01

    Acquiring iris or face images of moving subjects at larger distances using a flash to prevent motion blur quickly runs into eye safety concerns as the acquisition distance is increased. For that reason the flutter shutter method recently proposed by Raskar et al. has generated considerable interest in the biometrics community. The paper concerns the design of shutter sequences that produce the best images. The number of possible sequences grows exponentially in both the subject's motion velocity and the desired exposure value, with the majority of them being useless. Because the exact solution leads to an intractable mixed integer programming problem, we propose an approximate solution based on pre-screening the sequences according to the distribution of roots in their Fourier transform. A very fast algorithm utilizing Jury's criterion allows the testing to be done without explicitly computing the roots, making the approach practical for moderately long sequences.

  4. Dust observations by PFS on Mars Express

    NASA Astrophysics Data System (ADS)

    Zasova, L. V.; Formisano, V.; Moroz, V. I.; Grassi, D.; Ignatiev, N. I.; Blecka, M. I.; Maturilli, A.; Palomba, E.; Piccioni, G.; Pfs Team

    Dust is always present in the Martian atmosphere, with opacity changing from values below 0.1 (at 9 μm) up to several units during dust storms. From the thermal IR (LW channel of PFS) the dust opacity is retrieved in a self-consistent way together with the temperature profile from the same spectrum. A preliminary investigation along the orbit which passes through Hellas shows that the value of dust opacity anticorrelates with surface altitude. From -70° to +25° latitude the vertical dust distribution follows an exponential law with a scale of 12 km, which corresponds to the gaseous scale height near noon and indicates well-mixed conditions. The dust opacity corresponding to zero surface altitude is found to be 0.25 ± 0.05. More detailed investigations of all available data will be presented, including analysis of both short- and long-wavelength spectra of PFS.

  5. Balloon-borne observations of the development and vertical structure of the Antarctic ozone hole in 1986

    NASA Technical Reports Server (NTRS)

    Hofmann, D. J.; Harder, J. W.; Rolf, S. R.; Rosen, J. M.

    1987-01-01

    The vertical distribution of ozone measured at McMurdo Station, Antarctica using balloon-borne sensors on 33 occasions between August 25 and November 6, 1986 is described. These observations suggest a highly structured cavity confined to the 12-20 km altitude region. In the 17-19 km altitude range, the ozone volume mixing ratio declined from about 2 ppm at the end of August to about 0.5 ppm by mid-October. The average decay in this region can be described as exponential with a half-life of about 25 days. While total ozone, as obtained from profile integration, declined only about 35 percent, the integrated ozone between 14 and 18 km declined more than 70 percent. Vertical ozone profiles in the vortex revealed unusual structure, with major features from 1 to 5 km thick which had suffered ozone depletions as great as 90 percent.

  6. Wealth of the world's richest publicly traded companies per industry and per employee: Gamma, Log-normal and Pareto power-law as universal distributions?

    NASA Astrophysics Data System (ADS)

    Soriano-Hernández, P.; del Castillo-Mussot, M.; Campirán-Chávez, I.; Montemayor-Aldrete, J. A.

    2017-04-01

    Forbes Magazine published its list of the world's two thousand leading or strongest publicly traded companies (G-2000), based on four independent metrics: sales or revenues, profits, assets and market value. Each of these wealth metrics yields particular information on the corporate size or wealth of each firm. The G-2000 cumulative probability wealth distribution per employee (per capita) for all four metrics exhibits a two-class structure: quasi-exponential in the lower part, and a Pareto power law in the higher part. These two-class per capita distributions are qualitatively similar to income and wealth distributions in many countries of the world, but the fraction of firms per employee within the high-class Pareto zone is about 49% in sales per employee, and 33% after averaging over the four metrics, whereas in countries the fraction of rich agents in the Pareto zone is less than 10%. The quasi-exponential zone can be fitted by Gamma or Log-normal distributions. On the other hand, Forbes classifies the G-2000 firms into 82 different industries or economic activities. Within each industry, the wealth distribution per employee also follows a two-class structure, but when the aggregate wealth of firms in each industry for the four metrics is divided by the total number of employees in that industry, then the 82 points of the aggregate wealth distribution by industry per employee can be well fitted by quasi-exponential curves for the four metrics.

  7. Ultra-large distance modification of gravity from Lorentz symmetry breaking at the Planck scale

    NASA Astrophysics Data System (ADS)

    Gorbunov, Dmitry S.; Sibiryakov, Sergei M.

    2005-09-01

    We present an extension of the Randall-Sundrum model in which, due to spontaneous Lorentz symmetry breaking, the graviton mixes with bulk vector fields and becomes quasilocalized. The masses of the KK modes comprising the four-dimensional graviton are naturally exponentially small. This allows one to push the Lorentz-breaking scale as high as a few tenths of the Planck mass. The model does not contain ghosts or tachyons and does not exhibit the van Dam-Veltman-Zakharov discontinuity. The gravitational attraction between static point masses becomes gradually weaker with increasing separation and is replaced by repulsion (antigravity) at exponentially large distances.

  8. Multi-exponential analysis of magnitude MR images using a quantitative multispectral edge-preserving filter.

    PubMed

    Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre

    2003-03-01

    A solution for discrete multi-exponential analysis of T(2) relaxation decay curves obtained under current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a non-linear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.

  9. Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.

    PubMed

    Thiébaut, Anne C M; Bénichou, Jacques

    2004-12-30

    Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data. 2004 John Wiley & Sons, Ltd.

  10. First off-time treatment prostate-specific antigen kinetics predicts survival in intermittent androgen deprivation for prostate cancer.

    PubMed

    Sanchez-Salas, Rafael; Olivier, Fabien; Prapotnich, Dominique; Dancausa, José; Fhima, Mehdi; David, Stéphane; Secin, Fernando P; Ingels, Alexandre; Barret, Eric; Galiano, Marc; Rozet, François; Cathelineau, Xavier

    2016-01-01

    Prostate-specific antigen (PSA) doubling time relies on an exponential kinetic pattern. This pattern has never been validated in the setting of intermittent androgen deprivation (IAD). The objective is to analyze the prognostic significance for PCa of recurrent patterns in PSA kinetics in patients undergoing IAD. A retrospective study was conducted on 377 patients treated with IAD. The on-treatment period (ONTP) consisted of gonadotropin-releasing hormone agonist injections combined with an oral androgen receptor antagonist. The off-treatment period (OFTP) began when PSA was lower than 4 ng/ml. The ONTP resumed when PSA was higher than 20 ng/ml. PSA values of each OFTP were fitted with three basic patterns: exponential (PSA(t) = λ·e^(αt)), linear (PSA(t) = a·t), and power law (PSA(t) = a·t^c). Univariate and multivariate Cox regression analyses assessed predictive factors for oncologic outcomes. Only 45% of the analyzed OFTPs were exponential. Linear and power-law PSA kinetics represented 7.5% and 7.7%, respectively. The remaining fraction of analyzed OFTPs (40%) exhibited complex kinetics. Exponential PSA kinetics during the first OFTP was significantly associated with worse oncologic outcome. The estimated 10-year cancer-specific survival (CSS) was 46% for exponential versus 80% for nonexponential PSA kinetics patterns. The corresponding 10-year probability of castration-resistant prostate cancer (CRPC) was 69% and 31% for the two patterns, respectively. Limitations include the retrospective design and mixed indications for IAD. PSA kinetics fitted an exponential pattern in approximately half of the OFTPs. First-OFTP exponential PSA kinetics was associated with a shorter time to CRPC and worse CSS. © 2015 Wiley Periodicals, Inc.
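
    Fitting the three candidate kinetic patterns to an off-treatment PSA series can be sketched with a standard least-squares routine. The PSA values below are invented for illustration, and the residual sum of squares is a simplified stand-in for the authors' model-selection procedure.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical off-treatment PSA measurements: time (months) and PSA (ng/ml).
        t = np.array([1.0, 3.0, 6.0, 9.0, 12.0, 15.0])
        psa = np.array([0.8, 1.3, 2.4, 4.6, 8.5, 16.0])

        exponential = lambda t, lam, alpha: lam * np.exp(alpha * t)
        linear = lambda t, a: a * t
        power_law = lambda t, a, c: a * t ** c

        for name, f, p0 in [("exponential", exponential, (0.5, 0.2)),
                            ("linear", linear, (1.0,)),
                            ("power law", power_law, (0.5, 1.5))]:
            params, _ = curve_fit(f, t, psa, p0=p0, maxfev=10000)
            rss = np.sum((psa - f(t, *params)) ** 2)
            print(f"{name}: parameters {np.round(params, 3)}, RSS {rss:.2f}")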

  11. Similarity Solutions on Mixed Convection Heat Transfer from a Horizontal Surface Saturated in a Porous Medium with Internal Heat Generation

    NASA Astrophysics Data System (ADS)

    Ferdows, M.; Liu, D.

    2017-02-01

    The aim of this work is to study the mixed convection boundary layer flow from a horizontal surface embedded in a porous medium with exponential decaying internal heat generation (IHG). Boundary layer equations are reduced to two ordinary differential equations for the dimensionless stream function and temperature with two parameters: ɛ, the mixed convection parameter, and λ, the exponent of x. This problem is numerically solved with a system of parameters using built-in codes in Maple. The influences of these parameters on velocity and temperature profiles, and the Nusselt number, are thoroughly compared and discussed.

  12. Evaluation of Mean and Variance Integrals without Integration

    ERIC Educational Resources Information Center

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since they involve integration by parts, many students do not feel comfortable with them. In this note, a technique is demonstrated for deriving mean and variance through differential…

  13. Multivariate generalized hidden Markov regression models with random covariates: Physical exercise in an elderly population.

    PubMed

    Punzo, Antonio; Ingrassia, Salvatore; Maruotti, Antonello

    2018-04-22

    A time-varying latent variable model is proposed to jointly analyze multivariate mixed-support longitudinal data. The proposal can be viewed as an extension of hidden Markov regression models with fixed covariates (HMRMFCs), which are the state of the art for modelling longitudinal data, with a special focus on the underlying clustering structure. HMRMFCs are inadequate for applications in which a clustering structure can be identified in the distribution of the covariates, as their clustering is independent of the covariate distribution. Here, hidden Markov regression models with random covariates are introduced by explicitly specifying state-specific distributions for the covariates, with the aim of improving the recovery of the clusters in the data with respect to the fixed-covariates paradigm. The class of hidden Markov regression models with random covariates is defined with a focus on the exponential family, in a generalized linear model framework. Model identifiability conditions are sketched, an expectation-maximization algorithm is outlined for parameter estimation, and various implementation and operational issues are discussed. Properties of the estimators of the regression coefficients, as well as of the hidden path parameters, are evaluated through simulation experiments and compared with those of HMRMFCs. The method is applied to physical activity data. Copyright © 2018 John Wiley & Sons, Ltd.

  14. Intra-Individual Response Variability Assessed by Ex-Gaussian Analysis may be a New Endophenotype for Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Henríquez-Henríquez, Marcela Patricia; Billeke, Pablo; Henríquez, Hugo; Zamorano, Francisco Javier; Rothhammer, Francisco; Aboitiz, Francisco

    2014-01-01

    Intra-individual variability of response times (RTisv) is considered a potential endophenotype for attention-deficit/hyperactivity disorder (ADHD). Traditional methods for estimating RTisv lose information regarding the response time (RT) distribution along the task, with eventual effects on statistical power. Ex-Gaussian analysis captures the dynamic nature of RTisv, estimating normal and exponential components of the RT distribution with specific phenomenological correlates. Here, we applied ex-Gaussian analysis to explore whether intra-individual variability of RTs meets the criteria proposed by Gottesman and Gould for endophenotypes. Specifically, we evaluated whether the normal and/or exponential components of RTs may (a) present the stair-like distribution expected for endophenotypes (ADHD > siblings > typically developing children (TD) without familial history of ADHD) and (b) represent a phenotypic correlate of previously described genetic risk variants. This is a pilot study including 55 subjects (20 ADHD-discordant sibling pairs and 15 TD children), all aged between 8 and 13 years. Participants performed a visual Go/Nogo task with 10% Nogo probability. Ex-Gaussian distributions were fitted to individual RT data and compared among the three samples. In order to test whether intra-individual variability may represent a correlate of previously described genetic risk variants, VNTRs at DRD4 and SLC6A3 were identified in all sibling pairs following standard protocols. Groups were compared by adjusting independent general linear models for the exponential and normal components from the ex-Gaussian analysis. Identified trends were confirmed by the non-parametric Jonckheere-Terpstra test. Stair-like distributions were observed for μ (p = 0.036) and σ (p = 0.009). An additional "DRD4-genotype" × "clinical status" interaction was present for τ (p = 0.014), reflecting a possible severity factor. Thus, normal and exponential RTisv components are suitable as ADHD endophenotypes.
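
    An ex-Gaussian fit of reaction times can be sketched with scipy's exponentially modified normal distribution (exponnorm), whose shape parameter K maps to the ex-Gaussian parameters through τ = K·σ. The reaction-time data below are simulated, not the study's.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        # Synthetic reaction times (ms): Gaussian component N(mu, sigma) plus an exponential tail tau.
        mu, sigma, tau = 420.0, 45.0, 120.0
        rt = rng.normal(mu, sigma, 800) + rng.exponential(tau, 800)

        K, loc, scale = stats.exponnorm.fit(rt)
        mu_hat, sigma_hat, tau_hat = loc, scale, K * scale   # map (K, loc, scale) back to (mu, sigma, tau)
        print(f"mu ~ {mu_hat:.0f} ms, sigma ~ {sigma_hat:.0f} ms, tau ~ {tau_hat:.0f} ms")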

  15. Weighted Scaling in Non-growth Random Networks

    NASA Astrophysics Data System (ADS)

    Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li

    2012-09-01

    We propose a weighted model to explain the self-organizing formation of the scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weight of all single-edges within it and the strength of a vertex as the sum of weights of the multiple-edges attached to it. The network evolves according to a vertex-strength preferential selection mechanism. During the evolution process, the network always keeps its total number of vertices and its total number of single-edges constant. We show analytically and numerically that a network will form steady scale-free distributions with our model. The results show that a weighted non-growth random network can evolve into a scale-free state. Interestingly, the network also acquires an exponential edge-weight distribution; namely, the coexistence of a scale-free distribution and an exponential distribution emerges.

  16. Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, Chen; Sichitiu, Mihail L.

    Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous research on the theoretical analysis of the contact time distribution for random walk models (RW) assumes that contact events can be modeled as either consecutive random walks or direct traversals, which are two extreme cases of random walk, and thus reaches two different conclusions. In this paper we conduct comprehensive research on this topic in the hope of bridging the gap between the two extremes. The conclusions from the two extreme cases result in a power-law or exponential tail in the contact time distribution, respectively. However, we show that the actual distribution varies between the two extremes: a power-law-sub-exponential dichotomy, whose transition point depends on the average flight duration. Through simulation results we show that this conclusion also applies to random waypoint.

  17. Turbulent particle transport in streams: can exponential settling be reconciled with fluid mechanics?

    PubMed

    McNair, James N; Newbold, J Denis

    2012-05-07

    Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances. Copyright © 2012 Elsevier Ltd. All rights reserved.
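
    For reference, the ESM prediction itself is a one-line computation: the fraction of released particles still in suspension after travel distance x is exp(-x/L_s). The settling-length parameterization L_s = u·h/v_s used below is a common assumption rather than necessarily the authors' choice, and the numbers are illustrative.

        import numpy as np

        # Exponential Settling Model (ESM): fraction still suspended after distance x.
        u, h, v_s = 0.3, 0.4, 0.001   # mean velocity (m/s), depth (m), settling velocity (m/s)
        L_s = u * h / v_s             # assumed settling length (m)

        x = np.array([10.0, 50.0, 100.0, 500.0])
        print("L_s =", L_s, "m; fraction still suspended:", np.exp(-x / L_s))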

  18. Probability distributions of bed load particle velocities, accelerations, hop distances, and travel times informed by Jaynes's principle of maximum entropy

    USGS Publications Warehouse

    Furbish, David; Schmeeckle, Mark; Schumer, Rina; Fathel, Siobhan

    2016-01-01

    We describe the most likely forms of the probability distributions of bed load particle velocities, accelerations, hop distances, and travel times, in a manner that formally appeals to inferential statistics while honoring mechanical and kinematic constraints imposed by equilibrium transport conditions. The analysis is based on E. Jaynes's elaboration of the implications of the similarity between the Gibbs entropy in statistical mechanics and the Shannon entropy in information theory. By maximizing the information entropy of a distribution subject to known constraints on its moments, our choice of the form of the distribution is unbiased. The analysis suggests that particle velocities and travel times are exponentially distributed and that particle accelerations follow a Laplace distribution with zero mean. Particle hop distances, viewed alone, ought to be distributed exponentially. However, the covariance between hop distances and travel times precludes this result. Instead, the covariance structure suggests that hop distances follow a Weibull distribution. These distributions are consistent with high-resolution measurements obtained from high-speed imaging of bed load particle motions. The analysis brings us closer to choosing distributions based on our mechanical insight.
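
    The maximum-entropy argument can be checked numerically: among nonnegative distributions constrained to a common mean, the exponential has the largest differential entropy. The sketch below compares a few scipy distributions with matched means; the alternatives are arbitrary choices for illustration.

        import numpy as np
        from scipy import stats
        from scipy.special import gamma as gamma_fn

        mean = 2.0   # common mean for all candidate travel-time distributions (arbitrary units)

        candidates = {
            "exponential": stats.expon(scale=mean),
            "gamma(k=2)": stats.gamma(a=2.0, scale=mean / 2.0),
            "gamma(k=0.5)": stats.gamma(a=0.5, scale=mean / 0.5),
            "weibull(c=2)": stats.weibull_min(c=2.0, scale=mean / gamma_fn(1.5)),
        }

        for name, dist in candidates.items():
            print(f"{name:12s} mean {float(dist.mean()):.3f}  differential entropy {float(dist.entropy()):.3f}")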

  19. Time Correlations in Mode Hopping of Coupled Oscillators

    NASA Astrophysics Data System (ADS)

    Heltberg, Mathias L.; Krishna, Sandeep; Jensen, Mogens H.

    2017-05-01

    We study the dynamics in a system of coupled oscillators when Arnold tongues overlap. By varying the initial conditions, the deterministic system can be attracted to different limit cycles. Adding noise, the mode hopping between different states becomes a dominant part of the dynamics. We simplify the system through a Poincaré section and derive a 1D model to describe the dynamics. We explain that for some parameter values of the external oscillator, the time distribution of occupancy in a state is exponential and thus memoryless. In the general case, on the other hand, it is a sum of exponential distributions, characteristic of a system with time correlations.

  20. Exponential Stability of Almost Periodic Solutions for Memristor-Based Neural Networks with Distributed Leakage Delays.

    PubMed

    Xu, Changjin; Li, Peiluan; Pang, Yicheng

    2016-12-01

    In this letter, we deal with a class of memristor-based neural networks with distributed leakage delays. By applying a new Lyapunov function method, we obtain some sufficient conditions that ensure the existence, uniqueness, and global exponential stability of almost periodic solutions of neural networks. We apply the results of this solution to prove the existence and stability of periodic solutions for this delayed neural network with periodic coefficients. We then provide an example to illustrate the effectiveness of the theoretical results. Our results are completely new and complement the previous studies Chen, Zeng, and Jiang ( 2014 ) and Jiang, Zeng, and Chen ( 2015 ).

  1. Characterization of x-ray framing cameras for the National Ignition Facility using single photon pulse height analysis.

    PubMed

    Holder, J P; Benedetti, L R; Bradley, D K

    2016-11-01

    Single hit pulse height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard-DC calibration methods follow the expected exponential distribution at the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.

  2. The diffusion of a Ga atom on GaAs(001)β2(2 × 4): Local superbasin kinetic Monte Carlo

    NASA Astrophysics Data System (ADS)

    Lin, Yangzheng; Fichthorn, Kristen A.

    2017-10-01

    We use first-principles density-functional theory to characterize the binding sites and diffusion mechanisms for a Ga adatom on the GaAs(001)β 2(2 × 4) surface. Diffusion in this system is a complex process involving eleven unique binding sites and sixteen different hops between neighboring binding sites. Among the binding sites, we can identify four different superbasins such that the motion between binding sites within a superbasin is much faster than hops exiting the superbasin. To describe diffusion, we use a recently developed local superbasin kinetic Monte Carlo (LSKMC) method, which accelerates a conventional kinetic Monte Carlo (KMC) simulation by describing the superbasins as absorbing Markov chains. We find that LSKMC is up to 4300 times faster than KMC for the conditions probed in this study. We characterize the distribution of exit times from the superbasins, find that these are sometimes, but not always, exponential, and identify the conditions under which the superbasin exit-time distribution should be exponential. We demonstrate that LSKMC simulations assuming an exponential superbasin exit-time distribution yield the same diffusion coefficients as conventional KMC.
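
    The absorbing-Markov-chain bookkeeping behind a superbasin treatment can be sketched with a toy rate matrix (the rates below are invented, not the GaAs values): restricting the generator to the in-basin states and solving -Q tau = 1 gives the mean exit time from each starting site.

        import numpy as np

        # Hypothetical hop rates (s^-1) among three binding sites inside one superbasin.
        k = np.array([[0.0, 5e8, 1e8],
                      [4e8, 0.0, 2e8],
                      [1e8, 3e8, 0.0]])      # k[i, j]: hop rate from site i to site j
        k_exit = np.array([1e3, 5e2, 2e3])    # much slower rates out of the superbasin

        # Generator restricted to the transient (in-basin) states: off-diagonal hop rates,
        # diagonal = -(total rate out of each state, including the exit channels).
        Q = k.copy()
        np.fill_diagonal(Q, -(k.sum(axis=1) + k_exit))

        # Mean exit times satisfy Q tau = -1 (first-passage relation for absorbing chains).
        tau = np.linalg.solve(-Q, np.ones(3))
        print("mean superbasin exit times (s):", tau)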

  3. Average BER of subcarrier intensity modulated free space optical systems over the exponentiated Weibull fading channels.

    PubMed

    Wang, Ping; Zhang, Lu; Guo, Lixin; Huang, Feng; Shang, Tao; Wang, Ranran; Yang, Yintang

    2014-08-25

    The average bit error rate (BER) for binary phase-shift keying (BPSK) modulation in free-space optical (FSO) links over turbulence atmosphere modeled by the exponentiated Weibull (EW) distribution is investigated in detail. The effects of aperture averaging on the average BERs for BPSK modulation under weak-to-strong turbulence conditions are studied. The average BERs of EW distribution are compared with Lognormal (LN) and Gamma-Gamma (GG) distributions in weak and strong turbulence atmosphere, respectively. The outage probability is also obtained for different turbulence strengths and receiver aperture sizes. The analytical results deduced by the generalized Gauss-Laguerre quadrature rule are verified by the Monte Carlo simulation. This work is helpful for the design of receivers for FSO communication systems.

  4. Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Thayer, Dorothy T.

    2000-01-01

    Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…

  5. Microreactor-based mixing strategy suppresses product inhibition to enhance sugar yields in enzymatic hydrolysis for cellulosic biofuel production.

    PubMed

    Chakraborty, Saikat; Singh, Prasun Kumar; Paramashetti, Pawan

    2017-08-01

    A novel microreactor-based, energy-efficient process of using complete convective mixing in a macroreactor until an optimal mixing time, followed by no mixing in 200-400 μl microreactors, enhances glucose and reducing sugar yields by up to 35% and 29%, respectively, while saving 72-90% of the energy incurred on reactor mixing in the enzymatic hydrolysis of cellulose. Empirical exponential relations are provided for determining the optimal mixing time, during which convective mixing in the macroreactor promotes mass transport of the cellulase enzyme to the solid Avicel substrate, while the latter phase of no mixing in the microreactor suppresses product inhibition by preventing the inhibitors (glucose and cellobiose) from homogenizing across the reactor. Sugar yield increases linearly with the liquid-to-solid height ratio (r_h), irrespective of substrate loading and microreactor size, since a large r_h allows the inhibitors to diffuse in the liquid away from the solids, thus reducing product inhibition. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Analysis of domestic refrigerator temperatures and home storage time distributions for shelf-life studies and food safety risk assessment.

    PubMed

    Roccato, Anna; Uyttendaele, Mieke; Membré, Jeanne-Marie

    2017-06-01

    In the framework of food safety, when mimicking the consumer phase, the storage time and temperature used are mainly treated as single-point estimates instead of probability distributions. This single-point approach does not take into account the variability within a population and could lead to an overestimation of the parameters. Therefore, the aim of this study was to analyse data on domestic refrigerator temperatures and storage times of chilled food in European countries in order to draw general rules which could be used either in shelf-life testing or in risk assessment. In relation to domestic refrigerator temperatures, 15 studies provided pertinent data. Twelve studies presented normal distributions, according to the authors or from the data fitted into distributions. Analysis of the temperature distributions revealed that the countries separated into two groups: northern European countries and southern European countries. The overall variability of European domestic refrigerators is described by a normal distribution: N(7.0, 2.7) °C for the southern countries and N(6.1, 2.8) °C for the northern countries. Concerning storage times, seven papers were pertinent. Analysis indicated that storage was likely to end in the first days or weeks (depending on the product use-by date) after purchase. Data fitting showed the exponential distribution was the most appropriate distribution to describe the time that food spends at the consumer's place. The storage time was described by an exponential distribution with mean corresponding to the use-by-date period divided by 4. In conclusion, knowing that collecting data is time- and money-consuming, in the absence of data, and at least for the European market and for refrigerated products, building a domestic refrigerator temperature distribution using a Normal law and a time-to-consumption distribution using an Exponential law would be appropriate. Copyright © 2017 Elsevier Ltd. All rights reserved.
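
    The suggested defaults translate directly into a consumer-phase Monte Carlo. The use-by period below is a hypothetical example, while the Normal(7.0, 2.7) °C temperature law and the Exponential storage-time law with mean use-by/4 follow the recommendation above.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 100_000
        use_by_days = 21.0   # hypothetical use-by period of the product

        # Consumer-phase storage conditions sampled from the recommended distributions.
        temp_C = rng.normal(7.0, 2.7, n)                 # southern-Europe refrigerators
        time_d = rng.exponential(use_by_days / 4.0, n)   # storage time at home

        print(f"P(T > 10 degC) ~ {(temp_C > 10).mean():.2f}")
        print(f"P(storage beyond use-by date) ~ {(time_d > use_by_days).mean():.3f}"
              f"  (exact exp(-4) ~ {np.exp(-4):.3f})")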

  7. Evaluation of ambient dose equivalent rates influenced by vertical and horizontal distribution of radioactive cesium in soil in Fukushima Prefecture.

    PubMed

    Malins, Alex; Kurikami, Hiroshi; Nakama, Shigeo; Saito, Tatsuo; Okumura, Masahiko; Machida, Masahiko; Kitamura, Akihiro

    2016-01-01

    The air dose rate in an environment contaminated with (134)Cs and (137)Cs depends on the amount, depth profile and horizontal distribution of these contaminants within the ground. This paper introduces and verifies a tool that models these variables and calculates ambient dose equivalent rates at 1 m above the ground. Good correlation is found between predicted dose rates and dose rates measured with survey meters in Fukushima Prefecture in areas contaminated with radiocesium from the Fukushima Dai-ichi Nuclear Power Plant accident. This finding is insensitive to the choice for modeling the activity depth distribution in the ground using activity measurements of collected soil layers, or by using exponential and hyperbolic secant fits to the measurement data. Better predictions are obtained by modeling the horizontal distribution of radioactive cesium across an area if multiple soil samples are available, as opposed to assuming a spatially homogeneous contamination distribution. Reductions seen in air dose rates above flat, undisturbed fields in Fukushima Prefecture are consistent with decrement by radioactive decay and downward migration of cesium into soil. Analysis of remediation strategies for farmland soils confirmed that topsoil removal and interchanging a topsoil layer with a subsoil layer result in similar reductions in the air dose rate. These two strategies are more effective than reverse tillage to invert and mix the topsoil. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Determination of the functioning parameters in asymmetrical flow field-flow fractionation with an exponential channel.

    PubMed

    Déjardin, P

    2013-08-30

    The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Resource acquisition, distribution and end-use efficiencies and the growth of industrial society

    NASA Astrophysics Data System (ADS)

    Jarvis, A.; Jarvis, S.; Hewitt, N.

    2015-01-01

    A key feature of the growth of industrial society is the acquisition of increasing quantities of resources from the environment and their distribution for end use. With respect to energy, growth has been near exponential for the last 160 years. We attempt to show that the global distribution of resources that underpins this growth may be facilitated by the continual development and expansion of near-optimal directed networks. If so, the distribution efficiencies of these networks must decline as they expand, due to path lengths becoming longer and more tortuous. To maintain long-term exponential growth, the physical limits placed on the distribution networks appear to be counteracted by innovations deployed elsewhere in the system: namely at the points of acquisition and end use. We postulate that the maintenance of growth at the specific rate of ~2.4% yr^-1 stems from an implicit desire to optimise patterns of energy use over human working lifetimes.

  10. Plume characteristics of MPD thrusters: A preliminary examination

    NASA Technical Reports Server (NTRS)

    Myers, Roger M.

    1989-01-01

    A diagnostics facility for MPD thruster plume measurements was built and is currently undergoing testing. The facility includes electrostatic probes for electron temperature and density measurements, Hall probes for magnetic field and current distribution mapping, and an imaging system to establish the global distribution of plasma species. Preliminary results for MPD thrusters operated at power levels between 30 and 60 kW with solenoidal applied magnetic fields show that the electron density decreases exponentially from 1x10(2) to 2x10(18)/cu m over the first 30 cm of the expansion, while the electron temperature distribution is relatively uniform, decreasing from approximately 2.5 eV to 1.5 eV over the same distance. The radiant intensity of the Ar II 4879 Å line emission also decays exponentially. Current distribution measurements indicate that a significant fraction of the discharge current is blown into the plume region, and that its distribution depends on the magnitudes of both the discharge current and the applied magnetic field.

  11. Optical absorption, TL and IRSL of basic plagioclase megacrysts from the pinacate (Sonora, Mexico) quaternary alkalic volcanics.

    PubMed

    Chernov, V; Paz-Moreno, F; Piters, T M; Barboza-Flores, M

    2006-01-01

    The paper presents the first results of an investigation on the optical absorption (OA) and the thermally and infrared stimulated luminescence (TL and IRSL) of the Pinacate plagioclase (labradorite). The OA spectra reveal two bands with maxima at 1.0 and 3.2 eV, associated with absorption by Fe3+ and Fe2+, as well as IR absorption at wavelengths longer than 2700 nm. The ultraviolet absorption varies exponentially with photon energy, following the 'vitreous' empirical Urbach rule and indicating an exponential distribution of localised states in the forbidden band. The natural TL peaks at 700 K. Laboratory beta irradiation creates a very broad TL peak with a maximum at 430 K. The change of the 430 K TL peak shape under the thermal cleaning procedure and during dark storage after irradiation reveals a monotonic increase of the activation energy, which can be explained by the exponential distribution of traps. The IRSL response is weak and exhibits a typical decay behaviour.
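
    For reference, the empirical Urbach rule invoked above is conventionally written as an exponential dependence of the absorption coefficient on photon energy (generic notation, not taken from the paper):

```latex
% Urbach rule: exponential tail of the absorption edge
\alpha(E) \;=\; \alpha_0 \exp\!\left(\frac{E - E_0}{E_U}\right)
```

    Here α0 and E0 are material constants and the Urbach energy E_U measures the width of the exponential tail of localised states.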

  12. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
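
    To illustrate the mixture idea itself (not the urn model of the paper), a two-component exponential mixture has survival function S(t) = p·exp(-λ1 t) + (1-p)·exp(-λ2 t); the sketch below, with hypothetical parameter values, samples from such a mixture and checks its survival function empirically.

```python
# Sketch: sample from a two-component exponential mixture and evaluate
# its survival function S(t) = p*exp(-l1*t) + (1-p)*exp(-l2*t).
import numpy as np

rng = np.random.default_rng(0)
p, lam1, lam2 = 0.3, 2.0, 0.2         # hypothetical mixture weight and rates

def sample(n):
    comp = rng.random(n) < p          # choose a component for each draw
    return np.where(comp, rng.exponential(1 / lam1, n), rng.exponential(1 / lam2, n))

def survival(t):
    return p * np.exp(-lam1 * t) + (1 - p) * np.exp(-lam2 * t)

x = sample(100_000)
t = 3.0
print("empirical S(3):", (x > t).mean(), " model S(3):", survival(t))
```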

  13. Determination of bulk and interface density of states in metal oxide semiconductor thin-film transistors by using capacitance-voltage characteristics

    NASA Astrophysics Data System (ADS)

    Wei, Xixiong; Deng, Wanling; Fang, Jielin; Ma, Xiaoyu; Huang, Junkai

    2017-10-01

    A straightforward, physics-based extraction technique for the interface and bulk density of states in metal oxide semiconductor thin-film transistors (TFTs) is proposed, based on capacitance-voltage (C-V) characteristics. The interface trap density distribution with energy is extracted from analysis of the C-V characteristics, and the bulk trap density is then determined from the obtained interface state distribution. With this method, the interface trap density is found to consist of a deep-state density that is approximately constant near mid-gap and a tail-state density that increases exponentially with energy, while the bulk trap density is a superposition of exponential deep states and exponential tail states. The validity of the extraction is verified by comparison with measured current-voltage (I-V) characteristics and with simulation results from a technology computer-aided design (TCAD) model. The extraction method requires no numerical iteration and is simple, fast and accurate, which makes it very useful for TFT device characterization.
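
    A common generic parameterization of such a density of states, given here only as context and not as the authors' exact expression, superposes exponential deep and tail components:

```latex
% Generic superposition of exponential deep and tail states
g(E) \;=\; N_{d}\,\exp\!\left(-\frac{E_C - E}{kT_{d}}\right)
      \;+\; N_{t}\,\exp\!\left(-\frac{E_C - E}{kT_{t}}\right)
```

    where E_C is the conduction-band edge, N_d and N_t are the band-edge densities, and kT_d and kT_t are the characteristic energies of the deep and tail distributions.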

  14. Distinguishing response conflict and task conflict in the Stroop task: evidence from ex-Gaussian distribution analysis.

    PubMed

    Steinhauser, Marco; Hübner, Ronald

    2009-10-01

    It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were conducted in which manual versions of a standard Stroop task (Experiment 1) and a separated Stroop task (Experiment 2) were performed under task-switching conditions. Effects of response congruency and stimulus bivalency were used to measure response conflict and task conflict, respectively. Ex-Gaussian analysis revealed that response conflict was mainly observed in the Gaussian component, whereas task conflict was stronger in the exponential component. Moreover, task conflict in the exponential component was selectively enhanced under task-switching conditions. The results suggest that ex-Gaussian analysis can be used as a tool to isolate different conflict types in the Stroop task. PsycINFO Database Record (c) 2009 APA, all rights reserved.
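
    To make the decomposition concrete, the sketch below fits an ex-Gaussian (a Gaussian convolved with an exponential) to simulated response times using SciPy's exponnorm distribution; the response-time values and parameters are simulated, not data from the study.

```python
# Sketch: fit an ex-Gaussian (Gaussian + exponential convolution) to RTs.
# scipy.stats.exponnorm uses shape K = tau / sigma, loc = mu, scale = sigma.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma, tau = 0.45, 0.05, 0.15                    # seconds (simulated)
rt = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

K, loc, scale = stats.exponnorm.fit(rt)
print(f"mu ~ {loc:.3f} s, sigma ~ {scale:.3f} s, tau ~ {K * scale:.3f} s")
```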

  15. Estimating Age Distributions of Base Flow in Watersheds Underlain by Single and Dual Porosity Formations Using Groundwater Transport Simulation and Weighted Weibull Functions

    NASA Astrophysics Data System (ADS)

    Sanford, W. E.

    2015-12-01

    Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.
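
    As a simple illustration of why the extra Weibull shape parameter can matter, the sketch below fits both an exponential and a two-parameter Weibull distribution to synthetic age samples and compares them by AIC; it is not the USGS workflow, and all values are made up.

```python
# Sketch: compare exponential vs. Weibull fits to synthetic groundwater ages.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
ages = rng.weibull(1.6, 2000) * 40.0        # synthetic ages (years), shape != 1

def aic(loglik, k):
    return 2 * k - 2 * loglik

lam = 1.0 / ages.mean()                     # MLE of the exponential rate
aic_exp = aic(np.sum(stats.expon.logpdf(ages, scale=1 / lam)), k=1)

c, loc, scale = stats.weibull_min.fit(ages, floc=0)
aic_wei = aic(np.sum(stats.weibull_min.logpdf(ages, c, loc, scale)), k=2)

print(f"AIC exponential: {aic_exp:.1f}, AIC Weibull: {aic_wei:.1f}")
```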

  16. Exploiting the Adaptation Dynamics to Predict the Distribution of Beneficial Fitness Effects

    PubMed Central

    2016-01-01

    Adaptation of asexual populations is driven by beneficial mutations, and the dynamics of this process therefore depend, among other factors, on the distribution of beneficial fitness effects. It is known that on uncorrelated fitness landscapes, this distribution can only be of three types: truncated, exponential and power law. We performed extensive stochastic simulations to study the adaptation dynamics on rugged fitness landscapes and identified two quantities that can be used to distinguish the underlying distribution of beneficial fitness effects. The first quantity studied here is the fitness difference between successive mutations that spread in the population, which decreases for truncated distributions, remains nearly constant for exponentially decaying distributions and increases when the fitness distribution decays as a power law. The second quantity of interest, namely the rate of change of fitness with time, also shows quantitatively different behaviour for different beneficial fitness distributions. The patterns displayed by the two aforementioned quantities hold for both low and high mutation rates. We discuss how these patterns can be exploited to determine the distribution of beneficial fitness effects in microbial experiments. PMID:26990188

  17. A non-Gaussian option pricing model based on Kaniadakis exponential deformation

    NASA Astrophysics Data System (ADS)

    Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara

    2017-09-01

    A way to make financial models effective is to let them represent the so-called "fat tails", i.e., extreme changes in stock prices that the standard Gaussian distribution treats as essentially impossible. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes that is capable of capturing such real market phenomena.
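
    For reference, the Kaniadakis deformation of the exponential referred to above is usually defined as follows (standard definition, not reproduced from the article):

```latex
\exp_{\kappa}(x) \;=\; \left(\sqrt{1+\kappa^{2}x^{2}} \;+\; \kappa x\right)^{1/\kappa},
\qquad 0 < \kappa < 1
```

    It reduces to the ordinary exponential as κ → 0 and has power-law rather than exponential tails for large |x|, which is what allows the model to capture fat tails.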

  18. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    PubMed

    Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T

    2010-03-10

    Organisms at scales ranging from unicellular to mammals are known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) from multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the non-uniform turn-angle distribution of move step-lengths within a flight and the presence of two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model, rather than a simple persistent random walk, correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.

  19. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  20. Exponentially varying viscosity of magnetohydrodynamic mixed convection Eyring-Powell nanofluid flow over an inclined surface

    NASA Astrophysics Data System (ADS)

    Khan, Imad; Fatima, Sumreen; Malik, M. Y.; Salahuddin, T.

    2018-03-01

    This paper presents a theoretical study of the steady, incompressible, two-dimensional MHD boundary layer flow of an Eyring-Powell nanofluid over an inclined surface. The fluid is considered to be electrically conducting, and its viscosity is assumed to vary exponentially. The governing partial differential equations (PDEs) are reduced to ordinary differential equations (ODEs) by applying a similarity approach. The resulting ordinary differential equations are solved using the homotopy analysis method. The impact of the pertinent parameters on the velocity, concentration and temperature profiles is examined through graphs and tables. The skin friction coefficient and the Sherwood and Nusselt numbers are also illustrated in tabular and graphical form.

  1. Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width.

    PubMed

    De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher

    2015-12-01

    Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers.

  2. Estimating regional centile curves from mixed data sources and countries.

    PubMed

    van Buuren, Stef; Hayes, Daniel J; Stasinopoulos, D Mikis; Rigby, Robert A; ter Kuile, Feiko O; Terlouw, Dianne J

    2009-10-15

    Regional or national growth distributions can provide vital information on the health status of populations. In most resource-poor countries, however, the required anthropometric data from purpose-designed growth surveys are not readily available. We propose a practical method for estimating regional (multi-country) age-conditional weight distributions based on existing survey data from different countries. We developed a two-step method by which one is able to model data with widely different age ranges and sample sizes. The method produces references both at the country level and at the regional (multi-country) level. The first step models country-specific centile curves with Box-Cox t and Box-Cox power exponential distributions, implemented in a generalized additive model for location, scale and shape, through a common model. Individual countries may vary in location and spread. The second step defines the regional reference from a finite mixture of the country distributions, weighted by population size. To demonstrate the method we fitted the weight-for-age distribution of 12 countries in South East Asia and the Western Pacific, based on 273 270 observations. We modeled both the raw body weight and the corresponding Z score, and obtained a good fit between the final models and the original data for both solutions. We briefly discuss an application of the generated regional references to obtain appropriate, region-specific, age-based dosing regimens of drugs used in the tropics. The method is an affordable and efficient strategy for estimating regional growth distributions where the standard costly alternatives are not an option. Copyright (c) 2009 John Wiley & Sons, Ltd.

  3. In College and in Recovery: Reasons for Joining a Collegiate Recovery Program

    ERIC Educational Resources Information Center

    Laudet, Alexandre B.; Harris, Kitty; Kimball, Thomas; Winters, Ken C.; Moberg, D. Paul

    2016-01-01

    Objective: Collegiate Recovery Programs (CRPs), a campus-based peer support model for students recovering from substance abuse problems, grew exponentially in the past decade, yet remain unexplored. Methods: This mixed-methods study examines students' reasons for CRP enrollment to guide academic institutions and referral sources. Students (N =…

  4. Measurements of exciton diffusion by degenerate four-wave mixing in CdS1-xSex

    NASA Astrophysics Data System (ADS)

    Schwab, H.; Pantke, K.-H.; Hvam, J. M.; Klingshirn, C.

    1992-09-01

    We performed transient-grating experiments to study the diffusion of excitons in CdS1-xSex mixed crystals. The decay of the initially created exciton density grating is well described for t<=1 ns by a stretched-exponential function. For later times this decay changes over to a behavior that is well fitted by a simple exponential function. During resonant excitation of the localized states, we find the diffusion coefficient (D) to be considerably smaller than in the binary compounds CdSe and CdS. At 4.2 K, D is below our experimental resolution which is about 0.025 cm2/s. With increasing lattice temperature (Tlattice) the diffusion coefficient increases. It was therefore possible to prove, in a diffusion experiment, that at Tlattice<=5 K the excitons are localized, while the exciton-phonon interaction leads to a delocalization and thus to the onset of diffusion. It was possible to deduce the diffusion coefficient of the extended excitons as well as the energetic position of the mobility edge.

  5. Transverse mixing of ellipsoidal particles in a rotating drum

    NASA Astrophysics Data System (ADS)

    He, Siyuan; Gan, Jieqing; Pinson, David; Zhou, Zongyan

    2017-06-01

    Rotating drums are widely used in industry for mixing, milling, coating and drying processes. In the past decades, the mixing of granular materials in rotating drums has been extensively investigated, but most studies are based on spherical particles. Particle shape influences the flow behaviour and thus the mixing behaviour, though the shape effect has as yet received limited study. In this work, the discrete element method (DEM) is employed to study the transverse mixing of ellipsoidal particles in a rotating drum. The effects of aspect ratio and rotating speed on mixing quality and mixing rate are investigated. The results show that the mixing index increases exponentially with time for both spheres and ellipsoids. Particles with various aspect ratios are able to reach well-mixed states after sufficient revolutions in the rolling or cascading regime. Ellipsoids show a higher mixing rate when the rotational speed is set between 25 and 40 rpm. The relationship between mixing rate and aspect ratio of ellipsoids is established, demonstrating that particles with aspect ratios of 0.5 and 2.0 achieve the highest mixing rates. Increasing the rotating speed from 15 rpm to 40 rpm does not necessarily increase the mixing rate of spheres, while a monotonic increase is observed for ellipsoids.

  6. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    PubMed

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of lognormal distributions having different variances, may generate a DPLN distribution.
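
    The model-comparison step can be sketched with SciPy's built-in distributions as shown below; the DPLN itself is not available in SciPy and would require a custom density, so only the simpler candidates appear, and the data are synthetic rather than the published knockout data.

```python
# Sketch: rank candidate distributions for a skewed trait by AIC.
# (The DPLN is not included in SciPy and would need a custom density.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
cv = rng.lognormal(mean=-1.0, sigma=0.6, size=5000)   # synthetic variation data

candidates = {
    "lognormal":   stats.lognorm,
    "normal":      stats.norm,
    "exponential": stats.expon,
    "pareto":      stats.pareto,
}

for name, dist in candidates.items():
    params = dist.fit(cv)                             # maximum likelihood fit
    ll = np.sum(dist.logpdf(cv, *params))
    print(f"{name:12s} AIC = {2 * len(params) - 2 * ll:10.1f}")
```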

  7. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    NASA Astrophysics Data System (ADS)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

    The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), a member of the Malvaceae family, of Mexican origin. The TL emission properties of the polymineral fraction in powder form were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves were accurately analysed using computerized glow curve deconvolution (CGCD), assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous, exponential distribution of traps is reported, as is the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a frequency factor, s, that is temperature independent and for s as a function of temperature.

  8. Empirical analysis of individual popularity and activity on an online music service system

    NASA Astrophysics Data System (ADS)

    Hu, Hai-Bo; Han, Ding-Yi

    2008-10-01

    A quantitative understanding of human behaviour provides basic insight into the dynamics of many socio-economic systems. Based on the log data of an online music service system, we investigate the statistical characteristics of individual activity and popularity and find that the distributions of both follow a stretched exponential form, which interpolates between the exponential and power-law distributions. We also study the human dynamics on the online system and find that the distribution of interevent times between two consecutive music listenings has a fat tail. Moreover, as user activity decreases the fat tail becomes more and more irregular, indicating different behaviour patterns for users with diverse activity levels. These results may shed some light on the in-depth understanding of collective behaviours in socio-economic systems.
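
    A stretched exponential (Weibull-type) complementary cumulative distribution of the kind referred to above can be written, in generic notation, as:

```latex
P(X \ge x) \;=\; \exp\!\left[-\left(\frac{x}{x_{0}}\right)^{c}\right], \qquad 0 < c \le 1
```

    with c = 1 recovering the pure exponential and smaller c giving progressively fatter, more power-law-like tails.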

  9. Evaluation of a mixed beam therapy for post-mastectomy breast cancer patients: bolus electron conformal therapy combined with intensity modulated photon radiotherapy and volumetric modulated photon arc therapy.

    PubMed

    Zhang, Rui; Heins, David; Sanders, Mary; Guo, Beibei; Hogstrom, Kenneth

    2018-05-10

    The purpose of this study was to assess the potential benefits and limitations of a mixed beam therapy, which combined bolus electron conformal therapy (BECT) with intensity modulated photon radiotherapy (IMRT) and volumetric modulated photon arc therapy (VMAT), for left-sided post-mastectomy breast cancer patients. Mixed beam treatment plans were produced for nine post-mastectomy radiotherapy (PMRT) patients previously treated at our clinic with VMAT alone. The mixed beam plans consisted of 40 Gy to the chest wall area using BECT, 40 Gy to the supraclavicular area using parallel opposed IMRT, and 10 Gy to the total planning target volume (PTV) by optimizing VMAT on top of the BECT+IMRT dose distribution. The treatment plans were created in a commercial treatment planning system (TPS), and all plans were evaluated based on PTV coverage, dose homogeneity index (DHI), conformity index (CI), dose to organs at risk (OARs), normal tissue complication probability (NTCP), and secondary cancer complication probability (SCCP). The standard VMAT alone planning technique was used as the reference for comparison. Both techniques produced clinically acceptable PMRT plans but with a few significant differences: VMAT showed significantly better CI (0.70 vs. 0.53, p < 0.001) and DHI (0.12 vs. 0.20, p < 0.001) than mixed beam therapy. For normal tissues, mixed beam therapy showed better OAR sparing and significantly reduced NTCP for cardiac mortality (0.23% vs. 0.80%, p = 0.01) and SCCP for the contralateral breast (1.7% vs. 3.1% based on the linear model, and 1.2% vs. 1.9% based on the linear-exponential model, p < 0.001 in both cases), but showed significantly higher mean (50.8 Gy vs. 49.3 Gy, p < 0.001) and maximum skin doses (59.7 Gy vs. 53.3 Gy, p < 0.001) compared with VMAT. Patients with more tissue between the distal PTV surface and the lung (minimum distance approximately > 0.5 cm and volume of tissue between the distal PTV surface and the heart or lung approximately > 250 cm³) may benefit the most from mixed beam therapy. This work has demonstrated that mixed beam therapy (BECT+IMRT : VMAT = 4 : 1) produces clinically acceptable plans having reduced OAR doses and risks of side effects compared with VMAT. Even though VMAT alone produces more homogeneous and conformal dose distributions, mixed beam therapy remains a viable option for treating post-mastectomy patients, possibly leading to reduced normal tissue complications. This article is protected by copyright. All rights reserved.

  10. A partial exponential lumped parameter model to evaluate groundwater age distributions and nitrate trends in long-screened wells

    USGS Publications Warehouse

    Jurgens, Bryant; Böhlke, John Karl; Kauffman, Leon J.; Belitz, Kenneth; Esser, Bradley K.

    2016-01-01

    A partial exponential lumped parameter model (PEM) was derived to determine age distributions and nitrate trends in long-screened production wells. The PEM can simulate age distributions for wells screened over any finite interval of an aquifer that has an exponential distribution of age with depth. The PEM has three parameters – the ratios of the saturated thickness to the depths of the top and bottom of the screen, and the mean age – but these can be reduced to one parameter (the mean age) by using well construction information and estimates of the saturated thickness. The PEM was tested with data from 30 production wells in a heterogeneous alluvial fan aquifer in California, USA. Well construction data were used to guide parameterization of a PEM for each well, and the mean age was calibrated to measured environmental tracer data (3H, 3He, CFC-113, and 14C). Results were compared to age distributions generated for individual wells using advective particle tracking models (PTMs). Age distributions from PTMs were more complex than PEM distributions, but PEMs provided better fits to tracer data, partly because the PTMs did not simulate 14C accurately in wells that captured varying amounts of old groundwater recharged at lower rates prior to groundwater development and irrigation. Nitrate trends were simulated independently of the calibration process, and the PEM provided good fits for at least 11 of 24 wells. This work shows that the PEM, and lumped parameter models (LPMs) in general, can often identify critical features of the age distributions in wells that are needed to explain observed tracer data and nonpoint source contaminant trends, even in systems where aquifer heterogeneity and water use complicate distributions of age. While accurate PTMs are preferable for understanding and predicting aquifer-scale responses to water use and contaminant transport, LPMs can be sensitive to local conditions near individual wells that may be inaccurately represented or missing in an aquifer-scale flow model.

  11. Experimental evidence of chaotic mixing at pore scale in 3D porous media

    NASA Astrophysics Data System (ADS)

    Heyman, J.; Turuban, R.; Jimenez Martinez, J.; Lester, D. R.; Meheust, Y.; Le Borgne, T.

    2017-12-01

    Mixing of dissolved chemical species in porous media plays a central role in many natural and industrial processes, such as contaminant transport and degradation in soils, oxygen and nitrate delivery in river beds, clogging in geothermal systems, and CO2 sequestration. In particular, incomplete mixing at the pore scale may strongly affect the spatio-temporal distribution of reaction rates in soils and rocks, questioning the validity of diffusion-reaction models at the Darcy scale. Recent theoretical [1] and numerical [2] studies of flow in idealized porous media have suggested that fluid mixing may be chaotic at the pore scale, hence pointing to a whole new set of models for mixing and reaction in porous media. However, so far this remained to be confirmed experimentally. Here we present experimental evidence of the chaotic nature of transverse mixing at the pore scale in three-dimensional porous media. We designed a novel experimental setup allowing high-resolution pore-scale imaging of the structure of a tracer plume in porous media columns consisting of 7, 10 and 20 mm glass bead packings. We jointly used refractive index matching techniques, laser-induced fluorescence and a moving laser sheet to reconstruct the shape of a steady tracer plume as it is deformed by the porous media flow. In this talk, we focus on the transverse behaviour of mixing, that is, on the plane orthogonal to the main flow direction, in the limit of high Péclet numbers (negligible diffusion). Moving away from the injection point, the plume cross-section quickly turns into complex, interlaced, lamellar structures. These structures elongate at an exponential rate, characteristic of a chaotic system, which can be quantified by an average Lyapunov exponent. We finally discuss the origin of this chaotic behaviour and its most significant consequences for upscaling mixing and reactive transport in porous media. References: [1] D. R. Lester, G. Metcalfe, M. G. Trefry, Physical Review Letters, 111, 174101 (2013); [2] R. Turuban, D. R. Lester, T. Le Borgne, and Y. Méheust (2017), under review.

  12. Metabolism of nC11 fatty acid fed to Trichoderma koningii and Penicillium janthinellum II: Production of intracellular and extracellular lipids.

    PubMed

    Monreal, Carlos M; Chahal, Amarpreet; Rowland, Owen; Smith, Myron; Schnitzer, Morris

    2014-01-01

    Little is known about the fungal metabolism of nC10 and nC11 fatty acids and their conversion into lipids. A mixed batch culture of soil fungi, T. koningii and P. janthinellum, was grown on undecanoic acid (UDA), a mixture of UDA and potato dextrose broth (UDA+PDB), and PDB alone to examine their metabolic conversion during growth. We quantified seven intracellular and extracellular lipid classes using Iatroscan thin-layer chromatography with flame ionization detection (TLC-FID). Gas chromatography with flame ionization detection (GC-FID) was used to quantify 42 individual fatty acids. Per 150 mL culture, the mixed fungal culture grown on UDA+PDB produced the highest amount of intracellular (531 mg) and extracellular (14.7 mg) lipids during the exponential phase. The content of total intracellular lipids represented 25% of the total biomass-carbon, or 10% of the total biomass dry weight produced. Fatty acids made up the largest class of intracellular lipids (457 mg/150 mL culture) and they were synthesized at a rate of 2.4 mg/h during the exponential phase, and decomposed at a rate of 1.8 mg/h during the stationary phase, when UDA+PDB was the carbon source. Palmitic acid (C16:0), stearic acid (C18:0), oleic acid (C18:1), linoleic acid (C18:2) and vaccenic acid (C18:1) accounted for >80% of the total intracellular fatty acids. During exponential growth on UDA+PDB, hydrocarbons were the largest pool of all extracellular lipids (6.5 mg), and intracellularly they were synthesized at a rate of 64 μg/h. The mixed fungal species culture of T. koningii and P. janthinellum produced many lipids for potential use as industrial feedstocks or bioproducts in biorefineries.

  13. Improving Bed Management at Wright-Patterson Medical Center

    DTIC Science & Technology

    1989-09-01

    arrival distributions are Poisson, as in Sim2, then interarrival times are distributed exponentially (Budnick, McLeavey, and Mojena, 1988:770). While... McLeavey, D. and Mojena, R., Principles of Operations Research for Management (second edition). Homewood, IL: Irwin, 1988. Cannoodt, L. J. and
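
    The textbook fact cited in this snippet (Poisson arrivals imply exponentially distributed interarrival times) is easy to check numerically; the sketch below simulates a homogeneous Poisson arrival stream and tests the gaps against an exponential distribution, with an arbitrary rate.

```python
# Sketch: arrivals of a Poisson process have exponentially distributed gaps.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
lam, T = 3.0, 5000.0                         # rate (per hour) and horizon (arbitrary)
n = rng.poisson(lam * T)                     # number of arrivals in [0, T]
arrivals = np.sort(rng.uniform(0.0, T, n))   # given n, arrival times are i.i.d. uniform
gaps = np.diff(arrivals)                     # interarrival times

d, p = stats.kstest(gaps, "expon", args=(0, 1 / lam))
print(f"mean gap = {gaps.mean():.4f} (1/lam = {1/lam:.4f}); KS p-value = {p:.3f}")
```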

  14. Mixotrophic dinoflagellate Karlodinium veneficum under variable nitrogen:phosphorus stoichiometry: feeding responses and effects on larvae of the eastern oyster (Crassostrea virginica)

    NASA Astrophysics Data System (ADS)

    Lin, C.; Accoroni, S.; Glibert, P. M.

    2016-02-01

    Mixotrophic grazing activity can be promoted in response to nutrient-enriched prey, and this nutritional strategy is thought to be a factor promoting the growth of some toxic microalgae under conditions that are nutrient limiting for the mixotroph. However, it is unclear how the nutritional condition of the predator or the prey affects mixotrophic metabolism and, consequently, potential effects on the mixotroph that may, in turn, affect early life stages of bivalves. In laboratory experiments, we measured the grazing rate of Karlodinium veneficum on Rhodomonas salina as prey under varied nitrogen (N):phosphorus (P) stoichiometry of both predator and prey, and we compared the nutritionally regulated effects of K. veneficum on larvae of the eastern oyster (Crassostrea virginica). Nutritionally sufficient, N-deficient, and P-deficient K. veneficum at two growth stages (exponential and stationary) were mixed with nutritionally sufficient, N-deficient, and P-deficient R. salina in a factorial experimental design. Regardless of its nutritional condition, K. veneficum showed significantly higher grazing rates on N-rich prey in the exponential stage and on P-rich prey in the stationary stage. Maximum grazing rates of N-deficient K. veneficum on N-rich prey in the exponential stage were 20-fold larger than those of nutritionally sufficient K. veneficum on N-rich prey. Significantly increased larval mortality was observed in 2-day exposures to monocultures of P-deficient K. veneficum at both stages. When mixed with P-deficient (or N-rich) prey, the presence of K. veneficum resulted in significantly enhanced larval mortality, but this was not the case for N-deficient K. veneficum in the exponential stage. Mixotrophic feeding may not only provide K. veneficum with the nutritional flexibility needed to sustain blooms but also appears to increase its negative effects on the survival of oyster larvae.

  15. Compact continuous-variable entanglement distillation.

    PubMed

    Datta, Animesh; Zhang, Lijian; Nunn, Joshua; Langford, Nathan K; Feito, Alvaro; Plenio, Martin B; Walmsley, Ian A

    2012-02-10

    We introduce a new scheme for continuous-variable entanglement distillation that requires only linear temporal and constant physical or spatial resources. Distillation is the process by which high-quality entanglement may be distributed between distant nodes of a network in the unavoidable presence of decoherence. The known versions of this protocol scale exponentially in space and doubly exponentially in time. Our optimal scheme therefore provides exponential improvements over existing protocols. It uses a fixed-resource module, an entanglement distillery, comprising only four quantum memories of at most 50% storage efficiency and allowing a feasible experimental implementation. Tangible quantum advantages are obtainable by using existing off-resonant Raman quantum memories outside their conventional role of storage.

  16. Liver fibrosis: stretched exponential model outperforms mono-exponential and bi-exponential models of diffusion-weighted MRI.

    PubMed

    Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin

    2018-07-01

    To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (D_t), pseudo-diffusion coefficient (D_p) and perfusion fraction (f) from a bi-exponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristics (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measures, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), D_t (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and D_p showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than other parameters. However, D_p showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than D_p from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.
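
    For readers unfamiliar with the three models, their commonly used signal equations are shown below; these are the standard mono-exponential, bi-exponential (IVIM-type) and stretched exponential forms with the parameter names used in the abstract, not equations copied from the paper.

```latex
\frac{S(b)}{S_0} = e^{-b\,\mathrm{ADC}}, \qquad
\frac{S(b)}{S_0} = f\,e^{-b D_p} + (1-f)\,e^{-b D_t}, \qquad
\frac{S(b)}{S_0} = e^{-(b\,\mathrm{DDC})^{\alpha}}
```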

  17. Mixing driven by transient buoyancy flows. I. Kinematics

    NASA Astrophysics Data System (ADS)

    Duval, W. M. B.; Zhong, H.; Batur, C.

    2018-05-01

    Mixing of two miscible liquids juxtaposed inside a cavity initially separated by a divider, whose buoyancy-driven motion is initiated via impulsive perturbation of divider motion that can generate the Richtmyer-Meshkov instability, is investigated experimentally. The measured Lagrangian history of interface motion that contains the continuum mechanics of mixing shows self-similar nearly Gaussian length stretch distribution for a wide range of control parameters encompassing an approximate Hele-Shaw cell to a three-dimensional cavity. Because of the initial configuration of the interface which is parallel to the gravitational field, we show that at critical initial potential energy mixing occurs through the stretching of the interface, which shows frontogenesis, and folding, owing to an overturning motion that results in unstable density stratification and produces an ideal condition for the growth of the single wavelength Rayleigh-Taylor instability. The initial perturbation of the interface and flow field generates the Kelvin-Helmholtz instability and causes kinks at the interface, which grow into deep fingers during overturning motion and unfold into local whorl structures that merge and self-organize into the Rayleigh-Taylor morphology (RTM) structure. For a range of parametric space that yields two-dimensional flows, the unfolding of the instability through a supercritical bifurcation yields an asymmetric pairwise structure exhibiting smooth RTM that transitions to RTM fronts with fractal structures that contain small length scales for increasing Peclet numbers. The late stage of the RTM structure unfolds into an internal breakwave that breaks down through wall and internal collision and sets up the condition for self-induced sloshing that decays exponentially as the two fluids become stably stratified with a diffusive region indicating local molecular diffusion.

  18. The Comparison Study of Quadratic Infinite Beam Program on Optimization Instensity Modulated Radiation Therapy Treatment Planning (IMRTP) between Threshold and Exponential Scatter Method with CERR® In The Case of Lung Cancer

    NASA Astrophysics Data System (ADS)

    Hardiyanti, Y.; Haekal, M.; Waris, A.; Haryanto, F.

    2016-08-01

    This research presents a comparison study of the quadratic optimization program for Intensity Modulated Radiation Therapy Treatment Planning (IMRTP) using the Computational Environment for Radiotherapy Research (CERR) software. The treatment plans were assumed to use 9 and 13 beams. The case used an energy of 6 MV with a source-skin distance (SSD) of 100 cm to the target volume. Dose calculation used the Quadratic Infinite Beam (QIB) method from CERR. CERR was used to compare the Gauss Primary threshold method with the Gauss Primary exponential scatter method. In the case of lung cancer, threshold values of 0.01 and 0.004 were used. The resulting dose distributions were analysed in the form of dose-volume histograms (DVH) from CERR. With the exponential dose calculation method and 9 beams, the maximum dose distributions were obtained on the planning target volume (PTV), clinical target volume (CTV), gross tumor volume (GTV), liver, and skin; with the threshold method and 13 beams, they were obtained on the PTV, GTV, heart, and skin.

  19. A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times

    PubMed Central

    Heath, Tracy A.

    2012-01-01

    In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343

  20. Network structures sustained by internal links and distributed lifetime of old nodes in stationary state of number of nodes

    NASA Astrophysics Data System (ADS)

    Ikeda, Nobutoshi

    2017-12-01

    In network models that take growth into account, the deletion of old nodes has a serious impact on degree distributions, because old nodes tend to become hub nodes. In this study, we aim to provide a simple explanation for why hubs can exist even when the number of nodes is stationary due to the deletion of old nodes. We show that an exponential increase in the degree of nodes is a natural consequence of the balance between the deletion and addition of nodes, as long as a preferential attachment mechanism holds. As a result, the largest degree is determined by the magnitude relationship between the time scale of the exponential growth of degrees and the lifetime of old nodes. The degree distribution exhibits a power-law form ~ k^(-γ) with exponent γ = 1 when the lifetime of nodes is constant. However, various values of γ can be realized by introducing a distributed lifetime of nodes.

  1. The Modelled Raindrop Size Distribution of Skudai, Peninsular Malaysia, Using Exponential and Lognormal Distributions

    PubMed Central

    Yakubu, Mahadi Lawan; Yusop, Zulkifli; Yusof, Fadhilah

    2014-01-01

    This paper presents modelled raindrop size distribution (DSD) parameters for the Skudai region of Johor Bahru, western Malaysia. At present there is no model to forecast the characteristics of the DSD in Malaysia, and this has implications for wet-weather pollution predictions. The climate of Skudai exhibits local variability on a regional scale. This study established five different parametric expressions describing the rain rate of Skudai; these models are specific to the climate of the region. Sophisticated equipment that converts sound to a relevant raindrop diameter is often too expensive, and its cost sometimes overrides its attractiveness. In this study, a physical low-cost method was used to record the DSD of the study area. The Kaplan-Meier method was used to test the suitability of the data for exponential and lognormal distributions, which were subsequently used to formulate the parameterisation of the distributions. This research challenges the concept of the exclusive occurrence of convective storms in tropical regions and presents new insight into their concurrent appearance. PMID:25126597
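
    For orientation, an exponential raindrop size distribution is conventionally written in the Marshall-Palmer form (generic symbols, not the fitted parameters of this study):

```latex
% Exponential (Marshall-Palmer type) raindrop size distribution
N(D) \;=\; N_0\, e^{-\Lambda D}
```

    where N(D) dD is the number of drops per unit volume with diameter between D and D + dD, N_0 is the intercept and Λ the slope parameter; the lognormal alternative replaces this with a lognormal density in D.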

  2. Level crossings and excess times due to a superposition of uncorrelated exponential pulses

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-01-01

    A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
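
    A minimal simulation of this type of model, a superposition of uncorrelated one-sided exponential pulses (shot noise), together with an empirical count of upward level crossings, might look as follows; the pulse parameters and threshold choice are arbitrary and not taken from the paper.

```python
# Sketch: shot-noise process (sum of uncorrelated exponential pulses) and
# empirical rate of upward crossings above a chosen threshold.
import numpy as np

rng = np.random.default_rng(5)
tau_d, rate, T, dt = 1.0, 5.0, 200.0, 0.01   # pulse duration, pulse rate, horizon, step
t = np.arange(0.0, T, dt)

n_pulses = rng.poisson(rate * T)
t_k = rng.uniform(0.0, T, n_pulses)          # pulse arrival times
a_k = rng.exponential(1.0, n_pulses)         # exponentially distributed amplitudes

signal = np.zeros_like(t)
for tk, ak in zip(t_k, a_k):                 # add one-sided exponential pulse shapes
    mask = t >= tk
    signal[mask] += ak * np.exp(-(t[mask] - tk) / tau_d)

threshold = signal.mean() + 2 * signal.std()
up = np.count_nonzero((signal[:-1] < threshold) & (signal[1:] >= threshold))
print(f"upward crossings per unit time above threshold: {up / T:.3f}")
```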

  3. Improved Results for Route Planning in Stochastic Transportation Networks

    NASA Technical Reports Server (NTRS)

    Boyan, Justin; Mitzenmacher, Michael

    2000-01-01

    In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest path techniques. In recent work, Datar and Ranade provide algorithms for the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent, geometrically distributed discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.

  4. [Spatial variability and evaluation of soil heavy metal contamination in the urban-transect of Shanghai].

    PubMed

    Liu, Yun-Long; Zhang, Li-Jia; Han, Xiao-Fei; Zhuang, Teng-Fei; Shi, Zhen-Xiang; Lu, Xiao-Zhe

    2012-02-01

    Soil heavy metal concentrations along a typical urban transect in Shanghai were analyzed to indicate the effect of urbanization and industrialization on soil environmental quality. The spatial variation structure and distribution of five heavy metals (Cu, Cr, Mn, Pb and Zn) in the topsoil of the urban transect were analyzed. The single pollution index and the composite pollution index were used to evaluate soil heavy metal pollution. The results showed that the average concentrations of Cu, Pb, Zn, Cr and Mn were 27.80, 28.86, 99.36, 87.72 and 556.97 mg x kg(-1), respectively. Cu, Cr, Mn, Pb and Zn were of medium variability; Mn was distributed lognormally, while Cu, Cr, Pb and Zn were distributed normally. Semivariance analysis showed that Mn was best fitted by an exponential model, whereas Cr, Pb, Cu and Zn were best fitted by a linear model. Spatial distribution maps of the heavy metal content of the topsoil in this city transect were produced by means of universal kriging interpolation. Cu was spatially distributed in a ribbon pattern, Cr and Mn were distributed in island patterns, while the spatial distribution of Pb and Zn showed mixed ribbon and island characteristics. The soil pollution evaluation showed that pollution by Cr, Zn and Pb was relatively severe. Cr, Zn, Pb, Mn and Cu were significantly correlated, and heavy metal co-contamination existed in the soil. Differences in soil heavy metal pollution along the urban-suburban-rural gradient were obvious, and the spatial variation of heavy metal concentrations in the soil was closely related to the degree of industrialization and urbanization of the city.

  5. Research on the exponential growth effect on network topology: Theoretical and empirical analysis

    NASA Astrophysics Data System (ADS)

    Li, Shouwei; You, Zongjun

    An integrated circuit (IC) industry network has been built in the Yangtze River Delta with the constant expansion of the IC industry. The IC industry network grows exponentially with the establishment of new companies and the establishment of contacts with existing firms. Based on preferential attachment and exponential growth, the paper presents analytical results in which the vertex degree of the scale-free network follows a power-law distribution p(k) ~ k^(-γ) with γ = 2β + 1, where the parameter β satisfies 0.5 ≤ β ≤ 1. At the same time, we find that preferential attachment takes place in a dynamic local world whose size is in direct proportion to the size of the whole network. The paper also gives analytical results for non-preferential attachment with exponential growth on random networks. Computer simulations of the model illustrate these analytical results. Based on surveys of the enterprises, the paper first presents the distribution of the IC industry and the composition of its industrial chain and service chain. Then, the networks of the industrial chain and the service chain are constructed and analysed, together with a correlation analysis of the whole IC industry. Based on complex network theory, the industrial chain network and the service chain network in the Yangtze River Delta are analysed and compared.

  6. Analysis and modeling of optical crosstalk in InP-based Geiger-mode avalanche photodiode FPAs

    NASA Astrophysics Data System (ADS)

    Chau, Quan; Jiang, Xudong; Itzler, Mark A.; Entwistle, Mark; Piccione, Brian; Owens, Mark; Slomkowski, Krystyna

    2015-05-01

    Optical crosstalk is a major factor limiting the performance of Geiger-mode avalanche photodiode (GmAPD) focal plane arrays (FPAs). This is especially true for arrays with increased pixel density and broader spectral operation. We have performed extensive experimental and theoretical investigations on the crosstalk effects in InP-based GmAPD FPAs for both 1.06-μm and 1.55-μm applications. Mechanisms responsible for intrinsic dark counts are Poisson processes, and their inter-arrival time distribution is an exponential function. In FPAs, intrinsic dark counts and cross talk events coexist, and the inter-arrival time distribution deviates from purely exponential behavior. From both experimental data and computer simulations, we show the dependence of this deviation on the crosstalk probability. The spatial characteristics of crosstalk are also demonstrated. From the temporal and spatial distribution of crosstalk, an efficient algorithm to identify and quantify crosstalk is introduced.

  7. Preferential attachment and growth dynamics in complex systems

    NASA Astrophysics Data System (ADS)

    Yamasaki, Kazuko; Matia, Kaushik; Buldyrev, Sergey V.; Fu, Dongfeng; Pammolli, Fabio; Riccaboni, Massimo; Stanley, H. Eugene

    2006-09-01

    Complex systems can be characterized by classes of equivalency of their elements defined according to system specific rules. We propose a generalized preferential attachment model to describe the class size distribution. The model postulates preferential growth of the existing classes and the steady influx of new classes. According to the model, the distribution changes from a pure exponential form for zero influx of new classes to a power law with an exponential cut-off form when the influx of new classes is substantial. Predictions of the model are tested through the analysis of a unique industrial database, which covers both elementary units (products) and classes (markets, firms) in a given industry (pharmaceuticals), covering the entire size distribution. The model’s predictions are in good agreement with the data. The paper sheds light on the emergence of the exponent τ≈2 observed as a universal feature of many biological, social and economic problems.
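
    The class-size distribution described here, a power law with an exponential cut-off, is commonly written as follows (generic form; the paper's exact expression may differ):

```latex
P(s) \;\propto\; s^{-\tau}\, \exp\!\left(-\frac{s}{s_{c}}\right)
```

    recovering a pure exponential as τ → 0 and a pure power law as the cut-off scale s_c → ∞.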

  8. Science and Facebook: The same popularity law!

    PubMed

    Néda, Zoltán; Varga, Levente; Biró, Tamás S

    2017-01-01

    The distribution of scientific citations for publications selected with different rules (author, topic, institution, country, journal, etc.) collapses onto a single curve if one plots the citations relative to their mean value. We find that the distribution of "shares" for Facebook posts rescales in the same manner onto the very same curve as the scientific citations. This finding suggests that citations are subject to the same growth mechanism as Facebook popularity measures, being influenced by a statistically similar social environment and selection mechanism. In a simple master-equation approach, the exponential growth of the number of publications and a preferential selection mechanism lead to a Tsallis-Pareto distribution that offers an excellent description of the observed statistics. Based on our model and on the data derived from PubMed, we predict that, according to the present trend, the average number of citations per scientific publication relaxes exponentially to about 4.

  9. Distributed Consensus of Stochastic Delayed Multi-agent Systems Under Asynchronous Switching.

    PubMed

    Wu, Xiaotai; Tang, Yang; Cao, Jinde; Zhang, Wenbing

    2016-08-01

    In this paper, the distributed exponential consensus of stochastic delayed multi-agent systems with nonlinear dynamics is investigated under asynchronous switching. The asynchronous switching considered here accounts for the time needed to identify the active modes of the multi-agent systems. Once a mode switch is confirmed, the matched controller can be applied, which means that the switching time of the matched controller in each node usually lags behind that of the system switching. In order to handle the coexistence of switched signals and stochastic disturbances, a comparison principle for stochastic switched delayed systems is first proved. By means of this extended comparison principle, several easily verified conditions for the existence of an asynchronously switched distributed controller are derived such that stochastic delayed multi-agent systems with asynchronous switching and nonlinear dynamics can achieve global exponential consensus. Two examples are given to illustrate the effectiveness of the proposed method.

  10. Science and Facebook: The same popularity law!

    PubMed Central

    Varga, Levente; Biró, Tamás S.

    2017-01-01

    The distribution of scientific citations for publications selected with different rules (author, topic, institution, country, journal, etc.) collapses onto a single curve if one plots the citations relative to their mean value. We find that the distribution of “shares” for Facebook posts rescales in the same manner onto the very same curve as the scientific citations. This finding suggests that citations are subject to the same growth mechanism as Facebook popularity measures, being influenced by a statistically similar social environment and selection mechanism. In a simple master-equation approach, the exponential growth of the number of publications and a preferential selection mechanism lead to a Tsallis-Pareto distribution that offers an excellent description of the observed statistics. Based on our model and on the data derived from PubMed, we predict that, according to the present trend, the average number of citations per scientific publication relaxes exponentially to about 4. PMID:28678796

  11. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates what is being observed to a reasonable degree.

  12. Multi-step rhodopsin inactivation schemes can account for the size variability of single photon responses in Limulus ventral photoreceptors

    PubMed Central

    1994-01-01

    Limulus ventral photoreceptors generate highly variable responses to the absorption of single photons. We have obtained data on the size distribution of these responses, derived the distribution predicted from simple transduction cascade models and compared the theory and data. In the simplest of models, the active state of the visual pigment (defined by its ability to activate G protein) is turned off in a single reaction. The output of such a cascade is predicted to be highly variable, largely because of stochastic variation in the number of G proteins activated. The exact distribution predicted is exponential, but we find that an exponential does not adequately account for the data. The data agree much better with the predictions of a cascade model in which the active state of the visual pigment is turned off by a multi-step process. PMID:8057085

  13. Rapidly Mixing Gibbs Sampling for a Class of Factor Graphs Using Hierarchy Width

    PubMed Central

    De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher

    2016-01-01

    Gibbs sampling on factor graphs is a widely used inference technique, which often produces good empirical results. Theoretical guarantees for its performance are weak: even for tree structured graphs, the mixing time of Gibbs may be exponential in the number of variables. To help understand the behavior of Gibbs sampling, we introduce a new (hyper)graph property, called hierarchy width. We show that under suitable conditions on the weights, bounded hierarchy width ensures polynomial mixing time. Our study of hierarchy width is in part motivated by a class of factor graph templates, hierarchical templates, which have bounded hierarchy width—regardless of the data used to instantiate them. We demonstrate a rich application from natural language processing in which Gibbs sampling provably mixes rapidly and achieves accuracy that exceeds human volunteers. PMID:27279724

  14. Exponential Family Functional data analysis via a low-rank model.

    PubMed

    Li, Gen; Huang, Jianhua Z; Shen, Haipeng

    2018-05-08

    In many applications, non-Gaussian data such as binary or count data are observed over a continuous domain, and there exists a smooth underlying structure describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution and that the matrix of the canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only standard one-way functional data but also two-way (or bivariate) functional data. In addition, we introduce a new cross-validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application from the UK mortality study, where the data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.

  15. Obtaining the Grobner Initialization for the Ground Flash Fraction Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Solakiewicz, R.; Attele, R.; Koshak, W.

    2011-01-01

    At optical wavelengths and from the vantage point of space, the multiple-scattering cloud medium obscures one's view and prevents one from easily determining which flashes strike the ground. However, recent investigations have made some progress on the (easier, but still difficult) problem of estimating the ground flash fraction in a set of N flashes observed from space. In the study by Koshak, a Bayesian inversion method was introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function of three variables (one of which is the ground flash fraction) was minimized by a numerical method. This method has formed the basis of a Ground Flash Fraction Retrieval Algorithm (GoFFRA) that is being tested as part of GOES-R GLM risk reduction.
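
    The sketch below is not the GoFFRA or the Bayesian inversion itself; it only illustrates, under assumed component scales and a made-up true fraction, how a two-component mixed exponential model can be fitted by numerically minimizing a scalar function of three variables (one of which plays the role of the ground flash fraction).

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      # Synthetic "optical measurements": a two-component exponential mixture
      # with an assumed true ground-flash fraction of 0.3 and arbitrary scales.
      x = np.concatenate([rng.exponential(1.0, 300),    # ground-flash component
                          rng.exponential(4.0, 700)])   # cloud-flash component

      def neg_log_lik(theta):
          f, s1, s2 = theta      # mixture fraction and the two exponential scales
          pdf = f * np.exp(-x / s1) / s1 + (1.0 - f) * np.exp(-x / s2) / s2
          return -np.sum(np.log(pdf))

      res = minimize(neg_log_lik, x0=[0.5, 0.5, 2.0],
                     bounds=[(1e-3, 1 - 1e-3), (1e-3, None), (1e-3, None)])
      print("estimated fraction and scales:", res.x)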

  16. Dielectric relaxation in 0-3 PVDF-Ba(Fe{sub 1/2}Nb{sub 1/2})O{sub 3} composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chandra, K. P., E-mail: kpchandra23@gmail.com; Singh, Rajan; Kulkarni, A. R., E-mail: ajit2957@gmail.com

    2016-05-06

    (1-x)PVDF-xBa(Fe{sub 1/2}Nb{sub 1/2})O{sub 3} ceramic-polymer composites with x = 0.025, 0.05, 0.10, 0.15 were prepared using a melt-mixing technique. The crystal symmetry, space group, and unit cell dimensions were determined from the XRD data of Ba(Fe{sub 1/2}Nb{sub 1/2})O{sub 3} using FullProf software, whereas crystallite size and lattice strain were estimated using the Williamson-Hall approach. The distribution of Ba(Fe{sub 1/2}Nb{sub 1/2})O{sub 3} particles in the PVDF matrix was examined on the cryo-fractured surfaces using a scanning electron microscope. Cole-Cole and pseudo Cole-Cole analysis suggested the dielectric relaxation in this system to be of non-Debye type. The filler-concentration-dependent real and imaginary parts of the dielectric constant, as well as the ac conductivity data, followed definite exponential-growth-type trends.

  17. The implications of brain connectivity in the neuropsychology of autism

    PubMed Central

    Maximo, Jose O.; Cadena, Elyse J.; Kana, Rajesh K.

    2014-01-01

    Autism is a neurodevelopmental disorder that has been associated with atypical brain functioning. Functional connectivity MRI (fcMRI) studies examining neural networks in autism have seen an exponential rise over the last decade. Such investigations have led to characterization of autism as a distributed neural systems disorder. Studies have found widespread cortical underconnectivity, local overconnectivity, and mixed results suggesting disrupted brain connectivity as a potential neural signature of autism. In this review, we summarize the findings of previous fcMRI studies in autism with a detailed examination of their methodology, in order to better understand its potential and to delineate the pitfalls. We also address how a multimodal neuroimaging approach (incorporating different measures of brain connectivity) may help characterize the complex neurobiology of autism at a global level. Finally, we also address the potential of neuroimaging-based markers in assisting neuropsychological assessment of autism. The quest for a biomarker for autism is still ongoing, yet new findings suggest that aberrant brain connectivity may be a promising candidate. PMID:24496901

  18. Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.

    2013-01-01

    Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
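
    A minimal frequentist sketch of the algebraic equivalence mentioned above, using simulated data: a Poisson log-linear model with a log(time) offset recovers the same intensity parameters as an exponential survival model. The covariate, exposure times, and true coefficients are all assumptions, and statsmodels is used only for convenience; this is not the Bayesian multilevel model of the paper.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 500
      x = rng.normal(size=n)                     # a single covariate
      exposure = rng.uniform(1.0, 10.0, size=n)  # time at risk for each record
      rate = np.exp(-1.0 + 0.5 * x)              # true transition intensity
      counts = rng.poisson(rate * exposure)      # observed transition counts

      # Poisson log-linear model with a log(time) offset; its likelihood is
      # algebraically equivalent to an exponential survival model with the
      # same log-linear intensity.
      X = sm.add_constant(x)
      fit = sm.GLM(counts, X, family=sm.families.Poisson(),
                   offset=np.log(exposure)).fit()
      print(fit.params)   # should be close to [-1.0, 0.5]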

  19. High pressure liquid chromatographic gradient mixer

    DOEpatents

    Daughton, Christian G.; Sakaji, Richard H.

    1985-01-01

    A gradient mixer which effects the continuous mixing of any two miscible solvents without excessive decay or dispersion of the resultant isocratic effluent or of a linear or exponential gradient. The two solvents are fed under low or high pressure by means of two high performance liquid chromatographic pumps. The mixer comprises a series of ultra-low dead volume stainless steel tubes and low dead volume chambers. The two solvent streams impinge head-on at high fluxes. This initial nonhomogeneous mixture is then passed through a chamber packed with spirally-wound wires which cause turbulent mixing thereby homogenizing the mixture with minimum "band-broadening".

  20. High-pressure liquid chromatographic gradient mixer

    DOEpatents

    Daughton, C.G.; Sakaji, R.H.

    1982-09-08

    A gradient mixer effects the continuous mixing of any two miscible solvents without excessive decay or dispersion of the resultant isocratic effluent or of a linear or exponential gradient. The two solvents are fed under low or high pressure by means of two high performance liquid chromatographic pumps. The mixer comprises a series of ultra-low dead volume stainless steel tubes and low dead volume chambers. The two solvent streams impinge head-on at high fluxes. This initial nonhomogeneous mixture is then passed through a chamber packed with spirally-wound wires which cause turbulent mixing thereby homogenizing the mixture with minimum band-broadening.

  1. The "sweet science" of reducing periorbital lacerations in mixed martial arts.

    PubMed

    Bastidas, Nicholas; Levine, Jamie P; Stile, Frank L

    2012-01-01

    The popularity of mixed martial arts competitions and televised events has grown exponentially since its inception, and with the growth of the sport, unique facial injury patterns have surfaced. In particular, upper eyelid and brow lacerations are common and are especially troublesome given the effect of hemorrhage from these areas on the fighter's vision and thus ability to continue. We propose that the convexity of the underlying supraorbital rim is responsible for the high frequency of lacerations in this region after blunt trauma and offer a method of reducing subsequent injury by reducing its prominence.

  2. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  3. Pore‐Scale Hydrodynamics in a Progressively Bioclogged Three‐Dimensional Porous Medium: 3‐D Particle Tracking Experiments and Stochastic Transport Modeling

    PubMed Central

    Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.

    2018-01-01

    Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3‐D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean‐squared displacements, are found to be non‐Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered. PMID:29780184

  4. Pore-Scale Hydrodynamics in a Progressively Bioclogged Three-Dimensional Porous Medium: 3-D Particle Tracking Experiments and Stochastic Transport Modeling

    NASA Astrophysics Data System (ADS)

    Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.

    2018-03-01

    Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.

  5. Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times

    NASA Astrophysics Data System (ADS)

    Fa, Kwok Sau

    2012-09-01

    In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power law and stretched exponential functions for the waiting time probability distribution function; this function can fit well the waiting time distribution for BUND futures traded at LIFFE in 1997.

  6. Strain, curvature, and twist measurements in digital holographic interferometry using pseudo-Wigner-Ville distribution based method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2009-09-15

    Measurement of the strain, curvature, and twist of a deformed object plays an important role in deformation analysis. Strain depends on the first-order displacement derivative, whereas curvature and twist are determined by second-order displacement derivatives. This paper proposes a pseudo-Wigner-Ville distribution based method for measurement of strain, curvature, and twist in digital holographic interferometry, where the object deformation or displacement is encoded as the interference phase. In the proposed method, the phase derivative is estimated by peak detection of the pseudo-Wigner-Ville distribution evaluated along each row/column of the reconstructed interference field. A complex exponential signal with unit amplitude and the phase derivative estimate as the argument is then generated, and the pseudo-Wigner-Ville distribution along each row/column of this signal is evaluated. The curvature is estimated by using a peak tracking strategy for the new distribution. For estimation of twist, the pseudo-Wigner-Ville distribution is evaluated along each column/row (i.e., in the alternate direction with respect to the previous one) for the generated complex exponential signal, and the corresponding peak detection gives the twist estimate.

  7. Two-state Markov-chain Poisson nature of individual cellphone call statistics

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Xie, Wen-Jie; Li, Ming-Xia; Zhou, Wei-Xing; Sornette, Didier

    2016-07-01

    Unfolding the burst patterns in human activities and social interactions is a very important issue especially for understanding the spreading of disease and information and the formation of groups and organizations. Here, we conduct an in-depth study of the temporal patterns of cellphone conversation activities of 73 339 anonymous cellphone users, whose inter-call durations are Weibull distributed. We find that the individual call events exhibit a pattern of bursts, that high activity periods are alternated with low activity periods. In both periods, the number of calls are exponentially distributed for individuals, but power-law distributed for the population. Together with the exponential distributions of inter-call durations within bursts and of the intervals between consecutive bursts, we demonstrate that the individual call activities are driven by two independent Poisson processes, which can be combined within a minimal model in terms of a two-state first-order Markov chain, giving significant fits for nearly half of the individuals. By measuring directly the distributions of call rates across the population, which exhibit power-law tails, we purport the existence of power-law distributions, via the ‘superposition of distributions’ mechanism. Our findings shed light on the origins of bursty patterns in other human activities.
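
    A toy simulation of a two-state first-order Markov chain modulating a Poisson call process, in the spirit of the minimal model described above; the per-state rates and switching probabilities are arbitrary assumptions, not fitted values.

      import numpy as np

      rng = np.random.default_rng(4)
      rates = np.array([0.2, 3.0])   # calls per hour in the low/high activity state (assumed)
      stay = np.array([0.95, 0.80])  # probability of remaining in the current state (assumed)

      state = 0
      hourly_calls = []
      for _ in range(20_000):                      # one step = one hour
          hourly_calls.append(rng.poisson(rates[state]))
          if rng.random() > stay[state]:           # first-order Markov switching
              state = 1 - state
      hourly_calls = np.array(hourly_calls)

      # Within each state the counts are Poisson (exponential inter-call times);
      # the two-state mixture inflates the variance relative to a single Poisson.
      print(hourly_calls.mean(), hourly_calls.var())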

  8. Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas

    2017-04-01

    Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
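
    A short simulation of the simplest (complete-graph) version of the model described above, as a sketch only; the number of agents, average money, and number of exchange steps are arbitrary.

      import numpy as np

      rng = np.random.default_rng(5)
      n_agents, avg_money, n_steps = 1_000, 10, 500_000
      money = np.full(n_agents, avg_money, dtype=int)

      for _ in range(n_steps):
          i, j = rng.integers(n_agents, size=2)
          if i != j and money[i] > 0:   # the giver must hold at least one dollar
              money[i] -= 1
              money[j] += 1

      # The empirical distribution approaches the Boltzmann-Gibbs form
      # P(m) ~ exp(-m / T) with "temperature" T equal to the average money.
      counts = np.bincount(money)
      print(counts[:12] / n_agents)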

  9. Heterogeneous Link Weight Promotes the Cooperation in Spatial Prisoner's Dilemma

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Qin; Xia, Cheng-Yi; Sun, Shi-Wen; Wang, Li; Wang, Huai-Bin; Wang, Juan

    The spatial structure has often been identified as a prominent mechanism that substantially promotes the cooperation level in the prisoner's dilemma game. In this paper we introduce a weighting mechanism into the spatial prisoner's dilemma game to explore the cooperative behaviors on the square lattice. Here, three types of weight distributions are considered: exponential, power-law and uniform, with the weight assigned to the links between players. Through large-scale numerical simulations we find, compared with the traditional spatial game, that this mechanism can greatly enhance the frequency of cooperators. For most ranges of b, we find that the power-law distribution enables the highest promotion of cooperation and the uniform one leads to the lowest enhancement, whereas the exponential one often lies between them. The great improvement of cooperation can be attributed to the fact that the distributed link weights yield inhomogeneous interaction strengths among individuals, which can facilitate the formation of cooperative clusters that resist the defectors' invasion. In addition, the impact of the amplitude of the undulation of the weight distribution and of the noise strength on cooperation is also investigated for the three kinds of weight distribution. The present research can aid in the further understanding of evolutionary cooperation in biological and social science.

  10. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
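
    RANVAR itself is a BASIC listing; the following is a hypothetical Python analogue for just one of the seven variates (the exponential case), generated by the inverse-transform method.

      import numpy as np

      def exponential_variates(rate, size, rng):
          """Inverse-transform sampling: if U ~ Uniform(0, 1) then
          -ln(U) / rate follows an exponential distribution with the given rate."""
          u = rng.random(size)
          return -np.log(u) / rate

      rng = np.random.default_rng(6)
      sample = exponential_variates(rate=2.0, size=100_000, rng=rng)
      print(sample.mean())   # close to 1 / rate = 0.5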

  11. System Lifetimes, The Memoryless Property, Euler's Constant, and Pi

    ERIC Educational Resources Information Center

    Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon

    2013-01-01

    A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…

  12. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    PubMed

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.

  13. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    PubMed Central

    Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka

    2016-01-01

    Background Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern. PMID:27761346

  14. CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum, giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was developed in 1988.
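
    CUMPOIS avoids overflow and underflow by rescaling the partial sums with an extra exponential factor; the sketch below reaches the same goal by a different route, summing the Poisson terms in log space, and is only a Python illustration of the numerical issue, not a port of the C program.

      import numpy as np
      from scipy.special import gammaln, logsumexp
      from scipy.stats import poisson

      def cumulative_poisson(n, lam):
          """P(N <= n) for N ~ Poisson(lam), with the terms summed in log space
          so that large n or lam cannot overflow or underflow the intermediate
          factorials and exponentials."""
          k = np.arange(n + 1)
          log_terms = -lam + k * np.log(lam) - gammaln(k + 1)
          return float(np.exp(logsumexp(log_terms)))

      print(cumulative_poisson(900, 1000.0))   # agrees with the reference value below
      print(poisson.cdf(900, 1000.0))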

  15. Calculating Formulas of Coefficient and Mean Neutron Exposure in the Exponential Expression of Neutron Exposure Distribution

    NASA Astrophysics Data System (ADS)

    Zhang, F. H.; Zhou, G. D.; Ma, K.; Ma, W. J.; Cui, W. Y.; Zhang, B.

    2015-11-01

    Present studies have shown that, in the main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the distributions of neutron exposures in the nucleosynthesis regions can all be expressed by an exponential function, ρ_AGB(τ) = (C/τ0) exp(−τ/τ0), over the effective range of values. However, the specific expressions of the proportionality coefficient C and the mean neutron exposure τ0 in this formula for the different models are not completely determined in the related literature. By dissecting the basic method for solving the exponential distribution of neutron exposures, and systematically combing through the solution procedure of the exposure distribution for different stellar models, general calculating formulas and their auxiliary equations for C and τ0 are derived. Given the discrete distribution of neutron exposures P_k, i.e. the mass ratio of the material that has been exposed to neutrons k (k = 0, 1, 2, ...) times when the final distribution is reached, relative to the material of the He intershell, one obtains C = −P_1/ln R and τ0 = −Δτ/ln R. Here R is the probability that the material successively experiences neutron irradiation twice in the He intershell. For the convective nucleosynthesis models (including the Ulrich model and the ¹³C-pocket convective burning model), R is just the overlap factor r, namely the mass ratio of the material that undergoes two successive thermal pulses in the He intershell; for the ¹³C-pocket radiative burning model, R = Σ_{k=1}^{∞} P_k. This set of formulas gives the explicit correspondence between C or τ0 and the model parameters. The results of this study effectively solve the problem of analytically calculating the distribution of neutron exposures in the low-mass AGB star s-process nucleosynthesis model with ¹³C-pocket radiative burning.
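
    A direct numerical reading of the formulas above, with a hypothetical (truncated, geometric-like) exposure distribution P_k and an assumed exposure increment Δτ; the numbers do not come from any actual stellar model.

      import numpy as np

      # Hypothetical discrete exposure distribution P_k (k = 0, 1, 2, ...): mass
      # fractions of He-intershell material irradiated k times, plus an assumed
      # exposure increment per irradiation, delta_tau (toy values, tail truncated).
      P = np.array([0.40, 0.24, 0.144, 0.0864, 0.05184, 0.031104])
      delta_tau = 0.2        # mbarn**-1, assumed

      # 13C-pocket radiative burning case: R = sum of P_k for k >= 1.
      R = P[1:].sum()
      C = -P[1] / np.log(R)
      tau0 = -delta_tau / np.log(R)
      print(C, tau0)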

  16. A method for manufacturing superior set yogurt under reduced oxygen conditions.

    PubMed

    Horiuchi, H; Inoue, N; Liu, E; Fukui, M; Sasaki, Y; Sasaki, T

    2009-09-01

    The yogurt starters Lactobacillus delbrueckii ssp. bulgaricus and Streptococcus thermophilus are well-known facultatively anaerobic bacteria that can grow in oxygenated environments. We found that they removed dissolved oxygen (DO) in a yogurt mix as the fermentation progressed and that they began to produce acid actively after the DO concentration in the yogurt mix was reduced to 0 mg/kg, suggesting that the DO retarded the production of acid. Yogurt fermentation was carried out at 43 or 37 degrees C both after the DO reduction treatment and without prior treatment. Nitrogen gas was mixed and dispersed into the yogurt mix after inoculation with yogurt starter culture to reduce the DO concentration in the yogurt mix. The treatment that reduced DO concentration in the yogurt mix to approximately 0 mg/kg beforehand caused the starter culture LB81 used in this study to enter into the exponential growth phase earlier. Furthermore, the combination of reduced DO concentration in the yogurt mix beforehand and incubation at a lower temperature (37 degrees C) resulted in a superior set yogurt with a smooth texture and strong curd structure.

  17. A fuzzy adaptive network approach to parameter estimation in cases where independent variables come from an exponential distribution

    NASA Astrophysics Data System (ADS)

    Dalkilic, Turkan Erbay; Apaydin, Aysen

    2009-11-01

    In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations with different distributions that are derived from different clusters. When a regression model must be estimated for fuzzy inputs derived from different distributions, the model is termed a 'switching regression model'; here l_i indicates the class number of each independent variable and p the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by gathering the obtained models. Some methods suggest the class numbers of the independent variables heuristically; here, instead, a suggested validity criterion for fuzzy clustering is used to define the optimal class number of the independent variables. For the case in which the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values after an optimal membership function, suitable for the exponential distribution, has been obtained.

  18. An exactly solvable, spatial model of mutation accumulation in cancer

    NASA Astrophysics Data System (ADS)

    Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej

    2016-12-01

    One of the hallmarks of cancer is the accumulation of driver mutations which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well mixed populations, and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible rates for cell birth, death, and migration rates. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.

  19. Stories in Networks and Networks in Stories: A Tri-Modal Model for Mixed-Methods Social Network Research on Teachers

    ERIC Educational Resources Information Center

    Baker-Doyle, Kira J.

    2015-01-01

    Social network research on teachers and schools has risen exponentially in recent years as an innovative method to reveal the role of social networks in education. However, scholars are still exploring ways to incorporate traditional quantitative methods of Social Network Analysis (SNA) with qualitative approaches to social network research. This…

  20. Turbulent combustion in aluminum-air clouds for different scale explosion fields

    NASA Astrophysics Data System (ADS)

    Kuhl, Allen L.; Balakrishnan, Kaushik; Bell, John B.; Beckner, Vincent E.

    2017-01-01

    This paper explores "scaling issues" associated with Al particle combustion in explosions. The basic idea is the following: in this non-premixed combustion system, the global burning rate is controlled by rate of turbulent mixing of fuel (Al particles) with air. From similarity considerations, the turbulent mixing rates should scale with the explosion length and time scales. However, the induction time for ignition of Al particles depends on an Arrhenius function, which is independent of the explosion length and time. To study this, we have performed numerical simulations of turbulent combustion in unconfined Al-SDF (shock-dispersed-fuel) explosion fields at different scales. Three different charge masses were assumed: 1-g, 1-kg and 1-T Al-powder charges. We found that there are two combustion regimes: an ignition regime—where the burning rate decays as a power-law function of time, and a turbulent combustion regime—where the burning rate decays exponentially with time. This exponential dependence is typical of first order reactions and the more general concept of Life Functions that control the dynamics of evolutionary systems. Details of the combustion model are described. Results, including mean and rms profiles in combustion cloud and fuel consumption histories, are presented.

  1. MIP models for connected facility location: A theoretical and computational study☆

    PubMed Central

    Gollowitzer, Stefan; Ljubić, Ivana

    2011-01-01

    This article comprises the first theoretical and computational study on mixed integer programming (MIP) models for the connected facility location problem (ConFL). ConFL combines facility location and Steiner trees: given a set of customers, a set of potential facility locations and some inter-connection nodes, ConFL searches for the minimum-cost way of assigning each customer to exactly one open facility, and connecting the open facilities via a Steiner tree. The costs needed for building the Steiner tree, facility opening costs and the assignment costs need to be minimized. We model ConFL using seven compact and three mixed integer programming formulations of exponential size. We also show how to transform ConFL into the Steiner arborescence problem. A full hierarchy between the models is provided. For two exponential size models we develop a branch-and-cut algorithm. An extensive computational study is based on two benchmark sets of randomly generated instances with up to 1300 nodes and 115,000 edges. We empirically compare the presented models with respect to the quality of obtained bounds and the corresponding running time. We report optimal values for all but 16 instances for which the obtained gaps are below 0.6%. PMID:25009366

  2. On the origin of stretched exponential (Kohlrausch) relaxation kinetics in the room temperature luminescence decay of colloidal quantum dots.

    PubMed

    Bodunov, E N; Antonov, Yu A; Simões Gamboa, A L

    2017-03-21

    The non-exponential room temperature luminescence decay of colloidal quantum dots is often well described by a stretched exponential function. However, the physical meaning of the parameters of the function is not clear in the majority of cases reported in the literature. In this work, the room temperature stretched exponential luminescence decay of colloidal quantum dots is investigated theoretically in an attempt to identify the underlying physical mechanisms associated with the parameters of the function. Three classes of non-radiative transition processes between the excited and ground states of colloidal quantum dots are discussed: long-range resonance energy transfer, multiphonon relaxation, and contact quenching without diffusion. It is shown that multiphonon relaxation cannot explain a stretched exponential functional form of the luminescence decay while such dynamics of relaxation can be understood in terms of long-range resonance energy transfer to acceptors (molecules, quantum dots, or anharmonic molecular vibrations) in the environment of the quantum dots acting as energy-donors or by contact quenching by acceptors (surface traps or molecules) distributed statistically on the surface of the quantum dots. These non-radiative transition processes are assigned to different ranges of the stretching parameter β.
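
    A routine curve-fitting sketch for the Kohlrausch form discussed above, on synthetic data with assumed parameters; it illustrates the functional shape and the role of the stretching parameter β only, not the underlying energy-transfer mechanisms.

      import numpy as np
      from scipy.optimize import curve_fit

      def stretched_exp(t, i0, tau, beta):
          """Kohlrausch (stretched exponential) decay I(t) = I0 * exp(-(t/tau)**beta)."""
          return i0 * np.exp(-(t / tau) ** beta)

      rng = np.random.default_rng(9)
      t = np.linspace(0.1, 200.0, 400)                       # time axis, arbitrary units
      y = stretched_exp(t, 1.0, 30.0, 0.7) + rng.normal(0.0, 0.01, t.size)

      popt, _ = curve_fit(stretched_exp, t, y, p0=[1.0, 20.0, 0.8])
      print(popt)   # approximately [1.0, 30.0, 0.7]; beta < 1 marks the stretched decay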

  3. Application of a Short Intracellular pH Method to Flow Cytometry for Determining Saccharomyces cerevisiae Vitality ▿

    PubMed Central

    Weigert, Claudia; Steffler, Fabian; Kurz, Tomas; Shellhammer, Thomas H.; Methner, Frank-Jürgen

    2009-01-01

    The measurement of yeast's intracellular pH (ICP) is a proven method for determining yeast vitality. Vitality describes the condition or health of viable cells as opposed to viability, which defines living versus dead cells. In contrast to fluorescence photometric measurements, which show only average ICP values of a population, flow cytometry allows the presentation of an ICP distribution. By examining six repeated propagations with three separate growth phases (lag, exponential, and stationary), the ICP method previously established for photometry was transferred successfully to flow cytometry by using the pH-dependent fluorescent probe 5,6-carboxyfluorescein. The correlation between the two methods was good (r2 = 0.898, n = 18). With both methods it is possible to track the course of growth phases. Although photometry did not yield significant differences between the exponential and stationary phases (P = 0.433), ICP via flow cytometry did (P = 0.012). Yeast in an exponential phase has a unimodal ICP distribution, reflective of a homogeneous population; however, yeast in a stationary phase displays a broader ICP distribution, and subpopulations could be defined by using the flow cytometry method. In conclusion, flow cytometry yielded specific evidence of the heterogeneity in vitality of a yeast population as measured via ICP. In contrast to photometry, flow cytometry increases information about the yeast population's vitality via a short measurement, which is suitable for routine analysis. PMID:19581482

  4. Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time

    NASA Astrophysics Data System (ADS)

    Himeoka, Yusuke; Kaneko, Kunihiko

    2017-04-01

    The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws for describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, for which quantitative laws or theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase that consist of autocatalytic chemical components, including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation follows the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of time of cell starvation is memorized in the slow accumulation of molecules. Moreover, the distribution of lag times among cells is skewed with a long tail. If the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.

  5. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    NASA Astrophysics Data System (ADS)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.

  6. Accumulated distribution of material gain at dislocation crystal growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakin, V. I., E-mail: rakin@geo.komisc.ru

    2016-05-15

    A model for slowing down the tangential growth rate of an elementary step at dislocation crystal growth is proposed based on the exponential law of impurity particle distribution over adsorption energy. It is established that the statistical distribution of material gain on structurally equivalent faces obeys the Erlang law. The Erlang distribution is proposed to be used to calculate the occurrence rates of morphological combinatorial types of polyhedra, presenting real simple crystallographic forms.

  7. Nocturnal Dynamics of Sleep-Wake Transitions in Patients With Narcolepsy.

    PubMed

    Zhang, Xiaozhe; Kantelhardt, Jan W; Dong, Xiao Song; Krefting, Dagmar; Li, Jing; Yan, Han; Pillmann, Frank; Fietze, Ingo; Penzel, Thomas; Zhao, Long; Han, Fang

    2017-02-01

    We investigate how characteristics of sleep-wake dynamics in humans are modified by narcolepsy, a clinical condition that is supposed to destabilize sleep-wake regulation. Subjects with and without cataplexy are considered separately. Differences in sleep scoring habits as a possible confounder have been examined. Four groups of subjects are considered: narcolepsy patients from China with (n = 88) and without (n = 15) cataplexy, healthy controls from China (n = 110) and from Europe (n = 187, 2 nights each). After sleep-stage scoring and calculation of sleep characteristic parameters, the distributions of wake-episode durations and sleep-episode durations are determined for each group and fitted by power laws (exponent α) and by exponentials (decay time τ). We find that wake duration distributions are consistent with power laws for healthy subjects (China: α = 0.88, Europe: α = 1.02). Wake durations in all groups of narcolepsy patients, however, follow the exponential law (τ = 6.2-8.1 min). All sleep duration distributions are best fitted by exponentials on long time scales (τ = 34-82 min). We conclude that narcolepsy mainly alters the control of wake-episode durations but not sleep-episode durations, irrespective of cataplexy. Observed distributions of shortest wake and sleep durations suggest that differences in scoring habits regarding the scoring of short-term sleep stages may notably influence the fitting parameters but do not affect the main conclusion. © Sleep Research Society 2016. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.

  8. Improved Reweighting of Accelerated Molecular Dynamics Simulations for Free Energy Calculation.

    PubMed

    Miao, Yinglong; Sinko, William; Pierce, Levi; Bucher, Denis; Walker, Ross C; McCammon, J Andrew

    2014-07-08

    Accelerated molecular dynamics (aMD) simulations greatly improve the efficiency of conventional molecular dynamics (cMD) for sampling biomolecular conformations, but they require proper reweighting for free energy calculation. In this work, we systematically compare the accuracy of different reweighting algorithms including the exponential average, Maclaurin series, and cumulant expansion on three model systems: alanine dipeptide, chignolin, and Trp-cage. Exponential average reweighting can recover the original free energy profiles easily only when the distribution of the boost potential is narrow (e.g., the range ≤20 kBT) as found in dihedral-boost aMD simulation of alanine dipeptide. In dual-boost aMD simulations of the studied systems, exponential average generally leads to high energetic fluctuations, largely due to the fact that the Boltzmann reweighting factors are dominated by a very few high boost potential frames. In comparison, reweighting based on Maclaurin series expansion (equivalent to cumulant expansion on the first order) greatly suppresses the energetic noise but often gives incorrect energy minimum positions and significant errors at the energy barriers (∼2-3 kBT). Finally, reweighting using cumulant expansion to the second order is able to recover the most accurate free energy profiles within statistical errors of ∼kBT, particularly when the distribution of the boost potential exhibits low anharmonicity (i.e., near-Gaussian distribution), and should be of wide applicability. A toolkit of Python scripts for aMD reweighting "PyReweighting" is distributed free of charge at http://mccammon.ucsd.edu/computing/amdReweighting/.
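
    A schematic comparison of the exponential-average and second-order cumulant-expansion reweighting factors on synthetic, near-Gaussian boost-potential samples; the temperature, mean, and width of the boost distribution are assumptions, and this is not the PyReweighting toolkit.

      import numpy as np

      kT = 0.5961                 # kcal/mol at ~300 K
      beta = 1.0 / kT
      rng = np.random.default_rng(10)

      # Boost-potential samples collected in one bin of the reaction coordinate;
      # a near-Gaussian distribution is assumed here for illustration.
      dV = rng.normal(5.0, 1.5, size=5_000)

      # Exponential-average reweighting: dominated by the few largest dV values.
      log_w_exp = np.log(np.mean(np.exp(beta * dV)))

      # Cumulant expansion to second order: stable when dV is near-Gaussian.
      log_w_cum = beta * dV.mean() + 0.5 * beta**2 * dV.var()

      print(log_w_exp, log_w_cum)   # the two agree well for a near-Gaussian boost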

  10. Difference in Dwarf Galaxy Surface Brightness Profiles as a Function of Environment

    NASA Astrophysics Data System (ADS)

    Lee, Youngdae; Park, Hong Soo; Kim, Sang Chul; Moon, Dae-Sik; Lee, Jae-Joon; Kim, Dong-Jin; Cha, Sang-Mok

    2018-05-01

    We investigate surface brightness profiles (SBPs) of dwarf galaxies in field, group, and cluster environments. With deep BV I images from the Korea Microlensing Telescope Network Supernova Program, SBPs of 38 dwarfs in the NGC 2784 group are fitted by a single-exponential or double-exponential model. We find that 53% of the dwarfs are fitted with single-exponential profiles (“Type I”), while 47% of the dwarfs show double-exponential profiles; 37% of all dwarfs have smaller sizes for the outer part than the inner part (“Type II”), while 10% have a larger outer than inner part (“Type III”). We compare these results with those in the field and in the Virgo cluster, where the SBP types of 102 field dwarfs are compiled from a previous study and the SBP types of 375 cluster dwarfs are measured using SDSS r-band images. As a result, the distributions of SBP types are different in the three environments. Common SBP types for the field, the NGC 2784 group, and the Virgo cluster are Type II, Type I and II, and Type I and III profiles, respectively. After comparing the sizes of dwarfs in different environments, we suggest that, because the sizes of some dwarfs are changed by environmental effects, SBP types can be transformed, which explains why the distributions of SBP types differ among the three environments. We discuss possible environmental mechanisms for the transformation of SBP types. Based on data collected at KMTNet Telescopes and SDSS.
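
    A minimal sketch of the profile classification described above (not the authors' pipeline): fit a single exponential and a continuous broken (double) exponential to a surface brightness profile mu(r) in mag/arcsec^2 and label it Type I/II/III by comparing inner and outer scale lengths. The synthetic profile, noise level, and break-radius bounds are assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    def single_exp(r, mu0, h):
        return mu0 + 1.086 * r / h            # exponential disk in magnitudes

    def double_exp(r, mu0, h_in, h_out, r_b):
        mu_b = mu0 + 1.086 * r_b / h_in       # continuity at the break radius
        return np.where(r <= r_b,
                        mu0 + 1.086 * r / h_in,
                        mu_b + 1.086 * (r - r_b) / h_out)

    # Synthetic Type II profile (outer part steeper, i.e. smaller scale length).
    rng = np.random.default_rng(1)
    r = np.linspace(1, 30, 60)                      # arcsec, assumed sampling
    mu_obs = double_exp(r, 22.0, 8.0, 4.0, 15.0) + rng.normal(0, 0.05, r.size)

    p1, _ = curve_fit(single_exp, r, mu_obs, p0=[22, 6])
    p2, _ = curve_fit(double_exp, r, mu_obs, p0=[22, 6, 6, 12],
                      bounds=([15, 0.5, 0.5, 5], [30, 50, 50, 25]))

    mu0, h_in, h_out, r_b = p2
    if abs(h_out - h_in) / h_in < 0.1:
        sbp_type = "Type I (single exponential)"
    elif h_out < h_in:
        sbp_type = "Type II (outer part smaller/steeper)"
    else:
        sbp_type = "Type III (outer part larger/shallower)"

    print("single-exp fit  mu0=%.2f h=%.2f" % tuple(p1))
    print("double-exp fit  mu0=%.2f h_in=%.2f h_out=%.2f r_b=%.2f" % tuple(p2))
    print("classification:", sbp_type)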

  11. Quantifying patterns of research interest evolution

    NASA Astrophysics Data System (ADS)

    Jia, Tao; Wang, Dashun; Szymanski, Boleslaw

    Changing and shifting research interest is an integral part of a scientific career. Despite extensive investigations of various factors that influence a scientist's choice of research topics, quantitative assessments of mechanisms that give rise to macroscopic patterns characterizing research interest evolution of individual scientists remain limited. Here we perform a large-scale analysis of extensive publication records, finding that research interest change follows a reproducible pattern characterized by an exponential distribution. We identify three fundamental features responsible for the observed exponential distribution, which arise from a subtle interplay between exploitation and exploration in research interest evolution. We develop a random walk based model, which adequately reproduces our empirical observations. Our study presents one of the first quantitative analyses of macroscopic patterns governing research interest change, documenting a high degree of regularity underlying scientific research and individual careers.

  12. Exponential synchronization of neural networks with discrete and distributed delays under time-varying sampling.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2012-09-01

    This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristics of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, and thus the master systems synchronize with the slave systems. The desired sampled-data controller can be achieved by solving a set of linear matrix inequalities, which depend upon the maximum sampling interval and the decay rate. The obtained conditions are not only less conservative but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.

  13. Generalized optimal design for two-arm, randomized phase II clinical trials with endpoints from the exponential dispersion family.

    PubMed

    Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S

    2016-11-01

    For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample sizes subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family distributions. The proposed optimal design minimizes the total sample sizes needed to provide estimates of population means of both arms and their difference with pre-specified precision. Its applications on data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Cell Division and Evolution of Biological Tissues

    NASA Astrophysics Data System (ADS)

    Rivier, Nicolas; Arcenegui-Siemens, Xavier; Schliecker, Gudrun

    A tissue is a geometrical, space-filling, random cellular network; it remains in this steady state while individual cells divide. Cell division (fragmentation) is a local, elementary topological transformation which establishes statistical equilibrium of the structure. Statistical equilibrium is characterized by observable relations (Lewis, Aboav) between cell shapes, sizes and those of their neighbours, obtained through maximum entropy and topological correlation extending to nearest neighbours only, i.e. maximal randomness. For a two-dimensional tissue (epithelium), the distribution of cell shapes and that of mother and daughter cells can be obtained from elementary geometrical and physical arguments, except for an exponential factor favouring division of larger cells, and exponential and combinatorial factors encouraging a most symmetric division. The resulting distributions are very narrow, and stationarity severely restricts the range of an adjustable structural parameter

  15. Exponential Thurston maps and limits of quadratic differentials

    NASA Astrophysics Data System (ADS)

    Hubbard, John; Schleicher, Dierk; Shishikura, Mitsuhiro

    2009-01-01

    We give a topological characterization of postsingularly finite topological exponential maps, i.e., universal covers g: C → C \ {0} such that 0 has a finite orbit. Such a map either is Thurston equivalent to a unique holomorphic exponential map λ e^z or it has a topological obstruction called a degenerate Levy cycle. This is the first analog of Thurston's topological characterization theorem of rational maps, as published by Douady and Hubbard, for the case of infinite degree. One main tool is a theorem about the distribution of mass of an integrable quadratic differential with a given number of poles, providing an almost compact space of models for the entire mass of quadratic differentials. This theorem is given for arbitrary Riemann surfaces of finite type in a uniform way.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diwaker, E-mail: diwakerphysics@gmail.com; Chakraborty, Aniruddha

    The Smoluchowski equation with a time-dependent sink term is solved exactly. In this method, knowing the probability distribution P(0, s) at the origin allows deriving the probability distribution P(x, s) at all positions. Exact solutions of the Smoluchowski equation are also provided in different cases where the sink term has linear, constant, inverse, and exponential variation in time.

  17. Quantifying the relationship between PM2.5 concentration, visibility and planetary boundary layer height for long-lasting haze and fog-haze mixed events in Beijing

    NASA Astrophysics Data System (ADS)

    Luan, Tian; Guo, Xueliang; Guo, Lijun; Zhang, Tianhang

    2018-01-01

    Air quality and visibility are strongly influenced by aerosol loading, which is driven by meteorological conditions. The quantification of their relationships is critical to understanding the physical and chemical processes and to the forecasting of polluted events. We investigated and quantified the relationship between PM2.5 (particulate matter with aerodynamic diameter of 2.5 µm or less) mass concentration, visibility and planetary boundary layer (PBL) height in this study based on the data obtained from four long-lasting haze events and seven fog-haze mixed events from January 2014 to March 2015 in Beijing. The statistical results show that there was a negative exponential relationship between the visibility and the PM2.5 mass concentration for both haze and fog-haze mixed events (with the same R2 of 0.80). However, the fog-haze events caused a more pronounced decrease of visibility than the haze events due to the formation of fog droplets that could induce higher light extinction. The PM2.5 concentration had an inverse linear correlation with PBL height for haze events and a negative exponential correlation for fog-haze mixed events, indicating that the PM2.5 concentration is more sensitive to PBL height in fog-haze mixed events. The visibility had a positive linear correlation with the PBL height with an R2 of 0.35 in haze events and a positive exponential correlation with an R2 of 0.56 in fog-haze mixed events. We also investigated the physical mechanism responsible for these relationships between visibility, PM2.5 concentration and PBL height through a typical haze event and a typical fog-haze mixed event and found that a double inversion layer formed in both typical events and played critical roles in maintaining and enhancing the long-lasting polluted events. The variations of the double inversion layers were closely associated with the processes of long-wave radiation cooling in the nighttime and short-wave solar radiation reduction in the daytime. The upper-level stable inversion layer was formed by the persistent warm and humid southwestern airflow, while the low-level inversion layer was initially produced by the surface long-wave radiation cooling in the nighttime and maintained by the reduction of surface solar radiation in the daytime. The obvious descending process of the upper-level inversion layer induced by the radiation process could be responsible for the enhancement of the low-level inversion layer and the lowering of the PBL height, as well as high aerosol loading for these polluted events. The reduction of surface solar radiation in the daytime could be around 35 % for the haze event and 94 % for the fog-haze mixed event. Therefore, the formation and subsequent descending processes of the upper-level inversion layer should be an important factor in maintaining and strengthening long-lasting severely polluted events, which has not been revealed in previous publications. The interactions and feedbacks between PM2.5 concentration and PBL height linked by radiative processes caused a more significant and long-lasting deterioration of air quality and visibility in fog-haze mixed events. The interactions and feedbacks of all processes were particularly strong when the PM2.5 mass concentration was larger than 150-200 µg m-3.
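
    The negative exponential relation reported above can be fitted with a few lines of Python. This is a sketch on synthetic data; the parameterisation V = a*exp(-b*PM2.5) and the parameter values are illustrative assumptions, not the paper's regression.

    import numpy as np
    from scipy.optimize import curve_fit

    def vis_model(pm25, a, b):
        # Visibility (km) as a negative exponential function of PM2.5 (ug/m^3).
        return a * np.exp(-b * pm25)

    rng = np.random.default_rng(2)
    pm25 = rng.uniform(10, 400, 200)                                      # synthetic concentrations
    vis = vis_model(pm25, 25.0, 0.012) + rng.normal(0, 1.0, pm25.size)    # synthetic visibility

    popt, _ = curve_fit(vis_model, pm25, vis, p0=[20, 0.01])
    resid = vis - vis_model(pm25, *popt)
    r2 = 1 - np.sum(resid**2) / np.sum((vis - vis.mean())**2)
    print("fitted a = %.2f km, b = %.4f m^3/ug, R^2 = %.2f" % (popt[0], popt[1], r2))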

  18. Optical properties of cells with melanin

    NASA Astrophysics Data System (ADS)

    Rohde, Barukh; Coats, Israel; Krueger, James; Gareau, Dan

    2014-02-01

    The optical properties of pigmented lesions have been studied using diffuse reflectance spectroscopy in a noninvasive configuration on optically thick samples such as skin in vivo. However, it is difficult to un-mix the effects of absorption and scattering with diffuse reflectance spectroscopy techniques due to the complex anatomical distributions of absorbing and scattering biomolecules. We present a device and technique that enables absorption and scattering measurements of tissue volumes much smaller than the optical mean-free path. Because these measurements are taken on fresh-frozen sections, they are direct measurements of the optical properties of tissue, albeit in a different hydration state than in vivo tissue. Our results on lesions from 20 patients including melanomas and nevi show the absorption spectrum of melanin in melanocytes and basal keratinocytes. Our samples consisted of fresh frozen sections that were unstained. Fitting the spectrum as an exponential decay between 500 and 1100 nm [mua = A*exp(-B*(lambda-C)) + D], we report on the fit parameters and their variation due to biological heterogeneity as A = 4.20e4 +/- 1.57e5 [1/cm], B = 4.57e-3 +/- 1.62e-3 [1/nm], C = 210 +/- 510 [nm], D = 613 +/- 534 [1/cm]. The variability in these results is likely due to highly heterogeneous distributions of eumelanin and pheomelanin.
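
    A sketch of the quoted exponential-decay fit, mua(lambda) = A*exp(-B*(lambda-C)) + D over 500-1100 nm. The spectrum below is synthetic; only the functional form and the approximate magnitude of the parameters follow the abstract, and the partial degeneracy between A and C is noted in a comment.

    import numpy as np
    from scipy.optimize import curve_fit

    def mua_model(lam, A, B, C, D):
        return A * np.exp(-B * (lam - C)) + D

    rng = np.random.default_rng(3)
    lam = np.linspace(500, 1100, 121)                       # wavelength grid, nm
    mua = mua_model(lam, 4.2e4, 4.6e-3, 210.0, 600.0)
    mua = mua * (1 + rng.normal(0, 0.03, lam.size))         # assumed measurement noise

    # Note: A and C are partially degenerate (the data constrain A*exp(B*C)),
    # which is one reason the reported parameter uncertainties are so large.
    popt, _ = curve_fit(mua_model, lam, mua, p0=[1e4, 4e-3, 200.0, 500.0], maxfev=20000)
    print("A=%.3g 1/cm  B=%.3g 1/nm  C=%.3g nm  D=%.3g 1/cm" % tuple(popt))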

  19. Diffusion in an expanding medium: Fokker-Planck equation, Green's function, and first-passage properties

    NASA Astrophysics Data System (ADS)

    Yuste, S. B.; Abad, E.; Escudero, C.

    2016-09-01

    We present a classical, mesoscopic derivation of the Fokker-Planck equation for diffusion in an expanding medium. To this end, we take a conveniently generalized Chapman-Kolmogorov equation as the starting point. We obtain an analytical expression for the Green's function (propagator) and investigate both analytically and numerically how this function and the associated moments behave. We also study first-passage properties in expanding hyperspherical geometries. We show that in all cases the behavior is determined to a great extent by the so-called Brownian conformal time τ(t), which we define via the relation dτ/dt = 1/a², where a(t) is the expansion scale factor. If the medium expansion is driven by a power law [a(t) ∝ t^γ with γ > 0], then we find interesting crossover effects in the mixing effectiveness of the diffusion process when the characteristic exponent γ is varied. Crossover effects are also found at the level of the survival probability and of the moments of the first-passage-time distribution, with two different regimes separated by the critical value γ = 1/2. The case of an exponential scale factor is analyzed separately both for expanding and contracting media. In the latter situation, a stationary probability distribution arises in the long-time limit.
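
    The role of the critical exponent γ = 1/2 can be made concrete by evaluating the Brownian conformal time τ(t) = ∫ dt'/a(t')². The sketch below (with t0 = 1 and a power-law scale factor assumed for illustration) compares numerical integration with the closed form and shows that τ saturates for γ > 1/2, so diffusion effectively freezes out in a fast-expanding medium.

    import numpy as np
    from scipy.integrate import quad

    # Brownian conformal time tau(t) = int_{t0}^{t} dt'/a(t')^2 for a power-law
    # scale factor a(t) = (t/t0)^gamma, with t0 = 1 assumed.
    t0 = 1.0

    def tau_numeric(t, gamma):
        val, _ = quad(lambda s: (s / t0) ** (-2 * gamma), t0, t)
        return val

    def tau_closed(t, gamma):
        if abs(gamma - 0.5) < 1e-12:
            return t0 * np.log(t / t0)
        return t0 / (1 - 2 * gamma) * ((t / t0) ** (1 - 2 * gamma) - 1)

    for gamma in (0.25, 0.5, 0.75):
        for t in (10.0, 1e3, 1e5):
            print("gamma=%.2f t=%8.0f  tau_num=%10.4f  tau_exact=%10.4f"
                  % (gamma, t, tau_numeric(t, gamma), tau_closed(t, gamma)))
    # For gamma < 1/2 tau grows without bound; for gamma > 1/2 it approaches a
    # finite limit, which is the origin of the crossover discussed above.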

  20. A Computer Program for Practical Semivariogram Modeling and Ordinary Kriging: A Case Study of Porosity Distribution in an Oil Field

    NASA Astrophysics Data System (ADS)

    Mert, Bayram Ali; Dag, Ahmet

    2017-12-01

    In this study, firstly, a practical and educational geostatistical program (JeoStat) was developed, and then an example analysis of the porosity parameter distribution, using oilfield data, was presented. With this program, two- or three-dimensional variogram analysis can be performed by using normal, log-normal or indicator transformed data. In these analyses, JeoStat offers seven commonly used theoretical variogram models (Spherical, Gaussian, Exponential, Linear, Generalized Linear, Hole Effect and Paddington Mix) to the users. These theoretical models can be easily and quickly fitted to experimental models using a mouse. JeoStat uses the ordinary kriging interpolation technique for computation of point or block estimates, and also uses cross-validation test techniques for validation of the fitted theoretical model. All the results obtained by the analysis as well as all the graphics such as histogram, variogram and kriging estimation maps can be saved to the hard drive, including digitised graphics and maps. As such, the numerical values of any point in the map can be monitored using a mouse and text boxes. This program is available to students, researchers, consultants and corporations of any size free of charge. The JeoStat software package and source codes are available at: http://www.jeostat.com/JeoStat_2017.0.rar.
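
    A minimal sketch, independent of JeoStat, of the workflow the program automates: build an experimental semivariogram from scattered porosity data and fit one of the theoretical models it offers (here the exponential model). Locations, porosity values, and lag classes are synthetic assumptions.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(4)
    xy = rng.uniform(0, 1000, size=(300, 2))          # well locations, m (synthetic)
    porosity = (12 + np.sin(xy[:, 0] / 300) + 0.5 * np.cos(xy[:, 1] / 250)
                + rng.normal(0, 0.3, 300))            # percent (synthetic)

    # Experimental semivariogram: half the mean squared difference per lag class.
    d = pdist(xy)
    g = 0.5 * pdist(porosity[:, None], metric="sqeuclidean")
    lag_edges = np.arange(0, 600, 50)
    lag_centers = 0.5 * (lag_edges[:-1] + lag_edges[1:])
    gamma_exp = np.array([g[(d >= lo) & (d < hi)].mean()
                          for lo, hi in zip(lag_edges[:-1], lag_edges[1:])])

    def exp_variogram(h, nugget, sill, rng_par):
        return nugget + sill * (1.0 - np.exp(-h / rng_par))

    popt, _ = curve_fit(exp_variogram, lag_centers, gamma_exp,
                        p0=[0.1, 1.0, 200.0], bounds=(0, np.inf))
    print("nugget=%.3f  partial sill=%.3f  range parameter=%.1f m" % tuple(popt))
    # The fitted model would then supply the gamma values used to build the
    # ordinary-kriging system for point or block estimation.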

  1. Decision Support System for hydrological extremes

    NASA Astrophysics Data System (ADS)

    Bobée, Bernard; El Adlouni, Salaheddine

    2014-05-01

    The study of the tail behaviour of extreme event distributions is important in several applied statistical fields such as hydrology, finance, and telecommunications. For example in hydrology, it is important to estimate extreme quantiles adequately in order to build and manage safe and effective hydraulic structures (dams, for example). Two main classes of distributions are used in hydrological frequency analysis: the class D of sub-exponential distributions (Gamma (G2), Gumbel, Halphen type A (HA), Halphen type B (HB)…) and the class C of regularly varying distributions (Fréchet, Log-Pearson, Halphen type IB …) with a heavier tail. A Decision Support System (DSS) based on the characterization of the right tail, corresponding to a low probability of exceedance p (high return period T=1/p, in hydrology), has been developed. The DSS allows discriminating between classes C and D, and in its latest version a new prior step is added in order to test Lognormality. Indeed, the right tail of the Lognormal distribution (LN) is between the tails of distributions of the classes C and D; studies indicated difficulty with the discrimination between LN and distributions of the classes C and D. Other tools are useful to discriminate between distributions of the same class D (HA, HB and G2; see other communication). Some numerical illustrations show that the DSS allows discriminating between Lognormal, regularly varying and sub-exponential distributions and leads to coherent conclusions. Key words: Regularly varying distributions, subexponential distributions, Decision Support System, Heavy tailed distribution, Extreme value theory

  2. Transition from Exponential to Power Law Income Distributions in a Chaotic Market

    NASA Astrophysics Data System (ADS)

    Pellicer-Lostao, Carmen; Lopez-Ruiz, Ricardo

    Economics is demanding new models, able to understand and predict the evolution of markets. In this respect, Econophysics offers models of markets as complex systems, which try to comprehend macro-, system-wide states of the economy from the interaction of many agents at the micro-level. One of these models is the gas-like model for trading markets. It tries to predict money distributions in closed economies and, quite simply, obtains the ones observed in real economies. However, it has technical difficulties explaining the power law distribution observed for individuals with high incomes. In this work, nonlinear dynamics is introduced in the gas-like model in an effort to overcome these flaws. A particular chaotic dynamics is used to break the pairing symmetry of agents (i, j) ⇔ (j, i). The results demonstrate that a "chaotic gas-like model" can reproduce the Exponential and Power law distributions observed in real economies. Moreover, it controls the transition between them. This may give some insight into the micro-level causes that give rise to unfair distributions of money in a global society. Ultimately, the chaotic model makes obvious the inherent instability of asymmetric scenarios, where sinks of wealth appear and doom the market to extreme inequality.
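
    The symmetric gas-like trading model that the paper modifies is easy to reproduce; the sketch below shows random pairwise exchanges with a uniform split relaxing to the exponential (Boltzmann-Gibbs) money distribution. The chaotic pairing rule that produces the power-law tail is not implemented here, and the agent count and number of sweeps are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    N = 10_000
    money = np.full(N, 100.0)          # everyone starts with the same amount

    for sweep in range(200):           # each sweep: every agent trades once
        perm = rng.permutation(N)
        a, b = perm[: N // 2], perm[N // 2:]
        total = money[a] + money[b]
        eps = rng.random(N // 2)       # uniformly random split of the pair total
        money[a], money[b] = eps * total, (1 - eps) * total

    # Compare the empirical distribution with exp(-m/<m>)/<m>.
    mean_m = money.mean()
    hist, edges = np.histogram(money, bins=50, range=(0, 600), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    theory = np.exp(-centers / mean_m) / mean_m
    print("mean money       :", round(mean_m, 2))
    print("max |hist-theory|:", round(np.max(np.abs(hist - theory)), 4))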

  3. Resource acquisition, distribution and end-use efficiencies and the growth of industrial society

    NASA Astrophysics Data System (ADS)

    Jarvis, A. J.; Jarvis, S. J.; Hewitt, C. N.

    2015-10-01

    A key feature of the growth of industrial society is the acquisition of increasing quantities of resources from the environment and their distribution for end-use. With respect to energy, the growth of industrial society appears to have been near-exponential for the last 160 years. We provide evidence that indicates that the global distribution of resources that underpins this growth may be facilitated by the continual development and expansion of near-optimal directed networks (roads, railways, flight paths, pipelines, cables etc.). However, despite this continual striving for optimisation, the distribution efficiencies of these networks must decline over time as they expand due to path lengths becoming longer and more tortuous. Therefore, to maintain long-term exponential growth the physical limits placed on the distribution networks appear to be counteracted by innovations deployed elsewhere in the system, namely at the points of acquisition and end-use of resources. We postulate that the maintenance of the growth of industrial society, as measured by global energy use, at the observed rate of ~ 2.4 % yr-1 stems from an implicit desire to optimise patterns of energy use over human working lifetimes.

  4. A mathematical model for the occurrence of historical events

    NASA Astrophysics Data System (ADS)

    Ohnishi, Teruaki

    2017-12-01

    A mathematical model was proposed for the frequency distribution of historical inter-event time τ. A basic ingredient was constructed by assuming that the significance of a newly occurring historical event depends on the magnitude of a preceding event, that its significance decreases through oblivion during successive events, and that events occur as an independent Poisson process. The frequency distribution of τ was derived by integrating the basic ingredient with respect to all social fields and to all stakeholders. The resulting distribution takes the form of an exponential type, a power law type or an exponential-with-a-tail type, depending on the values of constants appearing in the ingredient. The validity of this model was studied by applying it to the two cases of Modern China and the Northern Ireland Troubles, where the τ-distribution varies with the different countries interacting with China and with the different stages of the history of the Troubles, respectively. This indicates that history is composed of many components with different types of τ-distribution, a situation similar to that of other general human activities.

  5. Diffusion and Mixing in Globular Clusters

    NASA Astrophysics Data System (ADS)

    Meiron, Yohai; Kocsis, Bence

    2018-03-01

    Collisional relaxation describes the stochastic process with which a self-gravitating system near equilibrium evolves in phase-space due to the fluctuating gravitational field of the system. The characteristic timescale of this process is called the relaxation time. In this paper, we highlight the difference between two measures of the relaxation time in globular clusters: (1) the diffusion time with which the isolating integrals of motion (i.e., energy E and angular momentum magnitude L) of individual stars change stochastically and (2) the asymptotic timescale required for a family of orbits to mix in the cluster. More specifically, the former corresponds to the instantaneous rate of change of a star’s E or L, while the latter corresponds to the timescale for the stars to statistically forget their initial conditions. We show that the diffusion timescales of E and L vary systematically around the commonly used half-mass relaxation time in different regions of the cluster by a factor of ∼10 and ∼100, respectively, for more than 20% of the stars. We define the mixedness of an orbital family at any given time as the correlation coefficient between its E or L probability distribution functions and those of the whole cluster. Using Monte Carlo simulations, we find that mixedness converges asymptotically exponentially with a decay timescale that is ∼10 times the half-mass relaxation time.

  6. Determine Neuronal Tuning Curves by Exploring Optimum Firing Rate Distribution for Information Efficiency

    PubMed Central

    Han, Fang; Wang, Zhijie; Fan, Hong

    2017-01-01

    This paper proposed a new method to determine the neuronal tuning curves for maximum information efficiency by computing the optimum firing rate distribution. Firstly, we proposed a general definition for the information efficiency, which is relevant to mutual information and neuronal energy consumption. The energy consumption is composed of two parts: neuronal basic energy consumption and neuronal spike emission energy consumption. A parameter to model the relative importance of energy consumption is introduced in the definition of the information efficiency. Then, we designed a combination of exponential functions to describe the optimum firing rate distribution based on the analysis of the dependency of the mutual information and the energy consumption on the shape of the functions of the firing rate distributions. Furthermore, we developed a rapid algorithm to search the parameter values of the optimum firing rate distribution function. Finally, we found with the rapid algorithm that a combination of two different exponential functions with two free parameters can describe the optimum firing rate distribution accurately. We also found that if the energy consumption is relatively unimportant (important) compared to the mutual information or the neuronal basic energy consumption is relatively large (small), the curve of the optimum firing rate distribution will be relatively flat (steep), and the corresponding optimum tuning curve exhibits a form of sigmoid if the stimuli distribution is normal. PMID:28270760

  7. Regularization of moving boundaries in a laplacian field by a mixed Dirichlet-Neumann boundary condition: exact results.

    PubMed

    Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar

    2005-11-04

    The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.

  8. Properties of single NMDA receptor channels in human dentate gyrus granule cells

    PubMed Central

    Lieberman, David N; Mody, Istvan

    1999-01-01

    Cell-attached single-channel recordings of NMDA channels were carried out in human dentate gyrus granule cells acutely dissociated from slices prepared from hippocampi surgically removed for the treatment of temporal lobe epilepsy (TLE). The channels were activated by l-aspartate (250–500 nM) in the presence of saturating glycine (8 μM). The main conductance was 51 ± 3 pS. In ten of thirty granule cells, clear subconductance states were observed with a mean conductance of 42 ± 3 pS, representing 8 ± 2% of the total openings. The mean open times varied from cell to cell, possibly owing to differences in the epileptogenicity of the tissue of origin. The mean open time was 2.70 ± 0.95 ms (range, 1.24–4.78 ms). In 87% of the cells, three exponential components were required to fit the apparent open time distributions. In the remaining neurons, as in control rat granule cells, two exponentials were sufficient. Shut time distributions were fitted by five exponential components. The average numbers of openings in bursts (1.74 ± 0.09) and clusters (3.06 ± 0.26) were similar to values obtained in rodents. The mean burst (6.66 ± 0.9 ms), cluster (20.1 ± 3.3 ms) and supercluster lengths (116.7 ± 17.5 ms) were longer than those in control rat granule cells, but approached the values previously reported for TLE (kindled) rats. As in rat NMDA channels, adjacent open and shut intervals appeared to be inversely related to each other, but it was only the relative areas of the three open time constants that changed with adjacent shut time intervals. The long openings of human TLE NMDA channels resembled those produced by calcineurin inhibitors in control rat granule cells. Yet the calcineurin inhibitor FK-506 (500 nM) did not prolong the openings of human channels, consistent with a decreased calcineurin activity in human TLE. Many properties of the human NMDA channels resemble those recorded in rat hippocampal neurons. Both have similar slope conductances, five exponential shut time distributions, complex groupings of openings, and a comparable number of openings per grouping. Other properties of human TLE NMDA channels correspond to those observed in kindling; the openings are considerably long, requiring an additional exponential component to fit their distributions, and inhibition of calcineurin is without effect in prolonging the openings. PMID:10373689
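
    The multi-exponential open-time distributions described above are fitted in practice as mixtures of exponentials, the same mixed exponential model that runs through this collection. The sketch below fits a two-component mixture to synthetic dwell times by maximum likelihood; the component count, time constants, and sample sizes are assumptions (the human TLE data required three components).

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)
    t = np.concatenate([rng.exponential(0.8, 3000),    # fast component, ms (synthetic)
                        rng.exponential(4.0, 1500)])   # slow component, ms (synthetic)

    def neg_log_lik(params):
        logit_w, log_tau1, log_tau2 = params
        w = 1.0 / (1.0 + np.exp(-logit_w))             # area of component 1
        tau1, tau2 = np.exp(log_tau1), np.exp(log_tau2)
        pdf = w / tau1 * np.exp(-t / tau1) + (1 - w) / tau2 * np.exp(-t / tau2)
        return -np.sum(np.log(pdf))

    res = minimize(neg_log_lik, x0=[0.0, np.log(1.0), np.log(5.0)],
                   method="Nelder-Mead")
    w = 1.0 / (1.0 + np.exp(-res.x[0]))
    tau1, tau2 = np.exp(res.x[1:])
    print("area1=%.2f tau1=%.2f ms   area2=%.2f tau2=%.2f ms"
          % (w, tau1, 1 - w, tau2))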

  9. Quantum Mechanical Noise in a Michelson Interferometer with Nonclassical Inputs: Nonperturbative Treatment

    NASA Technical Reports Server (NTRS)

    King, Sun-Kun

    1996-01-01

    The variances of the quantum-mechanical noise in a two-input-port Michelson interferometer within the framework of the Loudon-Ni model were solved exactly in two general cases: (1) one coherent state input and one squeezed state input, and (2) two photon number states inputs. Low intensity limit, exponential decaying signal and the noise due to mixing were discussed briefly.

  10. Statistical Characteristics of the Gaussian-Noise Spikes Exceeding the Specified Threshold as Applied to Discharges in a Thundercloud

    NASA Astrophysics Data System (ADS)

    Klimenko, V. V.

    2017-12-01

    We obtain expressions for the probabilities of normal-noise spikes with a Gaussian correlation function and for the probability density of the inter-spike intervals. In contrast to delta-correlated noise, for which the intervals follow an exponential law, for a finite noise-correlation time (a frequency-bandwidth restriction) the probability of a subsequent spike depends on the previous spike and the interval-distribution law deviates from the exponential one. This deviation is most pronounced for a low detection threshold. We observe similar behavior of the distributions of inter-discharge intervals in a thundercloud and of the noise-spike intervals as the repetition rate of the discharges/spikes varies, the rate being determined by the ratio of the detection threshold to the root-mean-square value of the noise. The results of this work can be useful for the quantitative description of the statistical characteristics of noise spikes and for studying the role of fluctuations in the emergence of discharges in a thundercloud.
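
    The qualitative effect can be reproduced numerically: the sketch below detects threshold upcrossings in white Gaussian noise and in noise smoothed with a Gaussian kernel (an assumed stand-in for a Gaussian correlation function) and compares the inter-spike interval statistics; a coefficient of variation away from 1 signals departure from the exponential interval law. The threshold and correlation time are arbitrary choices.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(7)
    n = 2_000_000
    white = rng.normal(size=n)
    corr = gaussian_filter1d(white, sigma=10)      # band-limited (correlated) noise
    corr /= corr.std()                             # back to unit variance

    def upcross_intervals(x, threshold):
        above = x > threshold
        ups = np.flatnonzero(~above[:-1] & above[1:]) + 1
        return np.diff(ups)

    for name, x in (("white", white), ("correlated", corr)):
        iv = upcross_intervals(x, threshold=2.0)
        cv = iv.std() / iv.mean()                  # = 1 for an exponential law
        print("%-10s  N=%6d  mean interval=%8.1f  CV=%.2f"
              % (name, iv.size, iv.mean(), cv))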

  11. Base stock system for patient vs impatient customers with varying demand distribution

    NASA Astrophysics Data System (ADS)

    Fathima, Dowlath; Uduman, P. Sheik

    2013-09-01

    An optimal base-stock inventory policy for patient and impatient customers using finite-horizon models is examined. The base-stock system for patient and impatient customers is a distinct type of inventory policy. In model I, the base stock for the patient-customer case is evaluated using the truncated exponential distribution. Model II studies base-stock inventory policies for impatient customers. A study of these systems reveals that customers either wait until the arrival of the next order or leave the system, which leads to lost sales. In both models the demand during the period [0, t] is taken to be a random variable. In this paper, the truncated exponential distribution satisfies the base-stock policy for the patient customer as a continuous model. So far, the base stock for impatient customers has led to a discrete case, but in this paper we model this condition as a continuous case. We justify this approach mathematically and also numerically.
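
    A sketch of the continuous formulation: demand over [0, t] modelled by an exponential distribution truncated to [0, b], sampled by inverse transform, with the base-stock level set at a target service-level quantile. The rate, truncation point, and service level are illustrative assumptions, not the paper's values.

    import numpy as np

    rng = np.random.default_rng(8)
    lam, b = 1.0 / 40.0, 120.0        # assumed rate and truncation point
    alpha = 0.95                      # assumed target cycle service level

    def truncated_exp_sample(u, lam, b):
        # Inverse CDF of an exponential distribution truncated to [0, b].
        return -np.log(1.0 - u * (1.0 - np.exp(-lam * b))) / lam

    u = rng.random(100_000)
    demand = truncated_exp_sample(u, lam, b)

    # Base-stock level S*: smallest S with P(demand <= S) >= alpha.
    S_analytic = -np.log(1.0 - alpha * (1.0 - np.exp(-lam * b))) / lam
    S_empirical = np.quantile(demand, alpha)
    print("base stock (analytic)  = %.1f" % S_analytic)
    print("base stock (empirical) = %.1f" % S_empirical)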

  12. The topology of large Open Connectome networks for the human brain.

    PubMed

    Gastner, Michael T; Ódor, Géza

    2016-06-07

    The structural human connectome (i.e. the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to nodes and edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff. We also calculate the topological (graph) dimension D and the small-world coefficient σ of these networks. While σ suggests a small-world topology, we found that D < 4 showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.
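
    The model-selection step can be sketched as follows: fit candidate distributions to a node-degree sample and compare them by AIC, with the stretched exponential represented by SciPy's three-parameter Weibull (location fixed at zero here for numerical stability). The degree sample is synthetic, not Open Connectome data.

    import numpy as np
    from scipy import stats

    degrees = stats.weibull_min.rvs(c=0.7, scale=40.0, size=20_000, random_state=9)

    candidates = {
        "stretched exp (Weibull)": stats.weibull_min,
        "log-normal": stats.lognorm,
        "exponential": stats.expon,
    }
    for name, dist in candidates.items():
        params = dist.fit(degrees, floc=0.0)     # location fixed for stability
        ll = np.sum(dist.logpdf(degrees, *params))
        k = len(params) - 1                      # free parameters (loc is fixed)
        aic = 2 * k - 2 * ll
        print("%-24s  params=%s  AIC=%.1f" % (name, np.round(params, 3), aic))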

  14. The Superstatistical Nature and Interoccurrence Time of Atmospheric Mercury Concentration Fluctuations

    NASA Astrophysics Data System (ADS)

    Carbone, F.; Bruno, A. G.; Naccarato, A.; De Simone, F.; Gencarelli, C. N.; Sprovieri, F.; Hedgecock, I. M.; Landis, M. S.; Skov, H.; Pfaffhuber, K. A.; Read, K. A.; Martin, L.; Angot, H.; Dommergue, A.; Magand, O.; Pirrone, N.

    2018-01-01

    The probability density function (PDF) of the time intervals between subsequent extreme events in atmospheric Hg0 concentration data series from different latitudes has been investigated. The Hg0 dynamic possesses a long-term memory autocorrelation function. Above a fixed threshold Q in the data, the PDFs of the interoccurrence time of the Hg0 data are well described by a Tsallis q-exponential function. This PDF behavior has been explained in the framework of superstatistics, where the competition between multiple mesoscopic processes affects the macroscopic dynamics. An extensive parameter μ, encompassing all possible fluctuations related to mesoscopic phenomena, has been identified. It follows a χ2 distribution, indicative of the superstatistical nature of the overall process. Shuffling the data series destroys the long-term memory, the distributions become independent of Q, and the PDFs collapse on to the same exponential distribution. The possible central role of atmospheric turbulence on extreme events in the Hg0 data is highlighted.
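
    A sketch of fitting the Tsallis q-exponential, p(tau) = (2-q)*lam*[1 + (q-1)*lam*tau]^(-1/(q-1)) with 1 < q < 2, to interoccurrence times by maximum likelihood. The data are generated from the same family by inverse transform and stand in for the Hg0 records; the q and lambda values are assumptions.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(10)
    q_true, lam_true = 1.35, 0.08
    u = rng.random(20_000)
    tau = ((1 - u) ** (-(q_true - 1) / (2 - q_true)) - 1) / ((q_true - 1) * lam_true)

    def nll(params):
        q, lam = params
        if not (1.0001 < q < 1.9999) or lam <= 0:
            return np.inf
        z = 1 + (q - 1) * lam * tau
        # log p(tau) = log((2-q)*lam) - log(z)/(q-1)
        return -np.sum(np.log((2 - q) * lam) - np.log(z) / (q - 1))

    res = minimize(nll, x0=[1.2, 0.05], method="Nelder-Mead")
    print("fitted q = %.3f, lambda = %.4f (true %.2f, %.2f)"
          % (res.x[0], res.x[1], q_true, lam_true))
    # q -> 1 recovers the ordinary exponential law, which is what the shuffled
    # (memory-destroyed) series collapse onto in the study above.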

  15. Probability Distributions for Random Quantum Operations

    NASA Astrophysics Data System (ADS)

    Schultz, Kevin

    Motivated by uncertainty quantification and inference of quantum information systems, in this work we draw connections between the notions of random quantum states and operations in quantum information with probability distributions commonly encountered in the field of orientation statistics. This approach identifies natural sample spaces and probability distributions upon these spaces that can be used in the analysis, simulation, and inference of quantum information systems. The theory of exponential families on Stiefel manifolds provides the appropriate generalization to the classical case. Furthermore, this viewpoint motivates a number of additional questions into the convex geometry of quantum operations relative to both the differential geometry of Stiefel manifolds as well as the information geometry of exponential families defined upon them. In particular, we draw on results from convex geometry to characterize which quantum operations can be represented as the average of a random quantum operation. This project was supported by the Intelligence Advanced Research Projects Activity via Department of Interior National Business Center Contract Number 2012-12050800010.

  16. GISAXS modelling of helium-induced nano-bubble formation in tungsten and comparison with TEM

    NASA Astrophysics Data System (ADS)

    Thompson, Matt; Sakamoto, Ryuichi; Bernard, Elodie; Kirby, Nigel; Kluth, Patrick; Riley, Daniel; Corr, Cormac

    2016-05-01

    Grazing-incidence small angle x-ray scattering (GISAXS) is a powerful non-destructive technique for the measurement of nano-bubble formation in tungsten under helium plasma exposure. Here, we present a comparative study between transmission electron microscopy (TEM) and GISAXS measurements of nano-bubble formation in tungsten exposed to helium plasma in the Large Helical Device (LHD) fusion experiment. Both techniques are in excellent agreement, suggesting that nano-bubbles range from spheroidal to ellipsoidal, displaying exponential diameter distributions with mean diameters μ=0.68 ± 0.04 nm and μ=0.6 ± 0.1 nm measured by TEM and GISAXS, respectively. Depth distributions were also computed and are well described by exponentials, with mean depths of 8.4 ± 0.5 nm and 9.1 ± 0.4 nm for TEM and GISAXS. In GISAXS modelling, spheroidal particles were fitted with an aspect ratio ε=0.7 ± 0.1. The GISAXS model used is described in detail.

  17. Changes in speed distribution: Applying aggregated safety effect models to individual vehicle speeds.

    PubMed

    Vadeby, Anna; Forsman, Åsa

    2017-06-01

    This study investigated the effect of applying two aggregated models (the Power model and the Exponential model) to individual vehicle speeds instead of mean speeds. This is of particular interest when the measure introduced affects different parts of the speed distribution differently. The aim was to examine how the estimated overall risk was affected when assuming the models are valid on an individual vehicle level. Speed data from two applications of speed measurements were used in the study: an evaluation of movable speed cameras and a national evaluation of new speed limits in Sweden. The results showed that when applied on individual vehicle speed level compared with aggregated level, there was essentially no difference between these for the Power model in the case of injury accidents. However, for fatalities the difference was greater, especially for roads with new cameras where those driving fastest reduced their speed the most. For the case with new speed limits, the individual approach estimated a somewhat smaller effect, reflecting that changes in the 15th percentile (P15) were somewhat larger than changes in P85 in this case. For the Exponential model there was also a clear, although small, difference between applying the model to mean speed changes and individual vehicle speed changes when speed cameras were used. This applied both for injury accidents and fatalities. There were also larger effects for the Exponential model than for the Power model, especially for injury accidents. In conclusion, applying the Power or Exponential model to individual vehicle speeds is an alternative that provides reasonable results in relation to the original Power and Exponential models, but more research is needed to clarify the shape of the individual risk curve. It is not surprising that the impact on severe traffic crashes was larger in situations where those driving fastest reduced their speed the most. Further investigations on use of the Power and/or the Exponential model at individual vehicle level would require more data on the individual level from a range of international studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
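
    The aggregated-versus-individual comparison can be sketched as follows. The Power model scales risk as (v_after/v_before)^e and the Exponential model as exp(beta*(v_after - v_before)); one simple individual-level application averages the per-vehicle risk ratios. The exponent e = 4, the coefficient beta = 0.08 per km/h, the speed sample, and the assumed speed-reduction pattern are illustrative assumptions, not the study's estimates.

    import numpy as np

    rng = np.random.default_rng(11)
    v_before = rng.normal(85, 10, 50_000)                 # km/h, synthetic speeds
    # Assume the measure slows the fastest drivers the most.
    reduction = np.clip(0.15 * (v_before - 70), 0, None)
    v_after = v_before - reduction

    e, beta = 4.0, 0.08                                   # assumed model constants

    # Aggregated application (mean speeds only).
    rr_power_mean = (v_after.mean() / v_before.mean()) ** e
    rr_exp_mean = np.exp(beta * (v_after.mean() - v_before.mean()))

    # Individual application: average the per-vehicle risk ratios.
    rr_power_ind = np.mean((v_after / v_before) ** e)
    rr_exp_ind = np.mean(np.exp(beta * (v_after - v_before)))

    print("Power model:       mean-speed %.3f  individual %.3f"
          % (rr_power_mean, rr_power_ind))
    print("Exponential model: mean-speed %.3f  individual %.3f"
          % (rr_exp_mean, rr_exp_ind))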

  18. A snapshot of internal waves and hydrodynamic instabilities in the southern Bay of Bengal

    NASA Astrophysics Data System (ADS)

    Lozovatsky, Iossif; Wijesekera, Hemantha; Jarosz, Ewa; Lilover, Madis-Jaak; Pirro, Annunziata; Silver, Zachariah; Centurioni, Luca; Fernando, H. J. S.

    2016-08-01

    Measurements conducted in the southern Bay of Bengal (BoB) as a part of the ASIRI-EBoB Program portray the characteristics of high-frequency internal waves in the upper pycnocline as well as the velocity structure with episodic events of shear instability. A 20 h time series of CTD, ADCP, and acoustic backscatter profiles down to 150 m as well as temporal CTD measurements in the pycnocline at z = 54 m were taken to the east of Sri Lanka. Internal waves of periods ˜10-40 min were recorded at all depths below a shallow (˜20-30 m) surface mixed layer in the background of an 8 m amplitude internal tide. The absolute values of vertical displacements associated with high-frequency waves followed the Nakagami distribution with a median value of 2.1 m and a 95% quantile of 6.5 m. The internal wave amplitudes are normally distributed. The tails of the distribution deviate from normality due to episodic high-amplitude displacements. The sporadic appearance of internal waves with amplitudes exceeding ˜5 m usually coincided with patches of low Richardson numbers, pointing to local shear instability as a possible mechanism of internal-wave-induced turbulence. The probability of shear instability in the summer BoB pycnocline based on an exponential distribution of the inverse Richardson number, however, appears to be relatively low, not exceeding 4% for Ri < 0.25 and about 10% for Ri < 0.36 (K-H billows). The probability of the generation of asymmetric breaking internal waves and Holmboe instabilities is above ˜25%.
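
    The quoted probabilities are mutually consistent under the stated assumption that the inverse Richardson number is exponentially distributed, as the short check below shows (the rate parameter is inferred from the reported 4% at Ri < 0.25).

    import numpy as np

    # Assume x = 1/Ri ~ Exp(rate r), so P(Ri < Ric) = P(x > 1/Ric) = exp(-r/Ric).
    p_025 = 0.04                       # reported P(Ri < 0.25)
    r = -0.25 * np.log(p_025)          # implied exponential rate parameter
    for ric in (0.25, 0.36, 1.0):
        print("P(Ri < %.2f) = %.3f" % (ric, np.exp(-r / ric)))
    # P(Ri < 0.36) comes out near 0.10, matching the ~10% quoted for K-H billows.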

  19. Evaluation of induced seismicity forecast models in the Induced Seismicity Test Bench

    NASA Astrophysics Data System (ADS)

    Király, Eszter; Gischig, Valentin; Zechar, Jeremy; Doetsch, Joseph; Karvounis, Dimitrios; Wiemer, Stefan

    2016-04-01

    Induced earthquakes often accompany fluid injection, and the seismic hazard they pose threatens various underground engineering projects. Models to monitor and control induced seismic hazard with traffic light systems should be probabilistic, forward-looking, and updated as new data arrive. Here, we propose an Induced Seismicity Test Bench to test and rank such models. We apply the test bench to data from the Basel 2006 and Soultz-sous-Forêts 2004 geothermal stimulation projects, and we assess forecasts from two models that incorporate a different mix of physical understanding and stochastic representation of the induced sequences: Shapiro in Space (SiS) and Hydraulics and Seismics (HySei). SiS is based on three pillars: the seismicity rate is computed with help of the seismogenic index and a simple exponential decay of the seismicity; the magnitude distribution follows the Gutenberg-Richter relation; and seismicity is distributed in space based on smoothing seismicity during the learning period with 3D Gaussian kernels. The HySei model describes seismicity triggered by pressure diffusion with irreversible permeability enhancement. Our results show that neither model is fully superior to the other. HySei forecasts the seismicity rate well, but is only mediocre at forecasting the spatial distribution. On the other hand, SiS forecasts the spatial distribution well but not the seismicity rate. The shut-in phase is a difficult moment for both models in both reservoirs: the models tend to underpredict the seismicity rate around, and shortly after, shut-in. Ensemble models that combine HySei's rate forecast with SiS's spatial forecast outperform each individual model.

  20. Momentum distributions for the quantum delta-kicked rotor with decoherence

    PubMed

    Vant; Ball; Christensen

    2000-05-01

    We report on the momentum distribution line shapes for the quantum delta-kicked rotor in the presence of environment induced decoherence. Experimental and numerical results are presented. In the experiment ultracold cesium atoms are subjected to a pulsed standing wave of near resonant light. Spontaneous scattering of photons destroys dynamical localization. For the scattering rates used in our experiment the momentum distribution shapes remain essentially exponential.

  1. Reliability Overhaul Model

    DTIC Science & Technology

    1989-08-01

    Random variables for the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -A ln U, where A is the scale parameter. Random variables from the conditional Weibull distribution are likewise generated by the inverse transform method, using the conditional survival function exp(-[(x+s-γ)/η]^β + [(x-γ)/η]^β). Normal random variables are generated using a standard normal transformation together with the inverse transform method. An appendix lists the distributions supported by the model.
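
    A sketch reconstructing the sampling steps the (garbled) abstract appears to describe: inverse-transform generation of the additional life s of an item that has already survived to age x, for a conditional exponential and a conditional three-parameter Weibull. The Weibull parameterisation (location gamma, scale eta, shape beta) and all numerical values are inferred from the fragments and should be treated as assumptions.

    import numpy as np

    rng = np.random.default_rng(12)

    def conditional_exponential(u, scale):
        # Memoryless: residual life is exponential regardless of current age.
        return -scale * np.log(1.0 - u)

    def conditional_weibull(u, x, gamma, eta, beta):
        # Solve S(x+s)/S(x) = 1-u with S(x) = exp(-((x-gamma)/eta)**beta).
        base = ((x - gamma) / eta) ** beta
        return gamma - x + eta * (base - np.log(1.0 - u)) ** (1.0 / beta)

    u = rng.random(100_000)
    s_exp = conditional_exponential(u, scale=500.0)                       # hours
    s_wei = conditional_weibull(u, x=300.0, gamma=0.0, eta=500.0, beta=2.0)
    print("mean residual life, exponential: %.1f h" % s_exp.mean())
    print("mean residual life, Weibull    : %.1f h" % s_wei.mean())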

  2. Topics in the Sequential Design of Experiments

    DTIC Science & Technology

    1992-03-01

    (Report documentation page; only fragments of the abstract are recoverable.) Approved for public release; distribution unlimited. Subject terms: Design of Experiments, Renewal Theory, Sequential Testing, Limit Theory. Cited references include "... distributions for one parameter exponential families," by Michael Woodroofe (1991), 91-112, and "A non linear renewal theory for a functional of ..."

  3. Statistical properties of effective drought index (EDI) for Seoul, Busan, Daegu, Mokpo in South Korea

    NASA Astrophysics Data System (ADS)

    Park, Jong-Hyeok; Kim, Ki-Beom; Chang, Heon-Young

    2014-08-01

    Time series of drought indices have so far been considered mostly in terms of the temporal and spatial distributions of a drought index. Here we investigate the statistical properties of the daily Effective Drought Index (EDI) itself for Seoul, Busan, Daegu, and Mokpo for the 100-year period from 1913 to 2012. We have found that in both dry and wet seasons the distribution of EDI follows a Gaussian function. In the dry season the Gaussian function is characteristically broader than in the wet season. The total number of drought days during the period we have analyzed is related both to the mean value and, more importantly, to the standard deviation. We have also found that the number of occasions on which the EDI values of several consecutive days are all below a threshold follows an exponential distribution. The slope of the best fit becomes steeper not only as the critical EDI value becomes more negative but also as the number of consecutive days increases. The slope of the exponential distribution also becomes steeper as the number of cities in which the EDI is simultaneously below the critical value increases. Finally, we conclude by pointing out implications of our findings.

  4. Survival distributions impact the power of randomized placebo-phase design and parallel groups randomized clinical trials.

    PubMed

    Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M

    2011-03-01

    The study evaluated the power of the randomized placebo-phase design (RPPD), a new design for randomized clinical trials (RCTs), compared with the traditional parallel-groups design, assuming various response-time distributions. In the RPPD, at some point, all subjects receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed across the response-time distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel-groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution. The RPPD requires more subjects to achieve a power similar to that of the parallel-groups design. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. The Mass Distribution of Stellar-mass Black Holes

    NASA Astrophysics Data System (ADS)

    Farr, Will M.; Sravan, Niharika; Cantrell, Andrew; Kreidberg, Laura; Bailyn, Charles D.; Mandel, Ilya; Kalogera, Vicky

    2011-11-01

    We perform a Bayesian analysis of the mass distribution of stellar-mass black holes using the observed masses of 15 low-mass X-ray binary systems undergoing Roche lobe overflow and 5 high-mass, wind-fed X-ray binary systems. Using Markov Chain Monte Carlo calculations, we model the mass distribution both parametrically—as a power law, exponential, Gaussian, combination of two Gaussians, or log-normal distribution—and non-parametrically—as histograms with varying numbers of bins. We provide confidence bounds on the shape of the mass distribution in the context of each model and compare the models with each other by calculating their relative Bayesian evidence as supported by the measurements, taking into account the number of degrees of freedom of each model. The mass distribution of the low-mass systems is best fit by a power law, while the distribution of the combined sample is best fit by the exponential model. This difference indicates that the low-mass subsample is not consistent with being drawn from the distribution of the combined population. We examine the existence of a "gap" between the most massive neutron stars and the least massive black holes by considering the value, M_1%, of the 1% quantile from each black hole mass distribution as the lower bound of black hole masses. Our analysis generates posterior distributions for M_1%; the best model (the power law) fitted to the low-mass systems has a distribution of lower bounds with M_1% > 4.3 M_sun with 90% confidence, while the best model (the exponential) fitted to all 20 systems has M_1% > 4.5 M_sun with 90% confidence. We conclude that our sample of black hole masses provides strong evidence of a gap between the maximum neutron star mass and the lower bound on black hole masses. Our results on the low-mass sample are in qualitative agreement with those of Ozel et al., although our broad model selection analysis more reliably reveals the best-fit quantitative description of the underlying mass distribution. The results on the combined sample of low- and high-mass systems are in qualitative agreement with Fryer & Kalogera, although the presence of a mass gap remains theoretically unexplained.

  6. Spatial distribution of pH and organic matter in urban soils and its implications on site-specific land uses in Xuzhou, China.

    PubMed

    Mao, Yingming; Sang, Shuxun; Liu, Shiqi; Jia, Jinlong

    2014-05-01

    The spatial variation of soil pH and soil organic matter (SOM) in the urban area of Xuzhou, China, was investigated in this study. Conventional statistics, geostatistics, and a geographical information system (GIS) were used to produce spatial distribution maps and to provide information about land use types. A total of 172 soil samples were collected based on grid method in the study area. Soil pH ranged from 6.47 to 8.48, with an average of 7.62. SOM content was very variable, ranging from 3.51 g/kg to 17.12 g/kg, with an average of 8.26 g/kg. Soil pH followed a normal distribution, while SOM followed a log-normal distribution. The results of semi-variograms indicated that soil pH and SOM had strong (21%) and moderate (44%) spatial dependence, respectively. The variogram model was spherical for soil pH and exponential for SOM. The spatial distribution maps were achieved using kriging interpolation. The high pH and high SOM tended to occur in the mixed forest land cover areas such as those in the southwestern part of the urban area, while the low values were found in the eastern and the northern parts, probably due to the effect of industrial and human activities. In the central urban area, the soil pH was low, but the SOM content was high, which is mainly attributed to the disturbance of regional resident activities and urban transportation. Furthermore, anthropogenic organic particles are possible sources of organic matter after entering the soil ecosystem in urban areas. These maps provide useful information for urban planning and environmental management. Copyright © 2014 Académie des sciences. Published by Elsevier SAS. All rights reserved.

  7. Fast self contained exponential random deviate algorithm

    NASA Astrophysics Data System (ADS)

    Fernández, Julio F.

    1997-03-01

    An algorithm that generates random numbers with an exponential distribution and is about ten times faster than other well-known algorithms has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the set-up that is often used in statistical physics in order to obtain the Gibbs distribution. N numbers, stored in N registers, change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.

  8. Rainbow net analysis of VAXcluster system availability

    NASA Technical Reports Server (NTRS)

    Johnson, Allen M., Jr.; Schoenfelder, Michael A.

    1991-01-01

    A system modeling technique, Rainbow Nets, is used to evaluate the availability and mean-time-to-interrupt of the VAXcluster. These results are compared to the exact analytic results showing that reasonable accuracy is achieved through simulation. The complexity of the Rainbow Net does not increase as the number of processors increases, but remains constant, unlike a Markov model which expands exponentially. The constancy is achieved by using tokens with identity attributes (items) that can have additional attributes associated with them (features) which can exist in multiple states. The time to perform the simulation increases, but this is a polynomial increase rather than exponential. There is no restriction on distributions used for transition firing times, allowing real situations to be modeled more accurately by choosing the distribution which best fits the system performance and eliminating the need for simplifying assumptions.

  9. Concentration variance decay during magma mixing: a volcanic chronometer.

    PubMed

    Perugini, Diego; De Campos, Cristina P; Petrelli, Maurizio; Dingwell, Donald B

    2015-09-21

    The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing - a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical "mixing to eruption" time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest.
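
    A minimal sketch of the calibration idea, assuming hypothetical mixing times and concentration variances, is a log-linear least-squares fit of an exponential decay sigma2(t) = sigma2(0)*exp(-k*t); inverting the fit then gives a mixing-to-observation time for a newly measured variance.

        import numpy as np

        # hypothetical mixing times (s) and measured concentration variances
        t = np.array([0.0, 300.0, 600.0, 900.0, 1200.0])
        var = np.array([1.00, 0.55, 0.31, 0.17, 0.09])

        # fit ln(var) = ln(var0) - k * t  (exponential decay in linear form)
        slope, intercept = np.polyfit(t, np.log(var), 1)
        k = -slope                   # decay rate of the concentration variance
        var0 = np.exp(intercept)     # variance at the onset of mixing

        # time elapsed since mixing began for a newly observed variance
        var_obs = 0.25
        t_elapsed = np.log(var0 / var_obs) / k
        print(k, t_elapsed)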

  10. Sediment chronology in San Francisco Bay, California, defined by 210Pb, 234Th, 137Cs, and 239,240Pu

    USGS Publications Warehouse

    Fuller, C.C.; van Geen, Alexander; Baskaran, M.; Anima, R.

    1999-01-01

    Sediment chronologies based on radioisotope depth profiles were developed at two sites in the San Francisco Bay estuary to provide a framework for interpreting historical trends in organic compound and metal contaminant inputs. At Richardson Bay near the estuary mouth, sediments are highly mixed by biological and/or physical processes. Excess 234Th penetration ranged from 2 to more than 10 cm at eight coring sites, yielding surface sediment mixing coefficients ranging from 12 to 170 cm2/year. At the site chosen for contaminant analyses, excess 210Pb activity was essentially constant over the upper 25 cm of the core with an exponential decrease below to the supported activity between 70 and 90 cm. Both 137Cs and 239,240Pu penetrated to 57-cm depth and have broad subsurface maxima between 33 and 41 cm. The best fit of the excess 210Pb profile to a steady state sediment accumulation and mixing model yielded an accumulation rate of 0.825 g/cm2/year (0.89 cm/year at the sediment surface), a surface mixing coefficient of 71 cm2/year, and a 33-cm mixed zone with a half-Gaussian depth dependence parameter of 9 cm. Simulations of 137Cs and 239,240Pu profiles using these parameters successfully predicted the maximum depth of penetration and the depth of maximum 137Cs and 239,240Pu activity. Profiles of successive 1-year hypothetical contaminant pulses were generated using this parameter set to determine the age distribution of sediments at any depth horizon. Because of mixing, sediment particles with a wide range of deposition dates occur at each depth. A sediment chronology was derived from this age distribution to assign the minimum age of deposition and a date of maximum deposition to a depth horizon. The minimum age of sediments in a given horizon is used to estimate the date of first appearance of a contaminant from its maximum depth of penetration. The date of maximum deposition is used to estimate the peak year of input for a contaminant from the depth interval with the highest concentration of that contaminant. Because of the extensive mixing, sediment-bound constituents are rapidly diluted with older material after deposition. In addition, contaminants persist in the mixed zone for many years after deposition. More than 75 years are required to bury 90% of a deposited contaminant below the mixed zone. Reconstructing contaminant inputs is limited to changes occurring on a 20-year time scale. In contrast, mixing is much lower relative to accumulation at a site in San Pablo Bay. Instead, periods of rapid deposition and/or erosion occurred, as indicated by frequent sand-silt laminae in the X-radiograph. 137Cs, 239,240Pu, and excess 210Pb activity all penetrated to about 120 cm. The distinct maxima in the fallout radionuclides at 105–110 cm yielded overall linear sedimentation rates of 3.9 to 4.1 cm/year, which are comparable to a rate of 4.5±1.5 cm/year derived from the excess 210Pb profile.

  11. A closer look at the effect of preliminary goodness-of-fit testing for normality for the one-sample t-test.

    PubMed

    Rochon, Justine; Kieser, Meinhard

    2011-11-01

    Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2) ) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
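
    A minimal sketch of the two-stage procedure (pretest for normality, then a one-sample t-test on the samples that pass) is given below; the sample size, nominal level and the use of the Shapiro-Wilk pretest are illustrative assumptions, not the settings of the cited study.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n, alpha, n_sim = 20, 0.05, 20_000
        rejections, passed = 0, 0

        for _ in range(n_sim):
            # exponential data shifted so that H0 (mean = 0) is true
            x = rng.exponential(scale=1.0, size=n) - 1.0
            if stats.shapiro(x).pvalue > alpha:   # sample passes the normality pretest
                passed += 1
                if stats.ttest_1samp(x, 0.0).pvalue < alpha:
                    rejections += 1

        # conditional Type I error rate of the t-test given a passed pretest
        print(rejections / passed)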

  12. A Semi-Analytical Extraction Method for Interface and Bulk Density of States in Metal Oxide Thin-Film Transistors

    PubMed Central

    Chen, Weifeng; Wu, Weijing; Zhou, Lei; Xu, Miao; Wang, Lei; Peng, Junbiao

    2018-01-01

    A semi-analytical extraction method of interface and bulk density of states (DOS) is proposed by using the low-frequency capacitance–voltage characteristics and current–voltage characteristics of indium zinc oxide thin-film transistors (IZO TFTs). In this work, an exponential potential distribution along the depth direction of the active layer is assumed and confirmed by numerical solution of Poisson’s equation followed by device simulation. The interface DOS is obtained as a superposition of constant deep states and exponential tail states. Moreover, it is shown that the bulk DOS may be represented by the superposition of exponential deep states and exponential tail states. The extracted values of bulk DOS and interface DOS are further verified by comparing the measured transfer and output characteristics of IZO TFTs with the simulation results from a 2D device simulator, ATLAS (Silvaco). As a result, the proposed extraction method may be useful for diagnosing and characterising metal oxide TFTs since it allows fast, simultaneous extraction of the interface and bulk DOS. PMID:29534492

  13. A UNIVERSAL NEUTRAL GAS PROFILE FOR NEARBY DISK GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bigiel, F.; Blitz, L., E-mail: bigiel@uni-heidelberg.de

    2012-09-10

    Based on sensitive CO measurements from HERACLES and H I data from THINGS, we show that the azimuthally averaged radial distribution of the neutral gas surface density (Σ_HI + Σ_H2) in 33 nearby spiral galaxies exhibits a well-constrained universal exponential distribution beyond 0.2 × r_25 (inside of which the scatter is large) with less than a factor of two scatter out to two optical radii r_25. Scaling the radius to r_25 and the total gas surface density to the surface density at the transition radius, i.e., where Σ_HI and Σ_H2 are equal, as well as removing galaxies that are interacting with their environment, yields a tightly constrained exponential fit with average scale length 0.61 ± 0.06 r_25. In this case, the scatter reduces to less than 40% across the optical disks (and remains below a factor of two at larger radii). We show that the tight exponential distribution of neutral gas implies that the total neutral gas mass of nearby disk galaxies depends primarily on the size of the stellar disk (influenced to some degree by the great variability of Σ_H2 inside 0.2 × r_25). The derived prescription predicts the total gas mass in our sub-sample of 17 non-interacting disk galaxies to within a factor of two. Given the short timescale over which star formation depletes the H_2 content of these galaxies and the large range of r_25 in our sample, there appears to be some mechanism leading to these largely self-similar radial gas distributions in nearby disk galaxies.

  14. Electrostatic screening in classical Coulomb fluids: exponential or power-law decay or both? An investigation into the effect of dispersion interactions

    NASA Astrophysics Data System (ADS)

    Kjellander, Roland

    2006-04-01

    It is shown that the nature of the non-electrostatic part of the pair interaction potential in classical Coulomb fluids can have a profound influence on the screening behaviour. Two cases are compared: (i) when the non-electrostatic part equals an arbitrary finite-ranged interaction and (ii) when a dispersion r^-6 interaction potential is included. A formal analysis is done in exact statistical mechanics, including an investigation of the bridge function. It is found that the Coulombic r^-1 and the dispersion r^-6 potentials are coupled in a very intricate manner as regards the screening behaviour. The classical one-component plasma (OCP) is a particularly clear example due to its simplicity and is investigated in detail. When the dispersion r^-6 potential is turned on, the screened electrostatic potential from a particle goes from a monotonic exponential decay, exp(-κr)/r, to a power-law decay, r^-8, for large r. The pair distribution function acquires, at the same time, an r^-10 decay for large r instead of the exponential one. There still remain exponentially decaying contributions to both functions, but these contributions turn oscillatory when the r^-6 interaction is switched on. When the Coulomb interaction is turned off but the dispersion r^-6 pair potential is kept, the decay of the pair distribution function for large r goes over from the r^-10 to an r^-6 behaviour, which is the normal one for fluids of electroneutral particles with dispersion interactions. Differences and similarities compared to binary electrolytes are pointed out.

  15. In vivo growth of 60 non-screening detected lung cancers: a computed tomography study.

    PubMed

    Mets, Onno M; Chung, Kaman; Zanen, Pieter; Scholten, Ernst T; Veldhuis, Wouter B; van Ginneken, Bram; Prokop, Mathias; Schaefer-Prokop, Cornelia M; de Jong, Pim A

    2018-04-01

    Current pulmonary nodule management guidelines are based on nodule volume doubling time, which assumes exponential growth behaviour. However, this is a theory that has never been validated in vivo in the routine-care target population. This study evaluates growth patterns of untreated solid and subsolid lung cancers of various histologies in a non-screening setting. Growth behaviour of pathology-proven lung cancers from two academic centres that were imaged at least three times before diagnosis (n=60) was analysed using dedicated software. Random-intercept random-slope mixed-models analysis was applied to test which growth pattern most accurately described lung cancer growth. Individual growth curves were plotted per pathology subgroup and nodule type. We confirmed that growth in both subsolid and solid lung cancers is best explained by an exponential model. However, subsolid lesions generally progress slower than solid ones. Baseline lesion volume was not related to growth, indicating that smaller lesions do not grow slower compared to larger ones. By showing that lung cancer conforms to exponential growth we provide the first experimental basis in the routine-care setting for the assumption made in volume doubling time analysis. Copyright ©ERS 2018.
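
    The volume doubling time referred to above follows directly from the exponential growth assumption V(t) = V0*2^(t/VDT); a minimal sketch with hypothetical nodule volumes is:

        import math

        def doubling_time(v0, v1, dt_days):
            """Volume doubling time under exponential growth, from two measurements."""
            return dt_days * math.log(2) / math.log(v1 / v0)

        # hypothetical nodule volumes (mm^3) measured 90 days apart
        print(doubling_time(v0=120.0, v1=180.0, dt_days=90))  # ~154 days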

  16. Formal Methods for Cryptographic Protocol Analysis: Emerging Issues and Trends

    DTIC Science & Technology

    2003-01-01

    … signatures, which depend upon the homomorphic properties of RSA. Other algorithms and data structures, such as Chaum mixes [17], designed for … Communications Security, pages 176–185. ACM, November 2001. [17] D. Chaum. Untraceable electronic mail, return addresses and digital signatures … something like the Diffie-Hellman algorithm, which depends, as a minimum, on the commutative properties of exponentiation, or something like Chaum's blinded …

  17. Strain energy release rates of composite interlaminar end-notch and mixed-mode fracture: A sublaminate/ply level analysis and a computer code

    NASA Technical Reports Server (NTRS)

    Valisetty, R. R.; Chamis, C. C.

    1987-01-01

    A computer code is presented for the sublaminate/ply level analysis of composite structures. This code is useful for obtaining stresses in regions affected by delaminations, transverse cracks, and discontinuities related to inherent fabrication anomalies, geometric configurations, and loading conditions. Particular attention is focussed on those layers or groups of layers (sublaminates) which are immediately affected by the inherent flaws. These layers are analyzed as homogeneous bodies in equilibrium and in isolation from the rest of the laminate. The theoretical model used to analyze the individual layers allows the relevant stresses and displacements near discontinuities to be represented in the form of pure exponential-decay-type functions which are selected to eliminate the exponential-precision-related difficulties in sublaminate/ply level analysis. Thus, sublaminate analysis can be conducted without any restriction on the maximum number of layers, delaminations, transverse cracks, or other types of discontinuities. In conjunction with the strain energy release rate (SERR) concept and composite micromechanics, this computational procedure is used to model select cases of end-notch and mixed-mode fracture specimens. The computed stresses are in good agreement with those from a three-dimensional finite element analysis. Also, SERRs compare well with limited available experimental data.

  18. 3H/3He age data in assessing the susceptibility of wells to contamination

    USGS Publications Warehouse

    Manning, Andrew H.; Solomon, D. Kip; Thiros, Susan A.

    2005-01-01

    Regulatory agencies are becoming increasingly interested in using young ground-water dating techniques, such as the 3H/3He method, in assessing the susceptibility of public supply wells (PSWs) to contamination. However, recent studies emphasize that ground water samples of mixed age may be the norm, particularly from long-screened PSWs, and tracer-based “apparent” ages can differ substantially from actual mean ages for mixed-age samples. We present age and contaminant data from PSWs in Salt Lake Valley, Utah, that demonstrate the utility of 3H and 3He measurements in evaluating well susceptibility, despite potential age mixing. Initial 3H concentrations (measured 3H + measured tritiogenic 3He) are compared to those expected based on the apparent 3H/3He age and the local precipitation 3H record. This comparison is used to determine the amount of modern water (recharged after ∼1950) vs. pre-bomb water (recharged before ∼1950) that samples might contain. Concentrations of common contaminants were also measured using detection limits generally lower than those used for regulatory purposes. A clear correlation exists between the potential magnitude of the modern water fraction and both the occurrence and concentration of contaminants. For samples containing dominantly modern water based on their initial 3H concentrations, potential discrepancies between apparent 3H/3He ages and mean ages are explored using synthetic samples that are random mixtures of different modern waters. Apparent ages can exceed mean ages by up to 13 years for these samples, with an exponential age distribution resulting in the greatest discrepancies.
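
    A minimal sketch of the quantities being compared, assuming hypothetical concentrations in tritium units (TU), is the standard piston-flow apparent-age formula together with the reconstructed initial 3H:

        import math

        T_HALF_3H = 12.32          # tritium half-life in years
        LAMBDA = math.log(2) / T_HALF_3H

        def apparent_age(tritium, tritiogenic_he3):
            """Apparent 3H/3He age in years (piston-flow interpretation)."""
            return math.log(1.0 + tritiogenic_he3 / tritium) / LAMBDA

        def initial_tritium(tritium, tritiogenic_he3):
            """Reconstructed 3H at recharge: measured 3H plus tritiogenic 3He (both in TU)."""
            return tritium + tritiogenic_he3

        # hypothetical sample: 8 TU of 3H and 12 TU of tritiogenic 3He
        print(apparent_age(8.0, 12.0), initial_tritium(8.0, 12.0))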

  19. Towards enhancing and delaying disturbances in free shear flows

    NASA Technical Reports Server (NTRS)

    Criminale, W. O.; Jackson, T. L.; Lasseigne, D. G.

    1994-01-01

    The family of shear flows comprising the jet, wake, and the mixing layer are subjected to perturbations in an inviscid incompressible fluid. By modeling the basic mean flows as parallel with piecewise linear variations for the velocities, complete and general solutions to the linearized equations of motion can be obtained in closed form as functions of all space variables and time when posed as an initial value problem. The results show that there is a continuous as well as the discrete spectrum that is more familiar in stability theory and therefore there can be both algebraic and exponential growth of disturbances in time. These bases make it feasible to consider control of such flows. To this end, the possibility of enhancing the disturbances in the mixing layer and delaying the onset in the jet and wake is investigated. It is found that growth of perturbations can be delayed to a considerable degree for the jet and the wake but, by comparison, cannot be enhanced in the mixing layer. By using moving coordinates, a method for demonstrating the predominant early and long time behavior of disturbances in these flows is given for continuous velocity profiles. It is shown that the early time transients are always algebraic whereas the asymptotic limit is that of an exponential normal mode. Numerical treatment of the new governing equations confirm the conclusions reached by use of the piecewise linear basic models. Although not pursued here, feedback mechanisms designed for control of the flow could be devised using the results of this work.

  20. New method to calculate the N2 evolution from mixed venous blood during the N2 washout.

    PubMed

    Han, D; Jeng, D R; Cruz, J C; Flores, X F; Mallea, J M

    2001-08-01

    Modeling the normalized phase III slope (Sn) from N2 expirograms of the multibreath N2 washout is a challenge for researchers. Experimental measurements show that Sn increases with the number of breaths. Previously, we predicted Sn by setting the concentration (atm) of mixed venous blood (Fbi,N2) to a constant value of 0.3 after the fifth breath to calculate the amount of N2 transferred from the blood to the alveoli. As a consequence, the predicted curve of the Sn values showed a maximum before the quasi-steady state was reached. In this paper, we present a way of calculating the amount of N2 transferred from the blood to the alveoli by setting Fbi,N2 in the following way: In the first six breaths Fbi,N2 is kept constant at the initial value of 0.8 because circulation time needs at least 30 s to alter it. Thereafter, a single exponential function with respect to the number of breaths is used: Fbi = 0.8 exp[0.112(6-n)], in which n is the breath number. The predicted Sn values were compared with experimental data from the literature. The assumption of an exponential decay in the N2 evolved from mixed venous blood is important in determining the shape of the Sn curve but new experimental data are needed to determine the validity of the model. We concluded that this new approach to calculate the N2 evolution from the blood is more meaningful physiologically.
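
    A direct transcription of the proposed mixed venous N2 schedule (constant over the first six breaths, then the single exponential given above) might look as follows; only the breath range printed is an arbitrary choice.

        import math

        def mixed_venous_fn2(n):
            """Mixed venous N2 concentration (atm) at breath number n, per the scheme above."""
            if n <= 6:
                return 0.8                          # circulation delay keeps it at the initial value
            return 0.8 * math.exp(0.112 * (6 - n))  # exponential decay thereafter

        print([round(mixed_venous_fn2(n), 3) for n in range(1, 16)])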

  1. A numerical study of mixing in stationary, nonpremixed, turbulent reacting flows

    NASA Astrophysics Data System (ADS)

    Overholt, Matthew Ryan

    1998-10-01

    In this work a detailed numerical study is made of a statistically-stationary, non-premixed, turbulent reacting model flow known as Periodic Reaction Zones. The mixture fraction-progress variable approach is used, with a mean gradient in the mixture fraction and a model, single-step, reversible, finite-rate thermochemistry, yielding both stationary and local extinction behavior. The passive scalar is studied first, using a statistical forcing scheme to achieve stationarity of the velocity field. Multiple independent direct numerical simulations (DNS) are performed for a wide range of Reynolds numbers with a number of results including a bilinear model for scalar mixing jointly conditioned on the scalar and x2-component of velocity, Gaussian scalar probability density function tails which were anticipated to be exponential, and the quantification of the dissipation of scalar flux. A new deterministic forcing scheme for DNS is then developed which yields reduced fluctuations in many quantities and a more natural evolution of the velocity fields. This forcing method is used for the final portion of this work. DNS results for Periodic Reaction Zones are compared with the Conditional Moment Closure (CMC) model, the Quasi-Equilibrium Distributed Reaction (QEDR) model, and full probability density function (PDF) simulations using the Euclidean Minimum Spanning Tree (EMST) and the Interaction by Exchange with the Mean (IEM) mixing models. It is shown that CMC and QEDR results based on the local scalar dissipation match DNS wherever local extinction is not present. However, due to the large spatial variations of scalar dissipation, and hence local Damkohler number, local extinction is present even when the global Damkohler number is twenty-five times the critical value for extinction. Finally, in the PDF simulations the EMST mixing model closely reproduces CMC and DNS results when local extinction is not present, whereas the IEM model results in large error.

  2. How extreme was the October 2015 flood in the Carolinas? An assessment of flood frequency analysis and distribution tails

    NASA Astrophysics Data System (ADS)

    Phillips, R. C.; Samadi, S. Z.; Meadows, M. E.

    2018-07-01

    This paper examines the frequency, distribution tails, and peak-over-threshold (POT) of extreme floods through analysis that centers on the October 2015 flooding in North Carolina (NC) and South Carolina (SC), United States (US). The most striking features of the October 2015 flooding were a short time to peak (Tp) and a multi-hour continuous flood peak which caused intense and widespread damage to human lives, properties, and infrastructure. The 2015 flooding was produced by a sequence of intense rainfall events which originated from category 4 hurricane Joaquin over a period of four days. Here, the probability distribution and distribution parameters (i.e., location, scale, and shape) of floods were investigated by comparing the upper part of empirical distributions of the annual maximum flood (AMF) and POT with light- to heavy-tailed theoretical distributions: Fréchet, Pareto, Gumbel, Weibull, Beta, and Exponential. Specifically, four sets of U.S. Geological Survey (USGS) gauging data from the central Carolinas with record lengths from approximately 65-125 years were used. Analysis suggests that heavier-tailed distributions are in better agreement with the POT data, and to a lesser extent the AMF data, than the more commonly used exponential (light-tailed) probability distributions. Further, the threshold selection and record length affect the heaviness of the tail and fluctuations of the parent distributions. The shape parameter and its evolution in the period of record play a critical and poorly understood role in determining the scaling of flood response to intense rainfall.
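
    A minimal sketch of a peak-over-threshold fit with a generalized Pareto distribution (heavy-tailed when its shape parameter is positive) is shown below; the synthetic peak flows and the 90th-percentile threshold are placeholders, not the USGS records or threshold choices used in the study.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        # placeholder annual peak flows (m^3/s); a real analysis would use USGS gauging data
        peaks = rng.lognormal(mean=6.0, sigma=0.8, size=120)

        threshold = np.quantile(peaks, 0.90)
        exceedances = peaks[peaks > threshold] - threshold

        # fit a generalized Pareto distribution to the exceedances (location fixed at 0)
        shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)
        print(shape, scale)   # shape > 0 indicates a heavy (Pareto-type) tail

        # return level for, e.g., the 99th percentile of the exceedance distribution
        print(threshold + stats.genpareto.ppf(0.99, shape, loc=0, scale=scale))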

  3. Reactor Statics Module, RS-9: Multigroup Diffusion Program Using an Exponential Acceleration Technique.

    ERIC Educational Resources Information Center

    Macek, Victor C.

    The nine Reactor Statics Modules are designed to introduce students to the use of numerical methods and digital computers for calculation of neutron flux distributions in space and energy which are needed to calculate criticality, power distribution, and fuel burnup for both slow neutron and fast neutron fission reactors. The last module, RS-9,…

  4. Distinguishing Response Conflict and Task Conflict in the Stroop Task: Evidence from Ex-Gaussian Distribution Analysis

    ERIC Educational Resources Information Center

    Steinhauser, Marco; Hubner, Ronald

    2009-01-01

    It has been suggested that performance in the Stroop task is influenced by response conflict as well as task conflict. The present study investigated the idea that both conflict types can be isolated by applying ex-Gaussian distribution analysis which decomposes response time into a Gaussian and an exponential component. Two experiments were…
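
    An ex-Gaussian response time is simply the sum of a Gaussian and an independent exponential component, so the decomposition can be illustrated with a short simulation; the parameter values below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)

        def ex_gaussian(mu, sigma, tau, size):
            """Sample ex-Gaussian response times: Normal(mu, sigma) + Exponential(tau)."""
            return rng.normal(mu, sigma, size) + rng.exponential(tau, size)

        rt = ex_gaussian(mu=450.0, sigma=50.0, tau=150.0, size=10_000)  # ms
        # for the ex-Gaussian: mean = mu + tau, variance = sigma^2 + tau^2
        print(rt.mean(), rt.var())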

  5. K-S Test for Goodness of Fit and Waiting Times for Fatal Plane Accidents

    ERIC Educational Resources Information Center

    Gwanyama, Philip Wagala

    2005-01-01

    The Kolmogorov–Smirnov (K-S) test for goodness of fit was developed by Kolmogorov in 1933 [1] and Smirnov in 1939 [2]. Its procedures are suitable for testing the goodness of fit of a data set for most probability distributions regardless of sample size [3-5]. These procedures, modified for the exponential distribution by Lilliefors [5] and…
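
    A bare-bones sketch of such a goodness-of-fit check against an exponential model of waiting times might look as follows; the waiting times are hypothetical, and estimating the rate from the same data strictly calls for the Lilliefors correction mentioned above.

        import numpy as np
        from scipy import stats

        # hypothetical waiting times (days) between fatal accidents
        waits = np.array([12, 45, 3, 88, 17, 60, 29, 150, 9, 41, 74, 22], dtype=float)

        # exponential null hypothesis with the scale estimated from the data
        scale = waits.mean()
        stat, pvalue = stats.kstest(waits, "expon", args=(0, scale))
        print(stat, pvalue)  # estimating the scale from the same data makes this test
                             # conservative; Lilliefors' tables correct for that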

  6. Visibility graph analysis on quarterly macroeconomic series of China based on complex network theory

    NASA Astrophysics Data System (ADS)

    Wang, Na; Li, Dong; Wang, Qiwen

    2012-12-01

    The visibility graph approach and complex network theory provide new insight into time series analysis. The inheritance of the visibility graph from the original time series was further explored in the paper. We found that degree distributions of visibility graphs extracted from Pseudo Brownian Motion series obtained by the Frequency Domain algorithm exhibit exponential behaviors, in which the exponential exponent is a binomial function of the Hurst index inherited in the time series. Our simulations showed that the quantitative relations between the Hurst indexes and the exponents of the degree distribution function are different for different series and that the visibility graph inherits some important features of the original time series. Further, we convert some quarterly macroeconomic series, including the growth rates of value-added of three industry series and the growth rates of Gross Domestic Product series of China, to graphs by the visibility algorithm and explore the topological properties of the graphs associated with the four macroeconomic series, namely, the degree distribution and correlations, the clustering coefficient, the average path length, and community structure. Based on complex network analysis, we find that the degree distributions of the networks associated with the growth rates of value-added of the three industry series are almost exponential, while the degree distributions of the networks associated with the growth rates of the GDP series are scale-free. We also discuss the assortativity and disassortativity of the four associated networks as they are related to the evolutionary process of the original macroeconomic series. All the constructed networks have “small-world” features. The community structures of associated networks suggest dynamic changes of the original macroeconomic series. We also detected the relationship among government policy changes, community structures of associated networks and macroeconomic dynamics. We find that government policies in China strongly influence the dynamics of GDP and the adjustment of the three industries. The work in our paper provides a new way to understand the dynamics of economic development.
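
    The natural visibility criterion underlying this construction is purely geometric: two observations are linked whenever every intermediate point lies below the straight line joining them. A minimal sketch of the construction, on a short made-up series, is:

        def visibility_graph(series):
            """Return the edge list of the natural visibility graph of a time series."""
            edges = []
            n = len(series)
            for a in range(n):
                for b in range(a + 1, n):
                    ya, yb = series[a], series[b]
                    # (a, b) are mutually visible if all points between them lie
                    # strictly below the line connecting (a, ya) and (b, yb)
                    visible = all(
                        series[c] < ya + (yb - ya) * (c - a) / (b - a)
                        for c in range(a + 1, b)
                    )
                    if visible:
                        edges.append((a, b))
            return edges

        # degree sequence of a short illustrative series
        series = [0.87, 0.49, 0.36, 0.83, 0.87, 0.49, 0.36, 0.83]
        edges = visibility_graph(series)
        degree = [sum(1 for e in edges if i in e) for i in range(len(series))]
        print(degree)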

  7. The Lunar Rock Size Frequency Distribution from Diviner Infrared Measurements

    NASA Astrophysics Data System (ADS)

    Elder, C. M.; Hayne, P. O.; Piqueux, S.; Bandfield, J.; Williams, J. P.; Ghent, R. R.; Paige, D. A.

    2016-12-01

    Knowledge of the rock size frequency distribution on a planetary body is important for understanding its geologic history and for selecting landing sites. The rock size frequency distribution can be estimated by counting rocks in high resolution images, but most bodies in the solar system have limited areas with adequate coverage. We propose an alternative method to derive and map rock size frequency distributions using multispectral thermal infrared data acquired at multiple times during the night. We demonstrate this new technique for the Moon using data from the Lunar Reconnaissance Orbiter (LRO) Diviner radiometer in conjunction with three dimensional thermal modeling, leveraging the differential cooling rates of different rock sizes. We assume an exponential rock size frequency distribution, which has been shown to yield a good fit to rock populations in various locations on the Moon, Mars, and Earth [2, 3] and solve for the best radiance fits as a function of local time and wavelength. This method presents several advantages: 1) unlike other thermally derived rock abundance techniques, it is sensitive to rocks smaller than the diurnal skin depth; 2) it does not result in apparent decrease in rock abundance at night; and 3) it can be validated using images taken at the lunar surface. This method yields both the fraction of the surface covered in rocks of all sizes and the exponential factor, which defines the rate of drop-off in the exponential function at large rock sizes. We will present maps of both these parameters for the Moon, and provide a geological interpretation. In particular, this method reveals rocks in the lunar highlands that are smaller than previous thermal methods could detect. [1] Bandfield J. L. et al. (2011) JGR, 116, E00H02. [2] Golombek and Rapp (1997) JGR, 102, E2, 4117-4129. [3] Cintala, M.J. and K.M. McBride (1995) NASA Technical Memorandum 104804.

  8. Hazard function analysis for flood planning under nonstationarity

    NASA Astrophysics Data System (ADS)

    Read, Laura K.; Vogel, Richard M.

    2016-05-01

    The field of hazard function analysis (HFA) involves a probabilistic assessment of the "time to failure" or "return period," T, of an event of interest. HFA is used in epidemiology, manufacturing, medicine, actuarial statistics, reliability engineering, economics, and elsewhere. For a stationary process, the probability distribution function (pdf) of the return period always follows an exponential distribution; the same is not true for nonstationary processes. When the process of interest, X, exhibits nonstationary behavior, HFA can provide a complementary approach to risk analysis with analytical tools particularly useful for hydrological applications. After a general introduction to HFA, we describe a new mathematical linkage between the magnitude of the flood event, X, and its return period, T, for nonstationary processes. We derive the probabilistic properties of T for a nonstationary one-parameter exponential model of X, and then use both Monte Carlo simulation and HFA to generalize the behavior of T when X arises from a nonstationary two-parameter lognormal distribution. For this case, our findings suggest that a two-parameter Weibull distribution provides a reasonable approximation for the pdf of T. We document how HFA can provide an alternative approach to characterize the probabilistic properties of both nonstationary flood series and the resulting pdf of T.
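
    For the stationary case the exponential claim is easy to verify numerically: if the design event is exceeded independently each year with probability p, the waiting time to the first exceedance is geometric, which for small p is closely approximated by an exponential pdf with mean 1/p. A quick Monte Carlo check with an assumed p is:

        import numpy as np

        rng = np.random.default_rng(42)
        p = 0.01                      # annual exceedance probability (stationary case)
        n_sim = 200_000

        # waiting time (in years) until the first exceedance in each realization
        waits = rng.geometric(p, size=n_sim)

        print(waits.mean())           # ~1/p = 100 years
        # compare the empirical survival function with the exponential approximation
        for t in (50, 100, 200):
            print(t, (waits > t).mean(), np.exp(-p * t))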

  9. Impulsive synchronization of stochastic reaction-diffusion neural networks with mixed time delays.

    PubMed

    Sheng, Yin; Zeng, Zhigang

    2018-07-01

    This paper discusses impulsive synchronization of stochastic reaction-diffusion neural networks with Dirichlet boundary conditions and hybrid time delays. By virtue of inequality techniques, theories of stochastic analysis, linear matrix inequalities, and the contradiction method, sufficient criteria are proposed to ensure exponential synchronization of the addressed stochastic reaction-diffusion neural networks with mixed time delays via a designed impulsive controller. Compared with some recent studies, the neural network models herein are more general, some restrictions are relaxed, and the obtained conditions enhance and generalize some published ones. Finally, two numerical simulations are performed to substantiate the validity and merits of the developed theoretical analysis. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Vertical variation of mixing within porous sediment beds below turbulent flows

    PubMed Central

    Chandler, I. D.; Pearson, J. M.; van Egmond, R.

    2016-01-01

    River ecosystems are influenced by contaminants in the water column, in the pore water and adsorbed to sediment particles. When exchange across the sediment-water interface (hyporheic exchange) is included in modeling, the mixing coefficient is often assumed to be constant with depth below the interface. Novel fiber-optic fluorometers have been developed and combined with a modified EROSIMESS system to quantify the vertical variation in mixing coefficient with depth below the sediment-water interface. The study considered a range of particle diameters and bed shear velocities, with the permeability Péclet number, PeK, between 1000 and 77,000 and the shear Reynolds number, Re*, between 5 and 600. Different parameterizations of both an interface exchange coefficient and a spatially variable in-sediment mixing coefficient are explored. The variation of in-sediment mixing is described by an exponential function applicable over the full range of parameter combinations tested. The empirical relationship enables estimates of the depth to which concentrations of pollutants will penetrate into the bed sediment, allowing the region where exchange will occur faster than molecular diffusion to be determined. PMID:27635104

  11. Hexamethylbenzene as a sensitive nuclear magnetic resonance probe for studying organic crystals and glasses

    NASA Astrophysics Data System (ADS)

    Jansen-Glaw, B.; Rössler, E.; Taupitz, M.; Vieth, H. M.

    1989-06-01

    Deuterated hexamethylbenzene (HMB) is used as a probe molecule for 2H NMR studies of the crystalline state of hexachlorobenzene and of several organic glasses. By measuring the spin-lattice relaxation and the line shape in the temperature range of 4-300 K the dynamical parameters of the molecular reorientation are investigated. For the system HMB/hexachlorobenzene, we find exponential relaxation and for the corresponding T1 an increase of its activation energy by a factor of 2 in comparison to the neat HMB. A homogeneous mixing of the guest and host molecules is found at least for guest concentrations up to 7%. In contrast, nonexponential spin-lattice relaxation is characteristic for all glass matrices, indicating motional heterogeneities. A log-Gauss distribution for the corresponding motional correlation times gives a good fit of the data. Its width parameter decreases linearly with temperature, while the mean correlation times are described by an Arrhenius law. The mean activation energy is reduced by a factor of about 3.5 as compared to neat HMB, demonstrating a loose packing of the molecules in the glass matrices.

  12. Dielectric properties of PVDF/0.5(Ba0.7Ca0.3)TiO3-0.5Ba(Zr0.2Ti0.8)O3 composites

    NASA Astrophysics Data System (ADS)

    Pandey, Bablu K.; Chandra, K. P.; Kolte, Jayant; Kulkarni, A. R.; Jayaswal, S. K.; Prasad, K.

    2018-05-01

    Ceramic powder of 0.50(Ba0.7Ca0.3)TiO3-0.50Ba(Zr0.2Ti0.8)O3 (BCZT50) at the morphotropic phase boundary composition was prepared using a solid-state synthesis technique followed by extensive high-energy ball milling. The crystal symmetry, space group and unit cell dimensions were determined from the X-ray diffraction data of BCZT50 using FullProf software, and the average crystallite size was estimated using the Williamson-Hall approach. FTIR spectra confirmed the formation of perovskite-type solid solutions. The prepared ceramic powder was used to prepare lead-free (1-x)PVDF/xBCZT50 ceramic-polymer composites with x = 0.025, 0.05, 0.10, 0.15, 0.20, 0.25 by a melt-mixing technique. The distribution of BCZT50 particles in the PVDF matrix was examined using an optical microscope. The filler-concentration dependence of the real and imaginary parts of the dielectric constant followed an exponential-growth type of variation. The low value of tanδ (~10^-2) can be advantageous for sensing/detection applications.

  13. Convergent chaos

    NASA Astrophysics Data System (ADS)

    Pradas, Marc; Pumir, Alain; Huber, Greg; Wilkinson, Michael

    2017-07-01

    Chaos is widely understood as being a consequence of sensitive dependence upon initial conditions. This is the result of an instability in phase space, which separates trajectories exponentially. Here, we demonstrate that this criterion should be refined. Despite their overall intrinsic instability, trajectories may be very strongly convergent in phase space over extremely long periods, as revealed by our investigation of a simple chaotic system (a realistic model for small bodies in a turbulent flow). We establish that this strong convergence is a multi-facetted phenomenon, in which the clustering is intense, widespread and balanced by lacunarity of other regions. Power laws, indicative of scale-free features, characterize the distribution of particles in the system. We use large-deviation and extreme-value statistics to explain the effect. Our results show that the interpretation of the ‘butterfly effect’ needs to be carefully qualified. We argue that the combination of mixing and clustering processes makes our specific model relevant to understanding the evolution of simple organisms. Lastly, this notion of convergent chaos, which implies the existence of conditions for which uncertainties are unexpectedly small, may also be relevant to the valuation of insurance and futures contracts.

  14. Experimental entanglement purification of arbitrary unknown states.

    PubMed

    Pan, Jian-Wei; Gasparoni, Sara; Ursin, Rupert; Weihs, Gregor; Zeilinger, Anton

    2003-05-22

    Distribution of entangled states between distant locations is essential for quantum communication over large distances. But owing to unavoidable decoherence in the quantum communication channel, the quality of entangled states generally decreases exponentially with the channel length. Entanglement purification--a way to extract a subset of states of high entanglement and high purity from a large set of less entangled states--is thus needed to overcome decoherence. Besides its important application in quantum communication, entanglement purification also plays a crucial role in error correction for quantum computation, because it can significantly increase the quality of logic operations between different qubits. Here we demonstrate entanglement purification for general mixed states of polarization-entangled photons using only linear optics. Typically, one photon pair of fidelity 92% could be obtained from two pairs, each of fidelity 75%. In our experiments, decoherence is overcome to the extent that the technique would achieve tolerable error rates for quantum repeaters in long-distance quantum communication. Our results also imply that the requirement of high-accuracy logic operations in fault-tolerant quantum computation can be considerably relaxed.

  15. The Universal Statistical Distributions of the Affinity, Equilibrium Constants, Kinetics and Specificity in Biomolecular Recognition

    PubMed Central

    Zheng, Xiliang; Wang, Jin

    2015-01-01

    We uncovered the universal statistical laws for the biomolecular recognition/binding process. We quantified the statistical energy landscapes for binding, from which we can characterize the distributions of the binding free energy (affinity), the equilibrium constants, the kinetics and the specificity by exploring the different ligands binding with a particular receptor. The results of the analytical studies are confirmed by the microscopic flexible docking simulations. The distribution of binding affinity is Gaussian around the mean and becomes exponential near the tail. The equilibrium constants of the binding follow a log-normal distribution around the mean and a power law distribution in the tail. The intrinsic specificity for biomolecular recognition measures the degree of discrimination of native versus non-native binding and the optimization of which becomes the maximization of the ratio of the free energy gap between the native state and the average of non-native states versus the roughness measured by the variance of the free energy landscape around its mean. The intrinsic specificity obeys a Gaussian distribution near the mean and an exponential distribution near the tail. Furthermore, the kinetics of binding follows a log-normal distribution near the mean and a power law distribution at the tail. Our study provides new insights into the statistical nature of thermodynamics, kinetics and function from different ligands binding with a specific receptor or equivalently specific ligand binding with different receptors. The elucidation of distributions of the kinetics and free energy has guiding roles in studying biomolecular recognition and function through small-molecule evolution and chemical genetics. PMID:25885453

  16. Simple, accurate formula for the average bit error probability of multiple-input multiple-output free-space optical links over negative exponential turbulence channels.

    PubMed

    Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas

    2012-08-01

    In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.

  17. Markov Analysis of Sleep Dynamics

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Lee, J.-S.; Robinson, P. A.; Jeong, D.-U.

    2009-05-01

    A new approach, based on a Markov transition matrix, is proposed to explain frequent sleep and wake transitions during sleep. The matrix is determined by analyzing hypnograms of 113 obstructive sleep apnea patients. Our approach shows that the statistics of sleep can be constructed via a single Markov process and that durations of all states have modified exponential distributions, in contrast to recent reports of a scale-free form for the wake stage and an exponential form for the sleep stage. Hypnograms of the same subjects, but treated with Continuous Positive Airway Pressure, are analyzed and compared quantitatively with the pretreatment ones, suggesting potential clinical applications.
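
    The link between a single Markov transition matrix and (modified) exponential dwell-time distributions can be illustrated with a toy two-state (wake/sleep) chain; the transition probabilities below are invented for illustration and are not the values estimated from the patient hypnograms.

        import numpy as np

        rng = np.random.default_rng(3)
        # toy 30-s-epoch transition matrix: rows = current state (0 = wake, 1 = sleep)
        P = np.array([[0.90, 0.10],
                      [0.05, 0.95]])

        def simulate(P, n_epochs, start=1):
            states = [start]
            for _ in range(n_epochs - 1):
                states.append(rng.choice(2, p=P[states[-1]]))
            return np.array(states)

        def dwell_times(states, which):
            """Lengths of consecutive runs spent in a given state."""
            runs, count = [], 0
            for s in states:
                if s == which:
                    count += 1
                elif count:
                    runs.append(count)
                    count = 0
            if count:
                runs.append(count)
            return np.array(runs)

        states = simulate(P, 50_000)
        wake_runs = dwell_times(states, 0)
        # for a Markov chain the run lengths are geometric: mean = 1/(1 - P[0, 0])
        print(wake_runs.mean(), 1.0 / (1.0 - P[0, 0]))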

  18. Is a matrix exponential specification suitable for the modeling of spatial correlation structures?

    PubMed Central

    Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha

    2018-01-01

    This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375
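
    For orientation, the MESS specification replaces the SAR-type spatial filter (I - ρW) with a matrix exponential exp(αW); a minimal least-squares sketch of that transformation, with an assumed ring-neighbourhood weight matrix and parameter values, is given below (it is not the Bayesian estimation procedure proposed in the paper).

        import numpy as np
        from scipy.linalg import expm

        rng = np.random.default_rng(5)
        n, k = 50, 2

        # row-standardized spatial weight matrix of a simple ring neighbourhood (assumed)
        W = np.zeros((n, n))
        for i in range(n):
            W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

        X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
        beta, alpha = np.array([1.0, 2.0, -1.0]), -0.4

        # MESS data-generating process: expm(alpha * W) @ y = X @ beta + eps
        eps = rng.normal(scale=0.5, size=n)
        y = np.linalg.solve(expm(alpha * W), X @ beta + eps)

        # given alpha, the transformed model is linear, so beta follows by least squares
        y_star = expm(alpha * W) @ y
        beta_hat = np.linalg.lstsq(X, y_star, rcond=None)[0]
        print(beta_hat)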

  19. Non-Markovian Infection Spread Dramatically Alters the Susceptible-Infected-Susceptible Epidemic Threshold in Networks

    NASA Astrophysics Data System (ADS)

    Van Mieghem, P.; van de Bovenkamp, R.

    2013-03-01

    Most studies on susceptible-infected-susceptible epidemics in networks implicitly assume Markovian behavior: the time to infect a direct neighbor is exponentially distributed. Much effort so far has been devoted to characterize and precisely compute the epidemic threshold in susceptible-infected-susceptible Markovian epidemics on networks. Here, we report the rather dramatic effect of a nonexponential infection time (while still assuming an exponential curing time) on the epidemic threshold by considering Weibullean infection times with the same mean, but different power exponent α. For three basic classes of graphs, the Erdős-Rényi random graph, scale-free graphs and lattices, the average steady-state fraction of infected nodes is simulated from which the epidemic threshold is deduced. For all graph classes, the epidemic threshold significantly increases with the power exponents α. Hence, real epidemics that violate the exponential or Markovian assumption can behave seriously differently than anticipated based on Markov theory.
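
    The non-Markovian ingredient is simply the shape of the infection-time distribution; a minimal sketch of drawing Weibull infection times with a fixed mean but different power exponents α (values chosen arbitrarily) is:

        import numpy as np
        from math import gamma

        rng = np.random.default_rng(11)
        mean_infection_time = 1.0

        for alpha in (0.5, 1.0, 2.0, 3.0):            # alpha = 1 recovers the exponential case
            # Weibull(shape=alpha, scale=b) has mean b * Gamma(1 + 1/alpha)
            b = mean_infection_time / gamma(1.0 + 1.0 / alpha)
            samples = b * rng.weibull(alpha, size=100_000)
            print(alpha, samples.mean(), samples.std())  # same mean, very different spread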

  20. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
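
    The transformation referred to above can be illustrated with the standard inverse-CDF approach, which maps uniform deviates onto a target distribution; the exponential and Weibull cases, with arbitrary parameters, are sketched below.

        import numpy as np

        rng = np.random.default_rng(1971)
        u = rng.random(10_000)                            # uniform random numbers

        # inverse-CDF transforms to two of the target distributions
        exp_loads = -np.log(1.0 - u) / 0.5                # exponential, rate 0.5
        weib_loads = 2.0 * (-np.log(1.0 - u)) ** (1 / 1.5)  # Weibull, scale 2, shape 1.5

        # simple peak statistics of the discrete load history, e.g. exceedance counts
        for name, loads in (("exponential", exp_loads), ("weibull", weib_loads)):
            threshold = np.percentile(loads, 99)
            print(name, threshold, (loads > threshold).sum())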

  1. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.

  2. Probability distribution of financial returns in a model of multiplicative Brownian motion with stochastic diffusion coefficient

    NASA Astrophysics Data System (ADS)

    Silva, Antonio

    2005-03-01

    It is well-known that the mathematical theory of Brownian motion was first developed in the Ph.D. thesis of Louis Bachelier for the French stock market before Einstein [1]. In Ref. [2] we studied the so-called Heston model, where the stock-price dynamics is governed by multiplicative Brownian motion with stochastic diffusion coefficient. We solved the corresponding Fokker-Planck equation exactly and found an analytic formula for the time-dependent probability distribution of stock price changes (returns). The formula interpolates between the exponential (tent-shaped) distribution for short time lags and the Gaussian (parabolic) distribution for long time lags. The theoretical formula agrees very well with the actual stock-market data ranging from the Dow-Jones index [2] to individual companies [3], such as Microsoft, Intel, etc. [1] Louis Bachelier, "Théorie de la spéculation," Annales Scientifiques de l'École Normale Supérieure, III-17:21-86 (1900). [2] A. A. Dragulescu and V. M. Yakovenko, "Probability distribution of returns in the Heston model with stochastic volatility," Quantitative Finance 2, 443-453 (2002); Erratum 3, C15 (2003). [cond-mat/0203046] [3] A. C. Silva, R. E. Prange, and V. M. Yakovenko, "Exponential distribution of financial returns at mesoscopic time lags: a new stylized fact," Physica A 344, 227-235 (2004). [cond-mat/0401225]

  3. The competitiveness versus the wealth of a country.

    PubMed

    Podobnik, Boris; Horvatić, Davor; Kenett, Dror Y; Stanley, H Eugene

    2012-01-01

    Politicians world-wide frequently promise a better life for their citizens. We find that the probability that a country will increase its per capita GDP (gdp) rank within a decade follows an exponential distribution with decay constant λ = 0.12. We use the Corruption Perceptions Index (CPI) and the Global Competitiveness Index (GCI) and find that the distribution of change in CPI (GCI) rank follows exponential functions with approximately the same exponent as λ, suggesting that the dynamics of gdp, CPI, and GCI may share the same origin. Using the GCI, we develop a new measure, which we call relative competitiveness, to evaluate an economy's competitiveness relative to its gdp. For all European and EU countries during the 2008-2011 economic downturn we find that the drop in gdp in more competitive countries relative to gdp was substantially smaller than in relatively less competitive countries, which is valuable information for policymakers.

  4. Study of velocity and temperature distributions in boundary layer flow of fourth grade fluid over an exponential stretching sheet

    NASA Astrophysics Data System (ADS)

    Khan, Najeeb Alam; Saeed, Umair Bin; Sultan, Faqiha; Ullah, Saif; Rehman, Abdul

    2018-02-01

    This study investigates the boundary layer flow of a fourth-grade fluid and heat transfer over an exponential stretching sheet. The temperature distribution in the fluid is considered for two heating processes: (i) prescribed surface temperature (PST) and (ii) prescribed heat flux (PHF). Suitable transformations of the velocity components and temperature are employed to reduce the nonlinear model equations to a system of ordinary differential equations. The flow and temperature fields are obtained by solving these reduced nonlinear equations with an effective analytical method. The main findings concern the effects of the viscoelastic, cross-viscous, third-grade, and fourth-grade fluid parameters on the constructed analytical expression for the velocity profile. Likewise, the heat transfer properties are studied for Prandtl and Eckert numbers.

  6. Inter-occurrence times and universal laws in finance, earthquakes and genomes

    NASA Astrophysics Data System (ADS)

    Tsallis, Constantino

    2016-07-01

    A plethora of natural, artificial and social systems exist which do not belong to the Boltzmann-Gibbs (BG) statistical-mechanical world, based on the standard additive entropy $S_{BG}$ and its associated exponential BG factor. Frequent behaviors in such complex systems have been shown to be closely related to $q$-statistics instead, based on the nonadditive entropy $S_q$ (with $S_1=S_{BG}$), and its associated $q$-exponential factor which generalizes the usual BG one. In fact, a wide range of phenomena of quite different nature exist which can be described and, in the simplest cases, understood through analytic (and explicit) functions and probability distributions which exhibit some universal features. Universality classes are concomitantly observed which can be characterized through indices such as $q$. We will exhibit here some such cases, namely concerning the distribution of inter-occurrence (or inter-event) times in the areas of finance, earthquakes and genomes.

  8. Time scale defined by the fractal structure of the price fluctuations in foreign exchange markets

    NASA Astrophysics Data System (ADS)

    Kumagai, Yoshiaki

    2010-04-01

    In this contribution, a new time scale named C-fluctuation time is defined by price fluctuations observed at a given resolution. The intraday fractal structures and the relations among the three time scales (real time, i.e. physical time; tick time; and C-fluctuation time) in foreign exchange markets are analyzed. The data set used consists of trading prices of foreign exchange rates: US dollar (USD)/Japanese yen (JPY), USD/Euro (EUR), and EUR/JPY. The resolution of the data is one minute, and data within a minute are recorded in order of transaction. The series of instantaneous velocities of C-fluctuation time are exponentially distributed for small C when measured in real time and for tiny C when measured in tick time. When the market is volatile, the series of instantaneous velocities are exponentially distributed for larger C.

  9. Geomorphic effectiveness of long profile shape and role of inherent geological controls, Ganga River Basin, India

    NASA Astrophysics Data System (ADS)

    Sonam, Sonam; Jain, Vikrant

    2017-04-01

    The river long profile is one of the fundamental geomorphic parameters and provides a platform to study the interaction of geological and geomorphic processes at different time scales. Long profile shape is governed by geological processes on a 10^5-10^6 year time scale, and it controls modern-day (10^0-10^1 year time scale) fluvial processes by controlling the spatial variability of channel slope. Identification of an appropriate model for the river long profile may provide a tool to analyse the quantitative relationship between basin geology, profile shape and its geomorphic effectiveness. A systematic analysis of long profiles has been carried out for the Himalayan tributaries of the Ganga River basin. Long profile shape and the stream power distribution pattern are derived using SRTM DEM data (90 m spatial resolution). Peak discharge data from 34 stations are used for hydrological analysis. Lithological variability and major thrusts are marked along the river long profile. The best fit of the long profile is analysed for power, logarithmic and exponential functions. A second-order exponential function provides the best representation of long profiles. The second-order exponential equation is Z = K1*exp(-β1*L) + K2*exp(-β2*L), where Z is the elevation of the channel long profile, L is the length, and K and β are coefficients of the exponential function. K1 and K2 are the proportions of elevation change of the long profile represented by the β1 (fast) and β2 (slow) decay coefficients of the river long profile. Different values of the coefficients express the variability in long profile shapes and are related to the litho-tectonic variability of the study area. The channel slope of the long profile is estimated by taking the derivative of the exponential function. The stream power distribution pattern along the long profile is estimated by superimposing the discharge and the long profile slope. A sensitivity analysis of the stream power distribution with respect to the decay coefficients of the second-order exponential equation is evaluated for a range of coefficient values. Our analysis suggests that the amplitude of the stream power peak depends on K1, the proportion of elevation change under the fast decay exponent, while the location of the stream power peak depends on the long profile decay coefficient (β1). Different long profile shapes owing to litho-tectonic variability across the Himalayas are responsible for the spatial variability of the stream power distribution pattern. Most of the stream power peaks lie in the Higher Himalaya. In general, eastern rivers have higher stream power in the hinterland area and low stream power in the alluvial plains. This is responsible for 1) higher erosion rates and sediment supply in the hinterland of eastern rivers, 2) the incised and stable nature of channels in the western alluvial plains, and 3) aggrading channels with dynamic behaviour in the eastern alluvial plains. Our study shows that the spatial variability of litho-units defines the coefficients of the long profile function, which in turn control the position and magnitude of stream power maxima and hence the geomorphic variability in a fluvial system.
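
    A minimal sketch of fitting the second-order exponential long-profile model quoted above, using synthetic elevation-distance data and arbitrary initial parameter guesses, is:

        import numpy as np
        from scipy.optimize import curve_fit

        def long_profile(L, K1, b1, K2, b2):
            """Second-order exponential long profile: Z = K1*exp(-b1*L) + K2*exp(-b2*L)."""
            return K1 * np.exp(-b1 * L) + K2 * np.exp(-b2 * L)

        # synthetic profile: distance downstream (km) and channel elevation (m)
        L = np.linspace(0, 800, 200)
        Z = long_profile(L, 4000, 0.02, 1500, 0.002) \
            + np.random.default_rng(2).normal(0, 20, L.size)

        popt, _ = curve_fit(long_profile, L, Z, p0=(3000, 0.01, 1000, 0.001))
        K1, b1, K2, b2 = popt

        # channel slope is the analytical derivative of the fitted profile
        slope = b1 * K1 * np.exp(-b1 * L) + b2 * K2 * np.exp(-b2 * L)
        print(popt, slope.max())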

  10. Mechanical analysis of non-uniform bi-directional functionally graded intelligent micro-beams using modified couple stress theory

    NASA Astrophysics Data System (ADS)

    Bakhshi Khaniki, Hossein; Rajasekaran, Sundaramoorthy

    2018-05-01

    This study develops a comprehensive investigation of the mechanical behavior of non-uniform bi-directional functionally graded beam sensors in the framework of modified couple stress theory. Material variation is modelled through both the length and thickness directions using power-law, sigmoid and exponential functions. Moreover, the beam is assumed to have linear, exponential and parabolic cross-section variation through the length, described using power-law and sigmoid varying functions. Using these assumptions, a general model for microbeams is presented and formulated by employing Hamilton's principle. The governing equations are solved using a mixed finite element method with a Lagrangian interpolation technique, Gaussian quadrature and Wilson's Lagrangian multiplier method. It is shown that using bi-directional functionally graded materials in non-uniform microbeams can noticeably affect the mechanical behavior of such structures, and that the scale parameter has a significant effect on the rigidity of non-uniform bi-directional functionally graded beams.

  11. New and practical mathematical model of membrane fouling in an aerobic submerged membrane bioreactor.

    PubMed

    Zuthi, Mst Fazana Rahman; Guo, Wenshan; Ngo, Huu Hao; Nghiem, Duc Long; Hai, Faisal I; Xia, Siqing; Li, Jianxin; Li, Jixiang; Liu, Yi

    2017-08-01

    This study aimed to develop a practical semi-empirical mathematical model of membrane fouling that accounts for cake formation on the membrane and its pore blocking as the major processes of membrane fouling. In the developed model, the concentration of mixed liquor suspended solid is used as a lumped parameter to describe the formation of cake layer including the biofilm. The new model considers the combined effect of aeration and backwash on the foulants' detachment from the membrane. New exponential coefficients are also included in the model to describe the exponential increase of transmembrane pressure that typically occurs after the initial stage of an MBR operation. The model was validated using experimental data obtained from a lab-scale aerobic sponge-submerged membrane bioreactor (MBR), and the simulation of the model agreed well with the experimental findings. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Cell responses to single pheromone molecules may reflect the activation kinetics of olfactory receptor molecules.

    PubMed

    Minor, A V; Kaissling, K-E

    2003-03-01

    Olfactory receptor cells of the silkmoth Bombyx mori respond to single pheromone molecules with "elementary" electrical events that appear as discrete "bumps" a few milliseconds in duration, or bursts of bumps. As revealed by simulation, one bump may result from a series of random openings of one or several ion channels, producing an average inward membrane current of 1.5 pA. The distributions of durations of bumps and of gaps between bumps in a burst can be fitted by single exponentials with time constants of 10.2 ms and 40.5 ms, respectively. The distribution of burst durations is a sum of two exponentials; the number of bumps per burst obeyed a geometric distribution (mean 3.2 bumps per burst). Accordingly the elementary events could reflect transitions among three states of the pheromone receptor molecule: the vacant receptor (state 1), the pheromone-receptor complex (state 2), and the activated complex (state 3). The calculated rate constants of the transitions between states are k(21)=7.7 s(-1), k(23)=16.8 s(-1), and k(32)=98 s(-1).
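
    The three-state scheme and the quoted rate constants lend themselves to a quick stochastic sketch. The Python snippet below simulates binding events with the abstract's rate constants; the way bumps and burst durations are counted here is our own assumption, so the simulated statistics need not reproduce the reported mean of 3.2 bumps per burst exactly.

    import numpy as np

    # 1 = vacant receptor, 2 = pheromone-receptor complex, 3 = activated complex (a "bump")
    k21, k23, k32 = 7.7, 16.8, 98.0   # s^-1, from the abstract
    rng = np.random.default_rng(1)

    def one_burst():
        """Simulate one binding event; return (number of bumps, burst duration in s)."""
        t, bumps, state = 0.0, 0, 2
        while True:
            if state == 2:
                t += rng.exponential(1.0 / (k21 + k23))
                if rng.random() < k21 / (k21 + k23):
                    return bumps, t          # complex dissociates: burst ends
                state = 3                    # activation: one more bump
                bumps += 1
            else:                            # state 3: bump lasts ~1/k32
                t += rng.exponential(1.0 / k32)
                state = 2

    stats = np.array([one_burst() for _ in range(20000)])
    print("mean bumps per burst:", stats[:, 0].mean())
    print("mean burst duration (ms):", 1e3 * stats[:, 1].mean())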

  13. The stationary non-equilibrium plasma of cosmic-ray electrons and positrons

    NASA Astrophysics Data System (ADS)

    Tomaschitz, Roman

    2016-06-01

    The statistical properties of the two-component plasma of cosmic-ray electrons and positrons measured by the AMS-02 experiment on the International Space Station and the HESS array of imaging atmospheric Cherenkov telescopes are analyzed. Stationary non-equilibrium distributions defining the relativistic electron-positron plasma are derived semi-empirically by performing spectral fits to the flux data and reconstructing the spectral number densities of the electronic and positronic components in phase space. These distributions are relativistic power-law densities with exponential cutoff, admitting an extensive entropy variable and converging to the Maxwell-Boltzmann or Fermi-Dirac distributions in the non-relativistic limit. Cosmic-ray electrons and positrons constitute a classical (low-density high-temperature) plasma due to the low fugacity in the quantized partition function. The positron fraction is assembled from the flux densities inferred from least-squares fits to the electron and positron spectra and is subjected to test by comparing with the AMS-02 flux ratio measured in the GeV interval. The calculated positron fraction extends to TeV energies, predicting a broad spectral peak at about 1 TeV followed by exponential decay.
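
    As a rough illustration of the kind of spectral fit described here, the sketch below performs a least-squares fit of a power law with exponential cutoff to synthetic flux data in Python. The functional form follows the abstract; the data, parameter values and fitting details are assumptions for illustration only.

    import numpy as np
    from scipy.optimize import curve_fit

    def log_flux(E, logA, gamma, invEc):
        # log of dN/dE = A * E**(-gamma) * exp(-E/Ec), fitted in log space so
        # the many decades of flux are weighted evenly
        return logA - gamma * np.log(E) - E * invEc

    rng = np.random.default_rng(2)
    E = np.logspace(0.0, 3.5, 40)                          # energy, GeV
    true = np.exp(log_flux(E, np.log(100.0), 3.1, 1.0 / 900.0))
    flux = true * rng.lognormal(0.0, 0.1, E.size)          # synthetic measurements

    popt, _ = curve_fit(log_flux, E, np.log(flux), p0=[0.0, 3.0, 1.0e-3])
    print("A =", np.exp(popt[0]), " gamma =", popt[1], " Ec =", 1.0 / popt[2], "GeV")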

  14. A new probability distribution model of turbulent irradiance based on Born perturbation theory

    NASA Astrophysics Data System (ADS)

    Wang, Hongxing; Liu, Min; Hu, Hao; Wang, Qian; Liu, Xiguo

    2010-10-01

    The PDF (probability density function) of irradiance fluctuations in a turbulent atmosphere is still an unsettled subject. Theory reliably describes the behavior in the weak-turbulence regime, but theoretical descriptions in the strong- and whole-turbulence regimes remain controversial. Based on Born perturbation theory, the physical manifestations and correlations of three typical PDF models (Rice-Nakagami, exponential-Bessel and negative-exponential distributions) were theoretically analyzed. It is shown that these models can be derived by separately making circular-Gaussian, strong-turbulence and strong-turbulence-circular-Gaussian approximations in Born perturbation theory, which refutes the viewpoint that the Rice-Nakagami model is applicable only in the extremely weak turbulence regime and provides theoretical arguments for choosing rational models in practical applications. A common shortcoming of the three models is that they are all approximations. A new model, called the Maclaurin-spread distribution, is proposed without any approximation except for assuming the correlation coefficient to be zero; the new model is therefore considered to reflect Born perturbation theory exactly. Simulation results confirm the accuracy of this new model.

  15. Effects of topologies on signal propagation in feedforward networks

    NASA Astrophysics Data System (ADS)

    Zhao, Jia; Qin, Ying-Mei; Che, Yan-Qiu

    2018-01-01

    We systematically investigate the effects of topologies on signal propagation in feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. FFNs with different topological structures are constructed with the same number of in-degrees and out-degrees in each layer and are given the same input signal. The propagation of firing patterns and the firing rates are found to be affected by the distribution of neuron connections in the FFNs. Synchronous firing patterns emerge in the later layers of FFNs with identical, uniform, and exponential degree distributions, but the number of synchronous spike trains in the output layers differs noticeably among the three topologies. The firing rates in the output layers of the three FFNs can be ordered from high to low according to their topological structures as exponential, uniform, and identical distributions, respectively. Interestingly, the sequence of spiking regularity in the output layers of the three FFNs is consistent with the firing rates, but their firing synchronization is in the opposite order. In summary, the node degree is an important factor that can dramatically influence the neuronal network activity.

  16. Effects of topologies on signal propagation in feedforward networks.

    PubMed

    Zhao, Jia; Qin, Ying-Mei; Che, Yan-Qiu

    2018-01-01

    We systematically investigate the effects of topologies on signal propagation in feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. FFNs with different topological structures are constructed with the same number of in-degrees and out-degrees in each layer and are given the same input signal. The propagation of firing patterns and the firing rates are found to be affected by the distribution of neuron connections in the FFNs. Synchronous firing patterns emerge in the later layers of FFNs with identical, uniform, and exponential degree distributions, but the number of synchronous spike trains in the output layers differs noticeably among the three topologies. The firing rates in the output layers of the three FFNs can be ordered from high to low according to their topological structures as exponential, uniform, and identical distributions, respectively. Interestingly, the sequence of spiking regularity in the output layers of the three FFNs is consistent with the firing rates, but their firing synchronization is in the opposite order. In summary, the node degree is an important factor that can dramatically influence the neuronal network activity.

  17. A study of personal income distributions in Australia and Italy

    NASA Astrophysics Data System (ADS)

    Banerjee, Anand; Yakovenko, Victor

    2006-03-01

    The study of income distribution has a long history. A century ago, the Italian physicist and economist Pareto proposed that income distribution obeys a universal power law, valid for all times and countries. Subsequent studies showed that only the top 1-3% of the population follows a power law. For the USA, the remaining 97-99% of the population follows an exponential distribution [1]. We present the results of a similar study for Australia and Italy. [1] A. C. Silva and V. M. Yakovenko, Europhys. Lett. 69, 304 (2005).

  18. Learning Search Control Knowledge for Deep Space Network Scheduling

    NASA Technical Reports Server (NTRS)

    Gratch, Jonathan; Chien, Steve; DeJong, Gerald

    1993-01-01

    While the general class of scheduling problems is NP-hard in worst-case complexity, in practice, for specific distributions of problems and constraints, domain-specific solutions have been shown to run in much better than exponential time.

  19. Inland empire logistics GIS mapping project.

    DOT National Transportation Integrated Search

    2009-01-01

    The Inland Empire has experienced exponential growth in warehousing and distribution facilities within the last decade, and this growth seems likely to continue well into the future. Where are these facilities located? How large are the facilitie...

  20. Modeling of microporous silicon betaelectric converter with 63Ni plating in GEANT4 toolkit*

    NASA Astrophysics Data System (ADS)

    Zelenkov, P. V.; Sidorov, V. G.; Lelekov, E. T.; Khoroshko, A. Y.; Bogdanov, S. V.; Lelekov, A. T.

    2016-04-01

    A model of the electron-hole pair generation rate distribution in the semiconductor is needed to optimize the parameters of a microporous silicon betaelectric converter that uses 63Ni isotope radiation. Using Monte Carlo methods of the GEANT4 toolkit with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with an exponential function, and the optimal pore configuration was estimated.

  1. Regionalizing nonparametric models of precipitation amounts on different temporal scales

    NASA Astrophysics Data System (ADS)

    Mosthaf, Tobias; Bárdossy, András

    2017-05-01

    Parametric distribution functions are commonly used to model precipitation amounts corresponding to different durations. The precipitation amounts themselves are crucial for stochastic rainfall generators and weather generators. Nonparametric kernel density estimates (KDEs) offer a more flexible way to model precipitation amounts. As their name indicates, however, these models have no parameters that can easily be regionalized to run rainfall generators at ungauged as well as gauged locations. To overcome this deficiency, we present a new interpolation scheme for nonparametric models and evaluate it for temporal resolutions ranging from hourly to monthly. During the evaluation, the nonparametric methods are compared to commonly used parametric models such as the two-parameter gamma and the mixed-exponential distribution. As water volume is an essential quantity for applications like flood modeling, a Lorenz-curve-based criterion is also introduced. To add value to the estimation at sub-daily resolutions, we incorporated the plentiful daily measurements into the interpolation scheme and evaluated this idea. The study region is the federal state of Baden-Württemberg in the southwest of Germany, with more than 500 rain gauges. The validation results show that the newly proposed nonparametric interpolation scheme provides reasonable results and that incorporating daily values in the regionalization of sub-daily models is very beneficial.
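
    To make the comparison concrete, here is a minimal Python sketch, on synthetic wet-day amounts rather than the Baden-Württemberg data, of fitting a two-parameter gamma distribution and a kernel density estimate to the same sample and comparing the resulting densities.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n = 2000
    # synthetic wet-day amounts (mm) drawn from a two-component mixed exponential
    comp = rng.random(n) < 0.7
    amounts = np.where(comp, rng.exponential(2.0, n), rng.exponential(12.0, n))

    # two-parameter gamma fit (location fixed at zero, as is usual for rainfall)
    shape, loc, scale = stats.gamma.fit(amounts, floc=0.0)

    # nonparametric kernel density estimate of the same sample
    kde = stats.gaussian_kde(amounts)

    x = np.linspace(0.1, 60.0, 200)
    gamma_pdf = stats.gamma.pdf(x, shape, loc=loc, scale=scale)
    kde_pdf = kde(x)
    print("gamma shape/scale:", shape, scale)
    print("max abs PDF difference:", np.abs(gamma_pdf - kde_pdf).max())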

  2. Getting the tail to wag the dog: Incorporating groundwater transport into catchment solute transport models using rank StorAge Selection (rSAS) functions

    NASA Astrophysics Data System (ADS)

    Harman, C. J.

    2015-12-01

    Surface water hydrologic models are increasingly used to analyze the transport of solutes such as nitrate through the landscape. However, many of these models cannot adequately capture the effect of groundwater flow paths, which can have long travel times and accumulate legacy contaminants, releasing them to streams over decades. If these long lag times are not accounted for, the short-term efficacy of management activities to reduce nitrogen loads may be overestimated. Models that adopt a simple 'well-mixed' assumption, leading to an exponential transit time distribution at steady state, cannot adequately capture the broadly skewed nature of groundwater transit times in typical watersheds. Here I will demonstrate how StorAge Selection functions can be used to capture the long lag times of groundwater in a subwatershed-based hydrologic model framework typical of models like SWAT, HSPF, HBV, PRMS and others. These functions can be selected and calibrated to reproduce historical data where available, but can also be fitted to the results of a steady-state groundwater transport model like MODFLOW/MODPATH, allowing those results to directly inform the parameterization of an unsteady surface water model. The long tails of the transit time distribution predicted by the groundwater model can then be completely captured by the surface water model. Examples of this application in the Chesapeake Bay watersheds and elsewhere will be given.

  3. Theoretical Studies of Spectroscopic Line Mixing in Remote Sensing Applications

    NASA Astrophysics Data System (ADS)

    Ma, Q.

    2015-12-01

    The phenomenon of collisional transfer of intensity due to line mixing is of increasing importance for atmospheric monitoring. From a theoretical point of view, all relevant information about the collisional processes is contained in the relaxation matrix, where the diagonal elements give half-widths and shifts and the off-diagonal elements correspond to line interferences. For simple systems such as diatom-atom or diatom-diatom pairs, accurate fully quantum calculations based on interaction potentials are feasible. However, fully quantum calculations become unrealistic for more complex systems. On the other hand, the semi-classical Robert-Bonamy (RB) formalism, which has been widely used to calculate half-widths and shifts for decades, fails in calculating the off-diagonal matrix elements. As a result, in order to simulate atmospheric spectra where the effects of line mixing are important, semi-empirical fitting or scaling laws such as the ECS and IOS models are commonly used. Recently, while scrutinizing the development of the RB formalism, we found that its authors applied the isolated-line approximation when evaluating matrix elements of the Liouville scattering operator given in exponential form. Because the criterion for this approximation is so stringent, it is not valid for many systems of interest in atmospheric applications. Furthermore, it is this approximation that precludes calculation of the whole relaxation matrix. By eliminating this unjustified assumption and accurately evaluating matrix elements of the exponential operators, we have developed a more capable formalism. With this new formalism, we are able not only to reduce uncertainties in calculated half-widths and shifts, but also to remove a once insurmountable obstacle to calculating the whole relaxation matrix. This implies that we can address line mixing with the semi-classical theory based on interaction potentials between the molecular absorber and the molecular perturber. We have applied this formalism to line mixing in the Raman and infrared spectra of molecules such as N2, C2H2, CO2, NH3, and H2O. Rigorous calculations show that our computed relaxation matrices are in good agreement with both experimental data and results derived from the ECS model.

  4. Concentration variance decay during magma mixing: a volcanic chronometer

    PubMed Central

    Perugini, Diego; De Campos, Cristina P.; Petrelli, Maurizio; Dingwell, Donald B.

    2015-01-01

    The mixing of magmas is a common phenomenon in explosive eruptions. Concentration variance is a useful metric of this process and its decay (CVD) with time is an inevitable consequence during the progress of magma mixing. In order to calibrate this petrological/volcanological clock we have performed a time-series of high temperature experiments of magma mixing. The results of these experiments demonstrate that compositional variance decays exponentially with time. With this calibration the CVD rate (CVD-R) becomes a new geochronometer for the time lapse from initiation of mixing to eruption. The resultant novel technique is fully independent of the typically unknown advective history of mixing – a notorious uncertainty which plagues the application of many diffusional analyses of magmatic history. Using the calibrated CVD-R technique we have obtained mingling-to-eruption times for three explosive volcanic eruptions from Campi Flegrei (Italy) in the range of tens of minutes. These in turn imply ascent velocities of 5-8 meters per second. We anticipate the routine application of the CVD-R geochronometer to the eruptive products of active volcanoes in future in order to constrain typical “mixing to eruption” time lapses such that monitoring activities can be targeted at relevant timescales and signals during volcanic unrest. PMID:26387555

  5. A Comparison of the Pencil-of-Function Method with Prony’s Method, Wiener Filters and Other Identification Techniques,

    DTIC Science & Technology

    1977-12-01

    exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much...functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect...to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution

  6. Droplet size and velocity distributions for spray modelling

    NASA Astrophysics Data System (ADS)

    Jones, D. P.; Watkins, A. P.

    2012-01-01

    Methods for constructing droplet size distributions and droplet velocity profiles are examined as a basis for the Eulerian spray model proposed in Beck and Watkins (2002, 2003) [5,6]. Within the spray model, both distributions must be calculated at every control volume at every time step where the spray is present, and valid distributions must be guaranteed. Results show that the Maximum Entropy formalism combined with the Gamma distribution satisfies these conditions for the droplet size distributions. Approximating the droplet velocity profile is shown to be considerably more difficult because it does not have compact support. An exponential model with a constrained exponent offers plausible profiles.

  7. Determinants of Sexual Network Structure and Their Impact on Cumulative Network Measures

    PubMed Central

    Schmid, Boris V.; Kretzschmar, Mirjam

    2012-01-01

    There are four major quantities that are measured in sexual behavior surveys that are thought to be especially relevant for the performance of sexual network models in terms of disease transmission. These are (i) the cumulative distribution of lifetime number of partners, (ii) the distribution of partnership durations, (iii) the distribution of gap lengths between partnerships, and (iv) the number of recent partners. Fitting a network model to these quantities as measured in sexual behavior surveys is expected to result in a good description of Chlamydia trachomatis transmission in terms of the heterogeneity of the distribution of infection in the population. Here we present a simulation model of a sexual contact network, in which we explored the role of behavioral heterogeneity of simulated individuals on the ability of the model to reproduce population-level sexual survey data from the Netherlands and UK. We find that a high level of heterogeneity in the ability of individuals to acquire and maintain (additional) partners strongly facilitates the ability of the model to accurately simulate the powerlaw-like distribution of the lifetime number of partners, and the age at which these partnerships were accumulated, as surveyed in actual sexual contact networks. Other sexual network features, such as the gap length between partnerships and the partnership duration, could–at the current level of detail of sexual survey data against which they were compared–be accurately modeled by a constant value (for transitional concurrency) and by exponential distributions (for partnership duration). Furthermore, we observe that epidemiological measures on disease prevalence in survey data can be used as a powerful tool for building accurate sexual contact networks, as these measures provide information on the level of mixing between individuals of different levels of sexual activity in the population, a parameter that is hard to acquire through surveying individuals. PMID:22570594

  8. Enhanced tunability of the composition in silicon oxynitride thin films by the reactive gas pulsing process

    NASA Astrophysics Data System (ADS)

    Aubry, Eric; Weber, Sylvain; Billard, Alain; Martin, Nicolas

    2014-01-01

    Silicon oxynitride thin films were sputter deposited by the reactive gas pulsing process. A pure silicon target was sputtered in a mixed Ar, N2 and O2 atmosphere. Oxygen gas alone was introduced periodically using exponential signals. In order to vary the injected O2 quantity in the deposition chamber during one pulse at constant injection time (TON), the mounting (rise) time τmou of the exponential signals was systematically changed for each deposition. Based on real-time measurements of the discharge voltage and the I(O*)/I(Ar*) emission line ratio, it is shown that the oscillations of the discharge voltage during the TON and TOFF times (injection of O2 stopped) are attributed to the preferential adsorption of oxygen compared to that of nitrogen. The sputtering mode alternates from a fully nitrided mode (TOFF time) to a mixed mode (nitrided and oxidized) during the TON time. For the highest injected O2 quantities, the mixed mode tends toward a fully oxidized mode due to an increase of the trapped oxygen on the target. The oxygen (nitrogen) concentration in the SiOxNy films varies similarly (inversely) as the oxygen is trapped. Moreover, measurements of the contamination speed of the Si target surface are connected to different behaviors of the process. At low injected O2 quantities, the nitrided mode predominates over the oxidized one during the TON time, leading to the formation of Si3N4-yOy-like films. Inversely, the mixed mode takes place for high injected O2 quantities, and the oxidized mode prevails over the nitrided one, producing SiO2-xNx-like films.

  9. Exponential stabilization of magnetoelastic waves in a Mindlin-Timoshenko plate by localized internal damping

    NASA Astrophysics Data System (ADS)

    Grobbelaar-Van Dalsen, Marié

    2015-08-01

    This article is a continuation of our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) on the polynomial stabilization of a linear model for the magnetoelastic interactions in a two-dimensional electrically conducting Mindlin-Timoshenko plate. We introduce nonlinear damping that is effective only in a small portion of the interior of the plate. It turns out that the model is uniformly exponentially stable when the function representing the locally distributed damping behaves linearly near the origin. However, the use of Mindlin-Timoshenko plate theory in the model enforces a restriction on the region occupied by the plate.

  10. A Parametric Study of Fine-scale Turbulence Mixing Noise

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James; Freund, Jonathan B.

    2002-01-01

    The present paper is a study of aerodynamic noise spectra computed from model functions that describe the source. The study is motivated by the need to improve the spectral shape of the MGBK jet noise prediction methodology at high frequency: the predicted spectrum usually appears less broadband than measurements and decays faster at high frequency. The theoretical representation of the source is based on Lilley's equation. Numerical simulations of high-speed subsonic jets, as well as some recent turbulence measurements, reveal a number of interesting statistical properties of turbulence correlation functions that may have a bearing on radiated noise. These studies indicate that an exponential spatial function may be a more appropriate representation of a two-point correlation than its Gaussian counterpart. The effect of source non-compactness on spectral shape is discussed. It is shown that source non-compactness could well be the differentiating factor between the Gaussian and exponential model functions. In particular, the fall-off of the noise spectra at high frequency is studied, and it is shown that a non-compact source with an exponential model function results in a broader spectrum and better agreement with data. An alternate source model that represents the source as a covariance of the convective derivative of the fine-scale turbulence kinetic energy is also examined.
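
    The qualitative difference between the two correlation models can be seen with a generic one-dimensional sketch (not the MGBK source model itself): the spectrum implied by an exponential correlation is Lorentzian-like and falls off much more slowly at high frequency than the spectrum implied by a Gaussian correlation. The Python snippet below, with arbitrary units and an assumed correlation length, illustrates this.

    import numpy as np

    tau = np.linspace(-50.0, 50.0, 4096)      # separation, arbitrary units
    L = 1.0                                   # assumed correlation length
    R_gauss = np.exp(-(tau / L) ** 2)         # Gaussian two-point correlation
    R_exp = np.exp(-np.abs(tau) / L)          # exponential two-point correlation

    def spectrum(R, tau):
        dt = tau[1] - tau[0]
        return np.fft.rfftfreq(tau.size, dt), np.abs(np.fft.rfft(R)) * dt

    f, S_gauss = spectrum(R_gauss, tau)
    _, S_exp = spectrum(R_exp, tau)

    # the exponential model gives a Lorentzian-like spectrum (~ f**-2 tail), which decays
    # far more slowly at high frequency than the Gaussian model (~ exp(-f**2))
    i = np.searchsorted(f, 1.0)
    print("spectrum ratio exponential/Gaussian at f = 1:", S_exp[i] / S_gauss[i])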

  11. Why fast magnetic reconnection is so prevalent

    NASA Astrophysics Data System (ADS)

    Boozer, Allen H.

    2018-02-01

    Evolving magnetic fields are shown to generically reach a state of fast magnetic reconnection in which magnetic field line connections change and magnetic energy is released at an Alfvénic rate. This occurs even in plasmas with zero resistivity; only the finiteness of the mass of the lightest charged particle, an electron, is required. The speed and prevalence of Alfvénic or fast magnetic reconnection imply that its cause must be contained within the ideal evolution equation for magnetic fields, ∂B/∂t = ∇ × (u × B), where u is the velocity of the magnetic field lines. For a generic u, neighbouring magnetic field lines develop a separation that increases exponentially, as exp(σ(ℓ, t)), with the distance ℓ along a line. This exponentially enhances the sensitivity of the evolution to non-ideal effects. An analogous effect, the importance of stirring to produce a large-scale flow and enhance mixing, has been recognized by cooks through many millennia, but the importance of the large-scale flow to reconnection is customarily ignored. In part this is due to the sixty-year focus of reconnection theory on two-coordinate models, which eliminate the exponential enhancement that is generic with three coordinates. A simple three-coordinate model is developed, which could be used to address many unanswered questions.

  12. Parabolic replicator dynamics and the principle of minimum Tsallis information gain

    PubMed Central

    2013-01-01

    Background Non-linear, parabolic (sub-exponential) and hyperbolic (super-exponential) models of prebiological evolution of molecular replicators have been proposed and extensively studied. The parabolic models appear to be the most realistic approximations of real-life replicator systems due primarily to product inhibition. Unlike the more traditional exponential models, the distribution of individual frequencies in an evolving parabolic population is not described by the Maximum Entropy (MaxEnt) Principle in its traditional form, whereby the distribution with the maximum Shannon entropy is chosen among all the distributions that are possible under the given constraints. We sought to identify a more general form of the MaxEnt principle that would be applicable to parabolic growth. Results We consider a model of a population that reproduces according to the parabolic growth law and show that the frequencies of individuals in the population minimize the Tsallis relative entropy (non-additive information gain) at each time moment. Next, we consider a model of a parabolically growing population that maintains a constant total size and provide an “implicit” solution for this system. We show that in this case, the frequencies of the individuals in the population also minimize the Tsallis information gain at each moment of the ‘internal time” of the population. Conclusions The results of this analysis show that the general MaxEnt principle is the underlying law for the evolution of a broad class of replicator systems including not only exponential but also parabolic and hyperbolic systems. The choice of the appropriate entropy (information) function depends on the growth dynamics of a particular class of systems. The Tsallis entropy is non-additive for independent subsystems, i.e. the information on the subsystems is insufficient to describe the system as a whole. In the context of prebiotic evolution, this “non-reductionist” nature of parabolic replicator systems might reflect the importance of group selection and competition between ensembles of cooperating replicators. Reviewers This article was reviewed by Viswanadham Sridhara (nominated by Claus Wilke), Puushottam Dixit (nominated by Sergei Maslov), and Nick Grishin. For the complete reviews, see the Reviewers’ Reports section. PMID:23937956

  13. Statistical independence of the initial conditions in chaotic mixing.

    PubMed

    García de la Cruz, J M; Vassilicos, J C; Rossi, L

    2017-11-01

    Experimental evidence of the scalar convergence towards a global strange eigenmode independent of the scalar initial condition in chaotic mixing is provided. This convergence, underpinning the independent nature of chaotic mixing in any passive scalar, is presented by scalar fields with different initial conditions casting statistically similar shapes when advected by periodic unsteady flows. As the scalar patterns converge towards a global strange eigenmode, the scalar filaments, locally aligned with the direction of maximum stretching, as described by the Lagrangian stretching theory, stack together in an inhomogeneous pattern at distances smaller than their asymptotic minimum widths. The scalar variance decay becomes then exponential and independent of the scalar diffusivity or initial condition. In this work, mixing is achieved by advecting the scalar using a set of laminar flows with unsteady periodic topology. These flows, that resemble the tendril-whorl map, are obtained by morphing the forcing geometry in an electromagnetic free surface 2D mixing experiment. This forcing generates a velocity field which periodically switches between two concentric hyperbolic and elliptic stagnation points. In agreement with previous literature, the velocity fields obtained produce a chaotic mixer with two regions: a central mixing and an external extensional area. These two regions are interconnected through two pairs of fluid conduits which transfer clean and dyed fluid from the extensional area towards the mixing region and a homogenized mixture from the mixing area towards the extensional region.

  14. Bimodal spatial distribution of macular pigment: evidence of a gender relationship

    NASA Astrophysics Data System (ADS)

    Delori, François C.; Goger, Douglas G.; Keilhauer, Claudia; Salvetti, Paola; Staurenghi, Giovanni

    2006-03-01

    The spatial distribution of the optical density of the human macular pigment measured by two-wavelength autofluorescence imaging exhibits in over half of the subjects an annulus of higher density superimposed on a central exponential-like distribution. This annulus is located at about 0.7° from the fovea. Women have broader distributions than men, and they are more likely to exhibit this bimodal distribution. Maxwell's spot reported by subjects matches the measured distribution of their pigment. Evidence that the shape of the foveal depression may be gender related leads us to hypothesize that differences in macular pigment distribution are related to anatomical differences in the shape of the foveal depression.

  15. Craig's XY distribution and the statistics of Lagrangian power in two-dimensional turbulence

    NASA Astrophysics Data System (ADS)

    Bandi, Mahesh M.; Connaughton, Colm

    2008-03-01

    We examine the probability distribution function (PDF) of the energy injection rate (power) in numerical simulations of stationary two-dimensional (2D) turbulence in the Lagrangian frame. The simulation is designed to mimic an electromagnetically driven fluid layer, a well-documented system for generating 2D turbulence in the laboratory. In our simulations, the forcing and velocity fields are close to Gaussian. On the other hand, the measured PDF of injected power is very sharply peaked at zero, suggestive of a singularity there, with tails which are exponential but asymmetric. Large positive fluctuations are more probable than large negative fluctuations. It is this asymmetry of the tails which leads to a net positive mean value for the energy input despite the most probable value being zero. The main features of the power distribution are well described by Craig’s XY distribution for the PDF of the product of two correlated normal variables. We show that the power distribution should exhibit a logarithmic singularity at zero and decay exponentially for large absolute values of the power. We calculate the asymptotic behavior and express the asymmetry of the tails in terms of the correlation coefficient of the force and velocity. We compare the measured PDFs with the theoretical calculations and briefly discuss how the power PDF might change with other forcing mechanisms.
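
    The behaviour described here is easy to reproduce numerically. The Python sketch below draws the product of two correlated standard normal variables (a stand-in for force times velocity, with an assumed correlation coefficient) and recovers the sharp peak at zero and the asymmetric exponential tails with decay rates 1/(1+rho) and 1/(1-rho).

    import numpy as np

    rho = 0.3                                  # assumed force-velocity correlation
    rng = np.random.default_rng(4)
    n = 4_000_000

    f = rng.standard_normal(n)                                      # "force"
    v = rho * f + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)    # correlated "velocity"
    w = f * v                                                       # instantaneous power

    print("mean power:", w.mean(), "(theory: rho =", rho, ")")
    print("fraction with |w| < 0.05:", np.mean(np.abs(w) < 0.05))   # mass piles up near zero

    # the tails decay as exp(-w/(1+rho)) for w > 0 and exp(-|w|/(1-rho)) for w < 0
    hist, edges = np.histogram(w, bins=400, range=(-12.0, 12.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    pos = (centers > 3.0) & (centers < 8.0) & (hist > 0)
    neg = (centers < -3.0) & (centers > -7.0) & (hist > 0)
    rate_pos = -np.polyfit(centers[pos], np.log(hist[pos]), 1)[0]
    rate_neg = np.polyfit(centers[neg], np.log(hist[neg]), 1)[0]
    print("fitted tail rates:", rate_pos, rate_neg,
          "(theory:", 1 / (1 + rho), ",", 1 / (1 - rho), ")")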

  16. Distribution of fixed beneficial mutations and the rate of adaptation in asexual populations

    PubMed Central

    Good, Benjamin H.; Rouzine, Igor M.; Balick, Daniel J.; Hallatschek, Oskar; Desai, Michael M.

    2012-01-01

    When large asexual populations adapt, competition between simultaneously segregating mutations slows the rate of adaptation and restricts the set of mutations that eventually fix. This phenomenon of interference arises from competition between mutations of different strengths as well as competition between mutations that arise on different fitness backgrounds. Previous work has explored each of these effects in isolation, but the way they combine to influence the dynamics of adaptation remains largely unknown. Here, we describe a theoretical model to treat both aspects of interference in large populations. We calculate the rate of adaptation and the distribution of fixed mutational effects accumulated by the population. We focus particular attention on the case when the effects of beneficial mutations are exponentially distributed, as well as on a more general class of exponential-like distributions. In both cases, we show that the rate of adaptation and the influence of genetic background on the fixation of new mutants is equivalent to an effective model with a single selection coefficient and rescaled mutation rate, and we explicitly calculate these effective parameters. We find that the effective selection coefficient exactly coincides with the most common fixed mutational effect. This equivalence leads to an intuitive picture of the relative importance of different types of interference effects, which can shift dramatically as a function of the population size, mutation rate, and the underlying distribution of fitness effects. PMID:22371564

  17. Craig's XY distribution and the statistics of Lagrangian power in two-dimensional turbulence.

    PubMed

    Bandi, Mahesh M; Connaughton, Colm

    2008-03-01

    We examine the probability distribution function (PDF) of the energy injection rate (power) in numerical simulations of stationary two-dimensional (2D) turbulence in the Lagrangian frame. The simulation is designed to mimic an electromagnetically driven fluid layer, a well-documented system for generating 2D turbulence in the laboratory. In our simulations, the forcing and velocity fields are close to Gaussian. On the other hand, the measured PDF of injected power is very sharply peaked at zero, suggestive of a singularity there, with tails which are exponential but asymmetric. Large positive fluctuations are more probable than large negative fluctuations. It is this asymmetry of the tails which leads to a net positive mean value for the energy input despite the most probable value being zero. The main features of the power distribution are well described by Craig's XY distribution for the PDF of the product of two correlated normal variables. We show that the power distribution should exhibit a logarithmic singularity at zero and decay exponentially for large absolute values of the power. We calculate the asymptotic behavior and express the asymmetry of the tails in terms of the correlation coefficient of the force and velocity. We compare the measured PDFs with the theoretical calculations and briefly discuss how the power PDF might change with other forcing mechanisms.

  18. Mathematical models to characterize early epidemic growth: A Review

    PubMed Central

    Chowell, Gerardo; Sattenspiel, Lisa; Bansal, Shweta; Viboud, Cécile

    2016-01-01

    There is a long tradition of using mathematical models to generate insights into the transmission dynamics of infectious diseases and assess the potential impact of different intervention strategies. The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing reliable models that capture the baseline transmission characteristics of specific pathogens and social contexts. More refined models are needed however, in particular to account for variation in the early growth dynamics of real epidemics and to gain a better understanding of the mechanisms at play. Here, we review recent progress on modeling and characterizing early epidemic growth patterns from infectious disease outbreak data, and survey the types of mathematical formulations that are most useful for capturing a diversity of early epidemic growth profiles, ranging from sub-exponential to exponential growth dynamics. Specifically, we review mathematical models that incorporate spatial details or realistic population mixing structures, including meta-population models, individual-based network models, and simple SIR-type models that incorporate the effects of reactive behavior changes or inhomogeneous mixing. In this process, we also analyze simulation data stemming from detailed large-scale agent-based models previously designed and calibrated to study how realistic social networks and disease transmission characteristics shape early epidemic growth patterns, general transmission dynamics, and control of international disease emergencies such as the 2009 A/H1N1 influenza pandemic and the 2014-15 Ebola epidemic in West Africa. PMID:27451336

  19. Mathematical models to characterize early epidemic growth: A review

    NASA Astrophysics Data System (ADS)

    Chowell, Gerardo; Sattenspiel, Lisa; Bansal, Shweta; Viboud, Cécile

    2016-09-01

    There is a long tradition of using mathematical models to generate insights into the transmission dynamics of infectious diseases and assess the potential impact of different intervention strategies. The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing reliable models that capture the baseline transmission characteristics of specific pathogens and social contexts. More refined models are needed however, in particular to account for variation in the early growth dynamics of real epidemics and to gain a better understanding of the mechanisms at play. Here, we review recent progress on modeling and characterizing early epidemic growth patterns from infectious disease outbreak data, and survey the types of mathematical formulations that are most useful for capturing a diversity of early epidemic growth profiles, ranging from sub-exponential to exponential growth dynamics. Specifically, we review mathematical models that incorporate spatial details or realistic population mixing structures, including meta-population models, individual-based network models, and simple SIR-type models that incorporate the effects of reactive behavior changes or inhomogeneous mixing. In this process, we also analyze simulation data stemming from detailed large-scale agent-based models previously designed and calibrated to study how realistic social networks and disease transmission characteristics shape early epidemic growth patterns, general transmission dynamics, and control of international disease emergencies such as the 2009 A/H1N1 influenza pandemic and the 2014-2015 Ebola epidemic in West Africa.
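
    As a simple illustration of the sub-exponential-to-exponential range of early growth profiles discussed in this review, the sketch below integrates a generalized-growth-type model dC/dt = r*C^p in Python. The parameter values are arbitrary and the model is used here only as a phenomenological example, not as the authors' preferred formulation.

    import numpy as np
    from scipy.integrate import solve_ivp

    def ggm(t, C, r, p):
        # generalized growth: p = 1 is exponential, 0 < p < 1 is sub-exponential
        return r * C**p

    r, C0 = 0.5, 5.0
    t_eval = np.linspace(0.0, 30.0, 121)

    for p in (0.6, 0.8, 1.0):
        sol = solve_ivp(ggm, (0.0, 30.0), [C0], args=(r, p), t_eval=t_eval)
        print(f"p = {p}: cumulative cases at t = 30 -> {sol.y[0, -1]:.1f}")

    # p = 1 grows as C0 * exp(r t); for p < 1 the late-time growth is polynomial,
    # C(t) ~ (r * (1 - p) * t)**(1 / (1 - p)), i.e. markedly slower than exponential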

  20. Anomalous NMR Relaxation in Cartilage Matrix Components and Native Cartilage: Fractional-Order Models

    PubMed Central

    Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.

    2011-01-01

    We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095
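
    A minimal numerical sketch of the comparison made here, using a synthetic decay curve rather than the cartilage data: fit the same signal with a mono-exponential and with a stretched exponential and compare the residuals and the recovered stretching exponent, which plays the role of the single fractional-order parameter alpha.

    import numpy as np
    from scipy.optimize import curve_fit

    def mono(t, S0, T2):
        return S0 * np.exp(-t / T2)

    def stretched(t, S0, T2, alpha):
        return S0 * np.exp(-(t / T2) ** alpha)

    t = np.linspace(0.2, 200.0, 80)                    # echo times, ms (illustrative)
    rng = np.random.default_rng(5)
    signal = stretched(t, 1.0, 40.0, 0.7) + rng.normal(0.0, 0.01, t.size)

    p_mono, _ = curve_fit(mono, t, signal, p0=[1.0, 50.0])
    p_str, _ = curve_fit(stretched, t, signal, p0=[1.0, 50.0, 0.9])

    rss_mono = np.sum((signal - mono(t, *p_mono)) ** 2)
    rss_str = np.sum((signal - stretched(t, *p_str)) ** 2)
    print("mono RSS:", rss_mono, " stretched RSS:", rss_str, " alpha:", p_str[2])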

  1. Anomalous NMR relaxation in cartilage matrix components and native cartilage: Fractional-order models

    NASA Astrophysics Data System (ADS)

    Magin, Richard L.; Li, Weiguo; Pilar Velasco, M.; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.

    2011-06-01

    We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena ( T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter ( α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for micro-structural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues.

  2. Mathematical Aspects of Reliability-Centered Maintenance

    DTIC Science & Technology

    1977-01-01

    exponential distribution, whose parameter (hazard rate) can be realistically estimated. This distribution is also frequently...statistical methods to the study of physical reality was beset with philosophical problems arising from the irrefutable observation that there is but one...STATISTICS, 2nd ed. New York: John Wiley & Sons; 1954. 5. Kolmogorov, A. Interpolation und Extrapolation von stationären zufälligen Folgen. BULL. DE

  3. Context-Sensitive Detection of Local Community Structure

    DTIC Science & Technology

    2011-04-01

    characters in the Victor Hugo novel Les Miserables (lesmis). [77 vertices, 254 edges] [Knu93]. • The neural network of the nematode C. elegans (c.elegans...adjectives and nouns in the novel David Copperfield by Charles Dickens. [112 vertices, 425 edges] [New06]. • Les Miserables. Co-appearance network of...exponential distribution. The degree distributions of the Network Science, Les Miserables, and Word Adjacencies networks display a similar heavy tail. By

  4. Complexity and Productivity Differentiation Models of Metallogenic Indicator Elements in Rocks and Supergene Media Around Daijiazhuang Pb-Zn Deposit in Dangchang County, Gansu Province

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Jin-zhong, E-mail: viewsino@163.com; Yao, Shu-zhen; Zhang, Zhong-ping

    2013-03-15

    With the help of complexity indices, we quantitatively studied multifractals, frequency distributions, and linear and nonlinear characteristics of geochemical data for exploration of the Daijiazhuang Pb-Zn deposit. Furthermore, we derived productivity differentiation models of elements from thermodynamics and self-organized criticality of metallogenic systems. With respect to frequency distributions and multifractals, only Zn in rocks and most elements except Sb in secondary media, which had been derived mainly from weathering and alluviation, exhibit nonlinear distributions. The relations of productivity to concentrations of metallogenic elements and paragenic elements in rocks and those of elements strongly leached in secondary media can be seen as linear addition of exponential functions with a characteristic weak chaos. The relations of associated elements such as Mo, Sb, and Hg in rocks and other elements in secondary media can be expressed as an exponential function, and the relations of one-phase self-organized geological or metallogenic processes can be represented by a power function, each representing secondary chaos or strong chaos. For secondary media, exploration data of most elements should be processed using nonlinear mathematical methods or should be transformed to linear distributions before processing using linear mathematical methods.

  5. Fluctuations in Wikipedia access-rate and edit-event data

    NASA Astrophysics Data System (ADS)

    Kämpf, Mirko; Tismer, Sebastian; Kantelhardt, Jan W.; Muchnik, Lev

    2012-12-01

    Internet-based social networks often reflect extreme events in nature and society by drastic increases in user activity. We study and compare the dynamics of the two major complex processes necessary for information spread via the online encyclopedia ‘Wikipedia’, i.e., article editing (information upload) and article access (information viewing) based on article edit-event time series and (hourly) user access-rate time series for all articles. Daily and weekly activity patterns occur in addition to fluctuations and bursting activity. The bursts (i.e., significant increases in activity for an extended period of time) are characterized by a power-law distribution of durations of increases and decreases. For describing the recurrence and clustering of bursts we investigate the statistics of the return intervals between them. We find stretched exponential distributions of return intervals in access-rate time series, while edit-event time series yield simple exponential distributions. To characterize the fluctuation behavior we apply detrended fluctuation analysis (DFA), finding that most article access-rate time series are characterized by strong long-term correlations with fluctuation exponents α≈0.9. The results indicate significant differences in the dynamics of information upload and access and help in understanding the complex process of collecting, processing, validating, and distributing information in self-organized social networks.

  6. Beyond Word Frequency: Bursts, Lulls, and Scaling in the Temporal Distributions of Words

    PubMed Central

    Altmann, Eduardo G.; Pierrehumbert, Janet B.; Motter, Adilson E.

    2009-01-01

    Background Zipf's discovery that word frequency distributions obey a power law established parallels between biological and physical processes, and language, laying the groundwork for a complex systems perspective on human communication. More recent research has also identified scaling regularities in the dynamics underlying the successive occurrences of events, suggesting the possibility of similar findings for language as well. Methodology/Principal Findings By considering frequent words in USENET discussion groups and in disparate databases where the language has different levels of formality, here we show that the distributions of distances between successive occurrences of the same word display bursty deviations from a Poisson process and are well characterized by a stretched exponential (Weibull) scaling. The extent of this deviation depends strongly on semantic type – a measure of the logicality of each word – and less strongly on frequency. We develop a generative model of this behavior that fully determines the dynamics of word usage. Conclusions/Significance Recurrence patterns of words are well described by a stretched exponential distribution of recurrence times, an empirical scaling that cannot be anticipated from Zipf's law. Because the use of words provides a uniquely precise and powerful lens on human thought and activity, our findings also have implications for other overt manifestations of collective human dynamics. PMID:19907645
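
    The Weibull (stretched exponential) versus Poisson (exponential) comparison made here can be sketched in a few lines of Python. The gap data below are synthetic stand-ins for word recurrence distances; a fitted Weibull shape parameter well below one is the signature of burstiness.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    # synthetic recurrence distances: a Weibull with shape < 1 is "bursty", i.e. it has an
    # excess of both very short and very long gaps relative to an exponential of the same mean
    gaps = 100.0 * rng.weibull(0.6, 5000)

    c, loc, scale = stats.weibull_min.fit(gaps, floc=0.0)      # stretched-exponential fit
    eloc, escale = stats.expon.fit(gaps, floc=0.0)             # Poisson-process benchmark

    ll_weibull = stats.weibull_min.logpdf(gaps, c, loc, scale).sum()
    ll_expon = stats.expon.logpdf(gaps, eloc, escale).sum()
    print("fitted Weibull shape:", c)
    print("log-likelihood gain over the exponential fit:", ll_weibull - ll_expon)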

  7. Cluster-cluster aggregation with particle replication and chemotaxy: a simple model for the growth of animal cells in culture

    NASA Astrophysics Data System (ADS)

    Alves, S. G.; Martins, M. L.

    2010-09-01

    Aggregation of animal cells in culture comprises a series of motility, collision and adhesion processes of basic relevance for tissue engineering, bioseparations, oncology research and in vitro drug testing. In the present paper, a cluster-cluster aggregation model with stochastic particle replication and chemotactically driven motility is investigated as a model for the growth of animal cells in culture. The focus is on the scaling laws governing the aggregation kinetics. Our simulations reveal that in the absence of chemotaxy the mean cluster size and the total number of clusters scale in time as stretched exponentials dependent on the particle replication rate. Also, the dynamical cluster size distribution functions are represented by a scaling relation in which the scaling function involves a stretched exponential of the time. The introduction of chemoattraction among the particles leads to distribution functions decaying as power laws with exponents that decrease in time. The fractal dimensions and size distributions of the simulated clusters are qualitatively discussed in terms of those determined experimentally for several normal and tumoral cell lines growing in culture. It is shown that particle replication and chemotaxy account for the simplest cluster size distributions of cellular aggregates observed in culture.

  8. Exponential Arithmetic Based Self-Healing Group Key Distribution Scheme with Backward Secrecy under the Resource-Constrained Wireless Networks

    PubMed Central

    Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun

    2016-01-01

    In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in such networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool that can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential-arithmetic-based SGKD (E-SGKD) schemes reduce the storage overhead to a constant and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency and satisfactory security. Simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show an application of the proposed scheme. PMID:27136550

  9. Bayesian analysis of the kinetics of quantal transmitter secretion at the neuromuscular junction.

    PubMed

    Saveliev, Anatoly; Khuzakhmetova, Venera; Samigullin, Dmitry; Skorinkin, Andrey; Kovyazina, Irina; Nikolsky, Eugeny; Bukharaeva, Ellya

    2015-10-01

    The timing of transmitter release from nerve endings is considered nowadays as one of the factors determining the plasticity and efficacy of synaptic transmission. In the neuromuscular junction, the moments of release of individual acetylcholine quanta are related to the synaptic delays of uniquantal endplate currents recorded under conditions of lowered extracellular calcium. Using Bayesian modelling, we performed a statistical analysis of synaptic delays in mouse neuromuscular junction with different patterns of rhythmic nerve stimulation and when the entry of calcium ions into the nerve terminal was modified. We have obtained a statistical model of the release timing which is represented as the summation of two independent statistical distributions. The first of these is the exponentially modified Gaussian distribution. The mixture of normal and exponential components in this distribution can be interpreted as a two-stage mechanism of early and late periods of phasic synchronous secretion. The parameters of this distribution depend on both the stimulation frequency of the motor nerve and the calcium ions' entry conditions. The second distribution was modelled as quasi-uniform, with parameters independent of nerve stimulation frequency and calcium entry. Two different probability density functions for the distribution of synaptic delays suggest at least two independent processes controlling the time course of secretion, one of them potentially involving two stages. The relative contribution of these processes to the total number of mediator quanta released depends differently on the motor nerve stimulation pattern and on calcium ion entry into nerve endings.
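
    The two-component description (an exponentially modified Gaussian for phasic release mixed with a quasi-uniform component) can be written down directly with SciPy's exponnorm distribution. The Python sketch below uses illustrative parameter values, not the fitted values from the study.

    import numpy as np
    from scipy import stats

    mu, sigma, tau = 0.6, 0.15, 0.5     # ms; Gaussian and exponential parts (assumed values)
    w_phasic = 0.9                      # assumed weight of the phasic (EMG) component
    t_lo, t_hi = 0.2, 5.0               # ms; assumed support of the quasi-uniform component

    def delay_pdf(t):
        # exponentially modified Gaussian: scipy's exponnorm shape parameter is K = tau / sigma
        emg = stats.exponnorm.pdf(t, tau / sigma, loc=mu, scale=sigma)
        uni = stats.uniform.pdf(t, loc=t_lo, scale=t_hi - t_lo)
        return w_phasic * emg + (1.0 - w_phasic) * uni

    t = np.linspace(0.0, 6.0, 601)
    pdf = delay_pdf(t)
    print("most probable delay (ms):", t[np.argmax(pdf)])
    print("PDF integrates to ~1:", pdf.sum() * (t[1] - t[0]))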

  10. Lévy flight with absorption: A model for diffusing diffusivity with long tails

    NASA Astrophysics Data System (ADS)

    Jain, Rohit; Sebastian, K. L.

    2017-03-01

    We consider diffusion of a particle in a rearranging environment, so that the diffusivity of the particle is a stochastic function of time. In our previous model of "diffusing diffusivity" [Jain and Sebastian, J. Phys. Chem. B 120, 3988 (2016), 10.1021/acs.jpcb.6b01527], it was shown that the mean square displacement of the particle remains Fickian, i.e., ∝ T at all times, but the probability distribution of the particle displacement is not Gaussian at all times. It is exponential at short times and crosses over to become Gaussian only in the large-time limit in the case where the distribution of D in that model has a steady-state limit which is exponential, i.e., π_e(D) ∼ e^(−D/D_0). In the present study, we model the diffusivity of a particle as a Lévy flight process so that D has a power-law tailed distribution, viz., π_e(D) ∼ D^(−1−α) with 0 < α < 1. We find that in the short-time limit the width of the displacement distribution is proportional to √T, implying that the diffusion is Fickian. But for long times the width is proportional to T^(1/2α), which is a characteristic of anomalous diffusion. The distribution function for the displacement of the particle is found to be a symmetric stable distribution with stability index 2α, which preserves its shape at all times.
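
    The qualitative effect of a power-law-tailed diffusivity can be illustrated with a short superstatistics-style simulation (hypothetical parameters, not the paper's calculation): draw D from a density with tail π_e(D) ∼ D^(−1−α), then draw a Gaussian displacement with variance 2DT and inspect how heavy the resulting displacement tail is.

      # Displacements generated with a random diffusivity D drawn from a
      # power-law tail pi_e(D) ~ D^(-1-alpha), 0 < alpha < 1, become heavy-
      # tailed themselves. Parameter values are illustrative only.
      import numpy as np

      rng = np.random.default_rng(1)
      alpha, D_min, T = 0.6, 1e-3, 1.0   # hypothetical values

      n = 200_000
      u = 1.0 - rng.random(n)            # uniform on (0, 1]
      D = D_min * u ** (-1.0 / alpha)    # Pareto tail: P(D > d) = (d/D_min)^(-alpha)
      x = rng.normal(0.0, np.sqrt(2.0 * D * T))   # Gaussian displacement given D

      # Heavy tail: upper quantiles of |x| grow much faster than for a Gaussian
      print("|x| quantiles (50%, 99%, 99.9%):",
            np.quantile(np.abs(x), [0.5, 0.99, 0.999]))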

  11. Investigation with an Interferometer of the Turbulent Mixing of a Free Supersonic Jet

    NASA Technical Reports Server (NTRS)

    Gooderum, Paul B; Wood, George P; Brevoort, Maurice J

    1950-01-01

    The free turbulent mixing of a supersonic jet of Mach number 1.6 has been experimentally investigated. An interferometer, of which a description is given, was used for the investigation. Density and velocity distributions through the mixing zone have been obtained. It was found that there was similarity in distribution at the cross sections investigated and that, in the subsonic portion of the mixing zone, the velocity distribution fitted the theoretical distribution for incompressible flow. It was found that the rates of spread of the mixing zone both into the jet and into the ambient air were less than those of subsonic jets.

  12. An Analysis of Freight Forwarder Operations in an International Distribution Channel.

    DTIC Science & Technology

    1987-01-01

    Only fragmentary indexed text is available for this record. Table-of-contents entries include "International Marketing Mix" and "Security Assistance Distribution Channel", and the text fragments read: "... an item is ultimately derived from the interaction of variables in the marketing mix. Of those variables, the distribution functions seem to allow the ..." The record also cites a paper on distribution as a component of the marketing mix in the Proceedings of the NCPDM Fall Meeting, National Council of Physical Distribution Management, San Francisco, CA, 1982.

  13. Obstructive sleep apnea alters sleep stage transition dynamics.

    PubMed

    Bianchi, Matt T; Cash, Sydney S; Mietus, Joseph; Peng, Chung-Kang; Thomas, Robert

    2010-06-28

    Enhanced characterization of sleep architecture, compared with routine polysomnographic metrics such as stage percentages and sleep efficiency, may improve the predictive phenotyping of fragmented sleep. One approach involves using stage transition analysis to characterize sleep continuity. We analyzed hypnograms from Sleep Heart Health Study (SHHS) participants using the following stage designations: wake after sleep onset (WASO), non-rapid eye movement (NREM) sleep, and REM sleep. We show that individual patient hypnograms contain an insufficient number of bouts to adequately describe the transition kinetics, necessitating pooling of data. We compared a control group of individuals free of medications, obstructive sleep apnea (OSA), medical co-morbidities, or sleepiness (n = 374) with mild (n = 496) or severe OSA (n = 338). WASO, REM sleep, and NREM sleep bout durations exhibited multi-exponential temporal dynamics. The presence of OSA accelerated the "decay" rate of NREM and REM sleep bouts, resulting in instability manifesting as shorter bouts and an increased number of stage transitions. For WASO bouts, previously attributed to a power law process, a multi-exponential decay described the data well. Simulations demonstrated that a multi-exponential process can mimic a power law distribution. OSA alters sleep architecture dynamics by decreasing the temporal stability of NREM and REM sleep bouts. Multi-exponential fitting is superior to routine mono-exponential fitting, and may thus provide improved predictive metrics of sleep continuity. However, because a single night of sleep contains insufficient transitions to characterize these dynamics, extended monitoring of sleep, probably at home, would be necessary for individualized clinical application.
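
    The contrast between mono- and multi-exponential descriptions of bout durations can be sketched with a least-squares fit of the empirical survival curve; the bout durations below are synthetic and the parameter values illustrative, not the SHHS data or the authors' fitting procedure.

      # Mono- vs bi-exponential fits of a bout-duration survival curve.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(2)
      # Synthetic "NREM bout" durations (minutes): a mix of short and long bouts
      durations = np.concatenate([rng.exponential(1.5, 3000),
                                  rng.exponential(12.0, 1000)])

      t = np.sort(durations)
      survival = 1.0 - np.arange(1, t.size + 1) / t.size   # empirical S(t)

      def mono_exp(t, tau):
          return np.exp(-t / tau)

      def bi_exp(t, a, tau1, tau2):
          return a * np.exp(-t / tau1) + (1 - a) * np.exp(-t / tau2)

      p_mono, _ = curve_fit(mono_exp, t, survival, p0=[5.0])
      p_bi, _ = curve_fit(bi_exp, t, survival, p0=[0.7, 1.0, 10.0],
                          bounds=([0, 0.01, 0.01], [1, 100, 100]))

      sse_mono = np.sum((survival - mono_exp(t, *p_mono)) ** 2)
      sse_bi = np.sum((survival - bi_exp(t, *p_bi)) ** 2)
      print("mono tau:", p_mono, "SSE:", sse_mono)
      print("bi (a, tau1, tau2):", p_bi, "SSE:", sse_bi)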

  14. Stochastic processes in the social sciences: Markets, prices and wealth distributions

    NASA Astrophysics Data System (ADS)

    Romero, Natalia E.

    The present work uses statistical mechanics tools to investigate the dynamics of markets, prices, trades and wealth distribution. We studied the evolution of market dynamics in different stages of historical development by analyzing commodity prices from two distinct periods: ancient Babylon, and medieval and early modern England. We find that the first-digit distributions of both Babylon and England commodity prices follow Benford's law, indicating that the data represent empirical observations typically arising from a free market. Further, we find that the normalized prices of both Babylon and England agricultural commodities are characterized by stretched exponential distributions, and exhibit persistent correlations of a power law type over long periods of up to several centuries, in contrast to contemporary markets. Our findings suggest that similar market interactions may underlie the dynamics of ancient agricultural commodity prices, and that these interactions may remain stable across centuries. To further investigate the dynamics of markets we present the analogy between transfers of money between individuals and the transfer of energy through particle collisions by means of the kinetic theory of gases. We introduce a theoretical framework of how the micro rules of trading lead to the emergence of income and wealth distribution. In particular, we study the effects of different types of distribution of savings/investments among individuals in a society and different welfare/subsidies redistribution policies. Results show that, when savings propensities are considered, the models approach empirical distributions of wealth quite well; the effect of redistribution better captures specific features of the distributions which earlier models failed to capture; moreover, the models still preserve the exponential decay observed in empirical income distributions reported by tax data and surveys.
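
    The first-digit comparison mentioned above follows Benford's law, P(d) = log10(1 + 1/d). A minimal check is sketched below on synthetic, scale-spanning prices; the Babylonian and English price series themselves are not reproduced here.

      # First-digit (Benford) check of a price series.
      import numpy as np

      def first_digits(values):
          v = np.abs(np.asarray(values, dtype=float))
          v = v[v > 0]
          return (v / 10.0 ** np.floor(np.log10(v))).astype(int)  # leading digit

      def benford_expected(d):
          return np.log10(1.0 + 1.0 / d)

      rng = np.random.default_rng(3)
      prices = np.exp(rng.normal(0.0, 2.0, 50_000))   # synthetic, spans many scales

      digits = first_digits(prices)
      observed = np.array([(digits == d).mean() for d in range(1, 10)])
      expected = benford_expected(np.arange(1, 10))
      for d in range(1, 10):
          print(f"digit {d}: observed {observed[d - 1]:.3f}  Benford {expected[d - 1]:.3f}")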

  15. Frequency Distribution of Seismic Intensity in Japan between 1950 and 2009

    NASA Astrophysics Data System (ADS)

    Kato, M.; Kohayakawa, Y.

    2012-12-01

    JMA Seismic Intensity is an index of seismic ground motion which is frequently used and reported in the media. While it is always difficult to represent complex ground motion with one index, the fact that it is widely accepted in society makes the use of JMA Seismic Intensity preferable when seismologists communicate with the public and discuss hazard assessment and risk management. With the introduction of JMA Instrumental Intensity in 1996, the number of seismic intensity observation sites has substantially increased and the spatial coverage has improved vastly. Together with a long history of non-instrumental intensity records, the intensity data represent some aspects of the seismic ground motion in Japan. We investigate characteristics of seismic ground motion between 1950 and 2009 utilizing the JMA Seismic Intensity Database. Specifically, we are interested in the frequency distribution of intensity recordings. Observations of large intensity are rare compared to those of small intensity, and previous studies such as Ikegami [1961] demonstrated that the frequency distribution of observed intensity obeys an exponential law, which is equivalent to the Ishimoto-Iida law [Ishimoto & Iida, 1939]. Such behavior could be used to empirically construct probabilistic seismic hazard maps [e.g., Kawasumi, 1951]. For the recent instrumental intensity data as well as pre-instrumental data, we are able to confirm that the Ishimoto-Iida law explains the observations. The exponent of the Ishimoto-Iida law, i.e., the slope of the exponential law in a semi-log plot, is approximately 0.5. At stations with long recordings, there is no apparent difference between pre-instrumental and instrumental intensities when the Ishimoto-Iida law is used as a measure. The average numbers of intensity reports per year and the exponents of the frequency distribution curve vary regionally, and local seismicity is apparently the controlling factor. The observed numbers of large intensities are slightly smaller than those extrapolated from the numbers of small intensities assuming the exponential relation.
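
    The exponential (Ishimoto-Iida type) frequency law log10 N(I) = a − bI with b ≈ 0.5 can be fit by a straight line in semi-log coordinates; the counts used below are invented for illustration only.

      # Fit of the exponential frequency-of-intensity law in semi-log space.
      import numpy as np

      intensity = np.arange(1, 7)                            # JMA intensity classes 1..6
      counts = np.array([12000, 4100, 1300, 390, 130, 45])   # hypothetical counts

      slope, intercept = np.polyfit(intensity, np.log10(counts), 1)
      print(f"exponent of the exponential law: {-slope:.2f}")   # close to 0.5 here

      # Count at intensity 7 extrapolated under the exponential relation
      print("extrapolated N(7):", 10 ** (intercept + slope * 7))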

  16. Wigner functions for evanescent waves.

    PubMed

    Petruccelli, Jonathan C; Tian, Lei; Oh, Se Baek; Barbastathis, George

    2012-09-01

    We propose phase space distributions, based on an extension of the Wigner distribution function, to describe fields of any state of coherence that contain evanescent components emitted into a half-space. The evanescent components of the field are described in an optical phase space of spatial position and complex-valued angle. Behavior of these distributions upon propagation is also considered, where the rapid decay of the evanescent components is associated with the exponential decay of the associated phase space distributions. To demonstrate the structure and behavior of these distributions, we consider the fields generated from total internal reflection of a Gaussian Schell-model beam at a planar interface.

  17. Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.

    PubMed

    Zhang, Yue; Berhane, Kiros

    2016-01-01

    We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family for taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children Health Study (CHS) to jointly model questionnaire based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.

  18. Avalanche Analysis from Multielectrode Ensemble Recordings in Cat, Monkey, and Human Cerebral Cortex during Wakefulness and Sleep

    PubMed Central

    Dehghani, Nima; Hatsopoulos, Nicholas G.; Haga, Zach D.; Parker, Rebecca A.; Greger, Bradley; Halgren, Eric; Cash, Sydney S.; Destexhe, Alain

    2012-01-01

    Self-organized critical states are found in many natural systems, from earthquakes to forest fires; they have also been observed in neural systems, particularly in neuronal cultures. However, the presence of critical states in the awake brain remains controversial. Here, we compared avalanche analyses performed on different in vivo preparations during wakefulness, slow-wave sleep, and REM sleep, using high density electrode arrays in cat motor cortex (96 electrodes), monkey motor cortex and premotor cortex and human temporal cortex (96 electrodes) in epileptic patients. In neuronal avalanches defined from units (up to 160 single units), the size of avalanches never clearly scaled as a power law, but rather scaled exponentially or displayed intermediate scaling. We also analyzed the dynamics of local field potentials (LFPs) and in particular LFP negative peaks (nLFPs) among the different electrodes (up to 96 sites in temporal cortex or up to 128 sites in adjacent motor and premotor cortices). In this case, the avalanches defined from nLFPs displayed power-law scaling in double logarithmic representations, as reported previously in monkey. However, avalanches defined from positive LFP (pLFP) peaks, which are less directly related to neuronal firing, also displayed apparent power-law scaling. Closer examination of this scaling using the more reliable cumulative distribution function (CDF) and other rigorous statistical measures did not confirm power-law scaling. The same pattern was seen for cat, monkey, and human, as well as for different brain states of wakefulness and sleep. We also tested other alternative distributions. Multiple exponential fitting yielded optimal fits of the avalanche dynamics with bi-exponential distributions. Collectively, these results show no clear evidence for power-law scaling or self-organized critical states in the awake and sleeping brain of mammals, from cat to man. PMID:22934053
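
    The kind of distribution comparison described above can be outlined with maximum-likelihood fits of a power law and an exponential to avalanche sizes above a threshold (continuous forms); the data here are synthetic, and this is not the statistical pipeline used in the paper.

      # Power-law vs exponential description of avalanche sizes above x_min.
      import numpy as np

      rng = np.random.default_rng(4)
      x_min = 1.0
      sizes = x_min + rng.exponential(3.0, 5000)     # synthetic avalanche sizes

      # Power law p(x) = ((alpha - 1) / x_min) * (x / x_min)**(-alpha), x >= x_min
      alpha_hat = 1.0 + sizes.size / np.sum(np.log(sizes / x_min))
      ll_pow = (sizes.size * np.log((alpha_hat - 1.0) / x_min)
                - alpha_hat * np.sum(np.log(sizes / x_min)))

      # Shifted exponential p(x) = lam * exp(-lam * (x - x_min)), x >= x_min
      lam_hat = 1.0 / np.mean(sizes - x_min)
      ll_exp = sizes.size * np.log(lam_hat) - lam_hat * np.sum(sizes - x_min)

      print(f"alpha_hat = {alpha_hat:.2f}, logL(power law) = {ll_pow:.1f}")
      print(f"lambda_hat = {lam_hat:.2f}, logL(exponential) = {ll_exp:.1f}")
      # The higher log-likelihood (here the exponential, by construction)
      # indicates the better of the two candidate fits.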

  19. Markov chains at the interface of combinatorics, computing, and statistical physics

    NASA Astrophysics Data System (ADS)

    Streib, Amanda Pascoe

    The fields of statistical physics, discrete probability, combinatorics, and theoretical computer science have converged around efforts to understand random structures and algorithms. Recent activity in the interface of these fields has enabled tremendous breakthroughs in each domain and has supplied a new set of techniques for researchers approaching related problems. This thesis makes progress on several problems in this interface whose solutions all build on insights from multiple disciplinary perspectives. First, we consider a dynamic growth process arising in the context of DNA-based self-assembly. The assembly process can be modeled as a simple Markov chain. We prove that the chain is rapidly mixing for large enough bias in regions of Z^d. The proof uses a geometric distance function and a variant of path coupling in order to handle distances that can be exponentially large. We also provide the first results in the case of fluctuating bias, where the bias can vary depending on the location of the tile, which arises in the nanotechnology application. Moreover, we use intuition from statistical physics to construct a choice of the biases for which the Markov chain M_mon requires exponential time to converge. Second, we consider a related problem regarding the convergence rate of biased permutations that arises in the context of self-organizing lists. The Markov chain M_nn in this case is a nearest-neighbor chain that allows adjacent transpositions, and the rate of these exchanges is governed by various input parameters. It was conjectured that the chain is always rapidly mixing when the inversion probabilities are positively biased, i.e., we put nearest neighbor pair x < y in order with bias 1/2 ≤ p_xy ≤ 1 and out of order with bias 1 - p_xy. The Markov chain M_mon was known to have connections to a simplified version of this biased card-shuffling. We provide new connections between M_nn and M_mon by using simple combinatorial bijections, and we prove that M_nn is always rapidly mixing for two general classes of positively biased {p_xy}. More significantly, we also prove that the general conjecture is false by exhibiting values for the p_xy, with 1/2 ≤ p_xy ≤ 1 for all x < y, but for which the transposition chain will require exponential time to converge. Finally, we consider a model of colloids, which are binary mixtures of molecules with one type of molecule suspended in another. It is believed that at low density typical configurations will be well-mixed throughout, while at high density they will separate into clusters. This clustering has proved elusive to verify, since all local sampling algorithms are known to be inefficient at high density, and in fact a new nonlocal algorithm was recently shown to require exponential time in some cases. We characterize the high and low density phases for a general family of discrete interfering binary mixtures by showing that they exhibit a "clustering property" at high density and not at low density. The clustering property states that there will be a region that has very high area, very small perimeter, and high density of one type of molecule. Special cases of interfering binary mixtures include the Ising model at fixed magnetization and independent sets.
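
    The biased nearest-neighbour transposition dynamics discussed above can be sketched as follows; the heat-bath update and the uniform choice of a position are one reasonable reading of such a chain, not necessarily the thesis's exact definition of M_nn.

      # One step of a biased adjacent-transposition chain on permutations:
      # pick an adjacent pair; the smaller element x ends up before the larger
      # element y with probability p[(x, y)], and after it otherwise.
      import random

      def step(perm, p):
          i = random.randrange(len(perm) - 1)
          a, b = perm[i], perm[i + 1]
          x, y = min(a, b), max(a, b)
          if random.random() < p[(x, y)]:        # put x before y
              perm[i], perm[i + 1] = x, y
          else:                                  # put y before x
              perm[i], perm[i + 1] = y, x
          return perm

      # Positively biased example (all p_xy >= 1/2) on 5 elements
      p = {(x, y): 0.7 for x in range(5) for y in range(x + 1, 5)}
      perm = [4, 3, 2, 1, 0]
      random.seed(0)
      for _ in range(10_000):
          step(perm, p)
      print(perm)   # with p_xy > 1/2 the chain drifts towards sorted order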

  20. Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.

    PubMed

    Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng

    2014-06-01

    Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross is found to be a function of flow momentum ratio that defines a junction flow distribution pattern and the degree of departure from complete mixing. Corresponding analytical solutions are also validated using computational-fluid-dynamics (CFD) simulations. Second, the analytical mixing model is further extended to double-Tee junctions. Correspondingly the flow distribution factor is modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on flow momentum ratio and connection pipe length, whereas the mixing at double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport. However, it is noted that these pipe connections are widely, but incorrectly, simplified as cross junctions of assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network. Published by Elsevier Ltd.

  1. Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity

    PubMed Central

    Englehardt, James D.

    2015-01-01

    Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are distributed exponential. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263
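
    The type of model comparison referred to above (Weibull versus lognormal) can be sketched with maximum-likelihood fits and AIC; the severity values below are synthetic stand-ins, not the datasets analysed in the paper.

      # Weibull vs lognormal fits of a non-negative severity variable.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      severity = rng.weibull(0.8, 2000) * 3.0        # synthetic severities

      def aic(loglik, k):
          return 2 * k - 2 * loglik

      c, loc_w, scale_w = stats.weibull_min.fit(severity, floc=0)
      ll_w = np.sum(stats.weibull_min.logpdf(severity, c, loc_w, scale_w))

      s, loc_l, scale_l = stats.lognorm.fit(severity, floc=0)
      ll_l = np.sum(stats.lognorm.logpdf(severity, s, loc_l, scale_l))

      print(f"Weibull:   shape={c:.2f} scale={scale_w:.2f} AIC={aic(ll_w, 2):.1f}")
      print(f"Lognormal: sigma={s:.2f} scale={scale_l:.2f} AIC={aic(ll_l, 2):.1f}")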

  2. Comment on Pisarenko et al., "Characterization of the Tail of the Distribution of Earthquake Magnitudes by Combining the GEV and GPD Descriptions of Extreme Value Theory"

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2016-02-01

    In this short note, I comment on the research of Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) regarding extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD) as an asymptotic model for the block maxima of a random variable and the generalized Pareto distribution (GPD) as a model for the peaks over threshold (POT) of the same random variable is presented more clearly. Inappropriately, Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) have neglected to note that the approximations by GEVD and GPD work only asymptotically in most cases. This is particularly the case with the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of extreme value theory and statistics do not work well for truncated exponential distributions. Consequently, these classical methods should not be used for the estimation of the upper bound magnitude and corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. and propose alternatives. I argue why GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process as assumed by Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014). The crucial point of earthquake magnitudes is the poor convergence of their tail distribution to the GPD, and not the earthquake process over time.

  3. The discrete Laplace exponential family and estimation of Y-STR haplotype frequencies.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2013-07-21

    Estimating haplotype frequencies is important in e.g. forensic genetics, where the frequencies are needed to calculate the likelihood ratio for the evidential weight of a DNA profile found at a crime scene. Estimation is naturally based on a population model, motivating the investigation of the Fisher-Wright model of evolution for haploid lineage DNA markers. An exponential family (a class of probability distributions that is well understood in probability theory such that inference is easily made by using existing software) called the 'discrete Laplace distribution' is described. We illustrate how well the discrete Laplace distribution approximates a more complicated distribution that arises by investigating the well-known population genetic Fisher-Wright model of evolution by a single-step mutation process. It was shown how the discrete Laplace distribution can be used to estimate haplotype frequencies for haploid lineage DNA markers (such as Y-chromosomal short tandem repeats), which in turn can be used to assess the evidential weight of a DNA profile found at a crime scene. This was done by making inference in a mixture of multivariate, marginally independent, discrete Laplace distributions using the EM algorithm to estimate the probabilities of membership of a set of unobserved subpopulations. The discrete Laplace distribution can be used to estimate haplotype frequencies with lower prediction error than other existing estimators. Furthermore, the calculations could be performed on a normal computer. This method was implemented in the freely available open source software R that is supported on Linux, MacOS and MS Windows. Copyright © 2013 Elsevier Ltd. All rights reserved.
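
    The discrete Laplace distribution on the integers, as commonly defined, has pmf P(Y = y) = ((1 − p)/(1 + p)) p^|y| with 0 < p < 1. The sketch below evaluates this pmf and samples from it as a difference of two geometric variables; it is not the EM mixture fit used in the paper, and the parameter value is illustrative.

      # Discrete Laplace pmf and sampling (difference of two geometric counts).
      import numpy as np

      def dlaplace_pmf(y, p):
          y = np.asarray(y)
          return (1.0 - p) / (1.0 + p) * p ** np.abs(y)

      def dlaplace_rvs(p, size, rng):
          # If G1, G2 are i.i.d. counts of failures before the first success
          # with success probability 1 - p, then G1 - G2 is discrete Laplace(p).
          g1 = rng.geometric(1.0 - p, size) - 1
          g2 = rng.geometric(1.0 - p, size) - 1
          return g1 - g2

      rng = np.random.default_rng(6)
      p = 0.4
      print(dlaplace_pmf(np.arange(-3, 4), p))
      print("pmf sums to", dlaplace_pmf(np.arange(-200, 201), p).sum())

      sample = dlaplace_rvs(p, 100_000, rng)
      print("theoretical variance 2p/(1-p)^2 =", 2 * p / (1 - p) ** 2,
            " sample variance =", sample.var())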

  4. Power laws in citation distributions: evidence from Scopus.

    PubMed

    Brzezinski, Michal

    Modeling distributions of citations to scientific papers is crucial for understanding how science develops. However, there is a considerable empirical controversy on which statistical model fits the citation distributions best. This paper is concerned with rigorous empirical detection of power-law behaviour in the distribution of citations received by the most highly cited scientific papers. We have used a large, novel data set on citations to scientific papers published between 1998 and 2002 drawn from Scopus. The power-law model is compared with a number of alternative models using a likelihood ratio test. We have found that the power-law hypothesis is rejected for around half of the Scopus fields of science. For these fields of science, the Yule, power-law with exponential cut-off and log-normal distributions seem to fit the data better than the pure power-law model. On the other hand, when the power-law hypothesis is not rejected, it is usually empirically indistinguishable from most of the alternative models. The pure power-law model seems to be the best model only for the most highly cited papers in "Physics and Astronomy". Overall, our results seem to support theories implying that the most highly cited scientific papers follow the Yule, power-law with exponential cut-off or log-normal distribution. Our findings also suggest that power laws in citation distributions, when present, account only for a very small fraction of the published papers (less than 1% for most fields of science) and that the power-law scaling parameter (exponent) is substantially higher (from around 3.2 to around 4.7) than found in the older literature.
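
    The likelihood-ratio comparison described can be reproduced in outline with the third-party `powerlaw` package (Alstott, Bullmore and Plenz); the call pattern below assumes that package is installed, and the citation counts are synthetic rather than the Scopus data.

      # Power law vs lognormal for citation counts via the `powerlaw` package.
      import numpy as np
      import powerlaw

      rng = np.random.default_rng(7)
      citations = np.rint(np.exp(rng.normal(2.0, 1.2, 20_000))).astype(int)
      citations = citations[citations > 0]            # synthetic counts

      fit = powerlaw.Fit(citations, discrete=True)    # estimates x_min and alpha
      print("x_min =", fit.xmin, " alpha =", fit.power_law.alpha)

      # Positive R favours the first model; p is the significance of its sign
      R, p = fit.distribution_compare('power_law', 'lognormal')
      print("log-likelihood ratio R =", R, " p =", p)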

  5. Optimized open-flow mixing: insights from microbubble streaming

    NASA Astrophysics Data System (ADS)

    Rallabandi, Bhargav; Wang, Cheng; Guo, Lin; Hilgenfeldt, Sascha

    2015-11-01

    Microbubble streaming has been developed into a robust and powerful flow actuation technique in microfluidics. Here, we study it as a paradigmatic system for microfluidic mixing under a continuous throughput of fluid (open-flow mixing), providing a systematic optimization of the device parameters in this practically important situation. Focusing on two-dimensional advective stirring (neglecting diffusion), we show through numerical simulation and analytical theory that mixing in steady streaming vortices becomes ineffective beyond a characteristic time scale, necessitating the introduction of unsteadiness. By duty cycling the streaming, such unsteadiness is introduced in a controlled fashion, leading to exponential refinement of the advection structures. The rate of refinement is then optimized for particular parameters of the time modulation, i.e. a particular combination of times for which the streaming is turned "on" and "off". The optimized protocol can be understood theoretically using the properties of the streaming vortices and the throughput Poiseuille flow. We can thus infer simple design principles for practical open flow micromixing applications, consistent with experiments.

  6. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    PubMed

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling , or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  7. Numerical study of MHD nanofluid flow and heat transfer past a bidirectional exponentially stretching sheet

    NASA Astrophysics Data System (ADS)

    Ahmad, Rida; Mustafa, M.; Hayat, T.; Alsaedi, A.

    2016-06-01

    Recent advancements in nanotechnology have led to the discovery of new-generation coolants known as nanofluids. Nanofluids possess novel and unique characteristics which are fruitful in numerous cooling applications. The current work is undertaken to address the heat transfer in MHD three-dimensional flow of a magnetic nanofluid (ferrofluid) over a bidirectional exponentially stretching sheet. The base fluid is taken as water containing magnetite (Fe3O4) nanoparticles. An exponentially varying surface temperature distribution is accounted for. Problem formulation is presented through the Maxwell models for the effective electrical conductivity and effective thermal conductivity of the nanofluid. Similarity transformations give rise to a coupled non-linear differential system which is solved numerically. Appreciable growth in the convective heat transfer coefficient is observed when the nanoparticle volume fraction is increased. The temperature exponent parameter serves to enhance the heat transfer from the surface. Moreover, the skin friction coefficient is directly proportional to both the magnetic field strength and the nanoparticle volume fraction.

  9. Single-channel activations and concentration jumps: comparison of recombinant NR1a/NR2A and NR1a/NR2D NMDA receptors

    PubMed Central

    Wyllie, David J A; Béhé, Philippe; Colquhoun, David

    1998-01-01

    We have expressed recombinant NR1a/NR2A and NR1a/NR2D N-methyl-D-aspartate (NMDA) receptor channels in Xenopus oocytes and made recordings of single-channel and macroscopic currents in outside-out membrane patches. For each receptor type we measured (a) the individual single-channel activations evoked by low glutamate concentrations in steady-state recordings, and (b) the macroscopic responses elicited by brief concentration jumps with high agonist concentrations, and we explore the relationship between these two sorts of observation. Low concentration (5–100 nM) steady-state recordings of NR1a/NR2A and NR1a/NR2D single-channel activity generated shut-time distributions that were best fitted with a mixture of five and six exponential components, respectively. Individual activations of either receptor type were resolved as bursts of openings, which we refer to as ‘super-clusters’. During a single activation, NR1a/NR2A receptors were open for 36 % of the time, but NR1a/NR2D receptors were open for only 4 % of the time. For both, distributions of super-cluster durations were best fitted with a mixture of six exponential components. Their overall mean durations were 35.8 and 1602 ms, respectively. Steady-state super-clusters were aligned on their first openings and averaged. The average was well fitted by a sum of exponentials with time constants taken from fits to super-cluster length distributions. It is shown that this is what would be expected for a channel that shows simple Markovian behaviour. The current through NR1a/NR2A channels following a concentration jump from zero to 1 mM glutamate for 1 ms was well fitted by three exponential components with time constants of 13 ms (rising phase), 70 ms and 350 ms (decaying phase). Similar concentration jumps on NR1a/NR2D channels were well fitted by two exponentials with means of 45 ms (rising phase) and 4408 ms (decaying phase) components. During prolonged exposure to glutamate, NR1a/NR2A channels desensitized with a time constant of 649 ms, while NR1a/NR2D channels exhibited no apparent desensitization. We show that under certain conditions, the time constants for the macroscopic jump response should be the same as those for the distribution of super-cluster lengths, though the resolution of the latter is so much greater that it cannot be expected that all the components will be resolvable in a macroscopic current. Good agreement was found for jumps on NR1a/NR2D receptors, and for some jump experiments on NR1a/NR2A. However, the latter were rather variable and some were slower than predicted. Slow decays were associated with patches that had large currents. PMID:9625862

  10. None of the Above

    ERIC Educational Resources Information Center

    Ray, Mark

    2013-01-01

    The exponential influx of digital content and mobile devices into schools begs for school librarians to engage in discussions and decision making about the selection, classification, management, and distribution of content ranging from e-books to open educational resources. As information professionals, school librarians should channel their inner…

  11. Inhomogeneous growth of fluctuations of concentration of inertial particles in channel turbulence

    NASA Astrophysics Data System (ADS)

    Fouxon, Itzhak; Schmidt, Lukas; Ditlevsen, Peter; van Reeuwijk, Maarten; Holzner, Markus

    2018-06-01

    We study the growth of concentration fluctuations of weakly inertial particles in the turbulent channel flow starting with a smooth initial distribution. The steady-state concentration is singular and multifractal, so the growth describes the increasingly rugged structure of the distribution. We demonstrate that inhomogeneity influences the growth of concentration fluctuations profoundly. For homogeneous turbulence the growth is exponential and is fully determined by Kolmogorov-scale eddies. We derive lognormality of the statistics in this case. The growth exponents of the moments are proportional to the sum of Lyapunov exponents, which is quadratic in the small inertia of the particles. In contrast, for inhomogeneous turbulence the growth is linear in inertia. It involves correlations of inertial-range and viscous-scale eddies that turn the growth into a stretched exponential law with exponent three halves. We demonstrate using direct numerical simulations that the resulting growth rate can differ by orders of magnitude over the channel height. This strong variation might have relevance in the planetary boundary layer.

  12. Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.

  13. The exponential rise of induced seismicity with increasing stress levels in the Groningen gas field and its implications for controlling seismic risk

    NASA Astrophysics Data System (ADS)

    Bourne, S. J.; Oates, S. J.; van Elk, J.

    2018-06-01

    Induced seismicity typically arises from the progressive activation of recently inactive geological faults by anthropogenic activity. Faults are mechanically and geometrically heterogeneous, so their extremes of stress and strength govern the initial evolution of induced seismicity. We derive a statistical model of Coulomb stress failures and associated aftershocks within the tail of the distribution of fault stress and strength variations to show initial induced seismicity rates will increase as an exponential function of induced stress. Our model provides operational forecasts consistent with the observed space-time-magnitude distribution of earthquakes induced by gas production from the Groningen field in the Netherlands. These probabilistic forecasts also match the observed changes in seismicity following a significant and sustained decrease in gas production rates designed to reduce seismic hazard and risk. This forecast capability allows reliable assessment of alternative control options to better inform future induced seismic risk management decisions.
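
    The headline relation, an event rate that grows exponentially with the induced Coulomb stress and drives a Poisson count process, can be written schematically as λ(t) = λ0 exp(β ΔS(t)). The toy numbers below are hypothetical and are not the calibrated Groningen model.

      # Toy exponential rate-vs-stress seismicity forecast.
      import numpy as np

      beta = 15.0        # 1/MPa, hypothetical stress sensitivity
      rate0 = 1e-3       # events/yr at zero induced stress, hypothetical
      years = np.arange(1995, 2021)
      stress = 0.004 * (years - 1995)          # MPa, hypothetical induced stress

      rate = rate0 * np.exp(beta * stress)     # expected events per year
      rng = np.random.default_rng(8)
      counts = rng.poisson(rate)               # one simulated catalogue

      for y, r, n in zip(years[::5], rate[::5], counts[::5]):
          print(y, f"rate = {r:.3f}/yr", f"simulated count = {n}")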

  14. Controllable excitation of higher-order rogue waves in nonautonomous systems with both varying linear and harmonic external potentials

    NASA Astrophysics Data System (ADS)

    Jia, Heping; Yang, Rongcao; Tian, Jinping; Zhang, Wenmei

    2018-05-01

    The nonautonomous nonlinear Schrödinger (NLS) equation with both varying linear and harmonic external potentials is investigated and the semirational rogue wave (RW) solution is presented by similarity transformation. Based on the solution, the interactions between Peregrine soliton and breathers, and the controllability of the semirational RWs in periodic distribution and exponential decreasing nonautonomous systems with both linear and harmonic potentials are studied. It is found that the harmonic potential only influences the constraint condition of the semirational solution, the linear potential is related to the trajectory of the semirational RWs, while dispersion and nonlinearity determine the excitation position of the higher-order RWs. The higher-order RWs can be partly, completely and biperiodically excited in periodic distribution system and the diverse excited patterns can be generated for different parameter relations in exponential decreasing system. The results reveal that the excitation of the higher-order RWs can be controlled in the nonautonomous system by choosing dispersion, nonlinearity and external potentials.

  15. Comparison of Traditional and Open-Access Appointment Scheduling for Exponentially Distributed Service Time.

    PubMed

    Yan, Chongjun; Tang, Jiafu; Jiang, Bowen; Fung, Richard Y K

    2015-01-01

    This paper compares the performance measures of traditional appointment scheduling (AS) with those of an open-access appointment scheduling (OA-AS) system with exponentially distributed service time. A queueing model is formulated for the traditional AS system with no-show probability. The OA-AS models assume that all patients who call before the session begins will show up for the appointment on time. Two types of OA-AS systems are considered: with a same-session policy and with a same-or-next-session policy. Numerical results indicate that the superiority of OA-AS systems is not as obvious as those under deterministic scenarios. The same-session system has a threshold of relative waiting cost, after which the traditional system always has higher total costs, and the same-or-next-session system is always preferable, except when the no-show probability or the weight of patients' waiting is low. It is concluded that open-access policies can be viewed as alternative approaches to mitigate the negative effects of no-show patients.
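
    A minimal discrete-event sketch of a traditional schedule with exponentially distributed service times and a no-show probability is given below; the slot length, service rate and no-show rate are hypothetical, and this is not the paper's queueing formulation.

      # Single-server appointment session with exponential service times and
      # no-shows: report average patient waiting time and doctor idle time.
      import numpy as np

      def simulate_session(n_patients, slot, mean_service, p_noshow, rng):
          t = 0.0                     # doctor's clock (minutes)
          wait = idle = 0.0
          for k in range(n_patients):
              arrival = k * slot      # scheduled (and assumed punctual) arrival
              if rng.random() < p_noshow:
                  continue            # patient does not show up
              if t < arrival:
                  idle += arrival - t
                  t = arrival
              wait += t - arrival
              t += rng.exponential(mean_service)
          return wait / n_patients, idle

      rng = np.random.default_rng(9)
      runs = [simulate_session(16, slot=15.0, mean_service=13.0,
                               p_noshow=0.15, rng=rng) for _ in range(5000)]
      print("avg wait per scheduled patient (min):", np.mean([r[0] for r in runs]))
      print("avg doctor idle per session (min):  ", np.mean([r[1] for r in runs]))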

  16. Traction forces during collective cell motion.

    PubMed

    Gov, N S

    2009-08-01

    Collective motion of cell cultures is a process of great interest, as it occurs during morphogenesis, wound healing, and tumor metastasis. During these processes cell cultures move due to the traction forces induced by the individual cells on the surrounding matrix. A recent study [Trepat et al. (2009). Nat. Phys. 5, 426-430] measured for the first time the traction forces driving collective cell migration and found that they arise throughout the cell culture. The leading 5-10 rows of cells do play a major role in directing the motion of the rest of the culture by having a distinct outwards traction. Fluctuations in the traction forces are an order of magnitude larger than the resultant directional traction at the culture edge and, furthermore, have an exponential distribution. Such exponential distributions are observed for the sizes of adhesion domains within cells, the traction forces produced by single cells, and even in nonbiological nonequilibrium systems, such as sheared granular materials. We discuss these observations and their implications for our understanding of cellular flows within a continuous culture.

  17. Inclusive transverse momentum distributions of charged particles in diffractive and non-diffractive photoproduction at HERA

    NASA Astrophysics Data System (ADS)

    Derrick, M.; Krakauer, D.; Magill, S.; Mikunas, D.; Musgrave, B.; Repond, J.; Stanek, R.; Talaga, R. L.; Zhang, H.; Ayad, R.; Bari, G.; Basile, M.; Bellagamba, L.; Boscherini, D.; Bruni, A.; Bruni, G.; Bruni, P.; Romeo, G. Cara; Castellini, G.; Chiarini, M.; Cifarelli, L.; Cindolo, F.; Contin, A.; Corradi, M.; Gialas, I.; Giusti, P.; Iacobucci, G.; Laurenti, G.; Levi, G.; Margotti, A.; Massam, T.; Nania, R.; Nemoz, C.; Palmonari, F.; Polini, A.; Sartorelli, G.; Timellini, R.; Garcia, Y. Zamora; Zichichi, A.; Bargende, A.; Crittenden, J.; Desch, K.; Diekmann, B.; Doeker, T.; Eckert, M.; Feld, L.; Frey, A.; Geerts, M.; Geitz, G.; Grothe, M.; Haas, T.; Hartmann, H.; Haun, D.; Heinloth, K.; Hilger, E.; Jakob, H.-P.; Katz, U. F.; Mari, S. M.; Mass, A.; Mengel, S.; Mollen, J.; Paul, E.; Rembser, Ch.; Schattevoy, R.; Schramm, D.; Stamm, J.; Wedemeyer, R.; Campbell-Robson, S.; Cassidy, A.; Dyce, N.; Foster, B.; George, S.; Gilmore, R.; Heath, G. P.; Heath, H. F.; Llewellyn, T. J.; Morgado, C. J. S.; Norman, D. J. P.; O'Mara, J. A.; Tapper, R. J.; Wilson, S. S.; Yoshida, R.; Rau, R. R.; Arneodo, M.; Iannotti, L.; Schioppa, M.; Susinno, G.; Bernstein, A.; Caldwell, A.; Cartiglia, N.; Parsons, J. A.; Ritz, S.; Sciulli, F.; Straub, P. B.; Wai, L.; Yang, S.; Zhu, Q.; Borzemski, P.; Chwastowski, J.; Eskreys, A.; Piotrzkowski, K.; Zachara, M.; Zawiejski, L.; Adamczyk, L.; Bednarek, B.; Jeleń, K.; Kisielewska, D.; Kowalski, T.; Rulikowska-Zarębska, E.; Suszycki, L.; Zając, J.; Kotański, A.; Przybycień, M.; Bauerdick, L. A. T.; Behrens, U.; Beier, H.; Bienlein, J. K.; Coldewey, C.; Deppe, O.; Desler, K.; Drews, G.; Flasiński, M.; Gilkinson, D. J.; Glasman, C.; Göttlicher, P.; Große-Knetter, J.; Gutjahr, B.; Hain, W.; Hasell, D.; Heßling, H.; Iga, Y.; Joos, P.; Kasemann, M.; Klanner, R.; Koch, W.; Köpke, L.; Kötz, U.; Kowalski, H.; Labs, L.; Ladage, A.; Löhr, B.; Löwe, M.; Lüke, D.; Mańczak, O.; Monteiro, T.; Ng, J. S. T.; Nickel, S.; Notz, D.; Ohrenberg, K.; Roco, M.; Rohde, M.; Roldán, J.; Schneekloth, U.; Schulz, W.; Selonke, F.; Stiliaris, E.; Surrow, B.; Voß, T.; Westphal, D.; Wolf, G.; Youngman, C.; Zhou, J. F.; Grabosch, H. J.; Kharchilava, A.; Leich, A.; Mattingly, M. C. K.; Meyer, A.; Schlenstedt, S.; Wulff, N.; Barbagli, G.; Pelfer, P.; Anzivino, G.; Maccarrone, G.; de Pasquale, S.; Votano, L.; Bamberger, A.; Eisenhardt, S.; Freidhof, A.; Söldner-Rembold, S.; Schroeder, J.; Trefzger, T.; Brook, N. H.; Bussey, P. J.; Doyle, A. T.; Fleck, J. I.; Saxon, D. H.; Utley, M. L.; Wilson, A. S.; Dannemann, A.; Holm, U.; Horstmann, D.; Neumann, T.; Sinkus, R.; Wick, K.; Badura, E.; Burow, B. D.; Hagge, L.; Lohrmann, E.; Mainusch, J.; Milewski, J.; Nakahata, M.; Pavel, N.; Poelz, G.; Schott, W.; Zetsche, F.; Bacon, T. C.; Butterworth, I.; Gallo, E.; Harris, V. L.; Hung, B. Y. H.; Long, K. R.; Miller, D. B.; Morawitz, P. P. O.; Prinias, A.; Sedgbeer, J. K.; Whitfield, A. F.; Mallik, U.; McCliment, E.; Wang, M. Z.; Wang, S. M.; Wu, J. T.; Zhang, Y.; Cloth, P.; Filges, D.; An, S. H.; Hong, S. M.; Nam, S. W.; Park, S. K.; Suh, M. H.; Yon, S. H.; Imlay, R.; Kartik, S.; Kim, H.-J.; McNeil, R. R.; Metcalf, W.; Nadendla, V. K.; Barreiro, F.; Cases, G.; Graciani, R.; Hernández, J. M.; Hervás, L.; Labarga, L.; Del Peso, J.; Puga, J.; Terron, J.; de Trocóniz, J. F.; Smith, G. R.; Corriveau, F.; Hanna, D. S.; Hartmann, J.; Hung, L. W.; Lim, J. N.; Matthews, C. G.; Patel, P. M.; Sinclair, L. E.; Stairs, D. G.; St. Laurent, M.; Ullmann, R.; Zacek, G.; Bashkirov, V.; Dolgoshein, B. A.; Stifutkin, A.; Bashindzhagyan, G. 
L.; Ermolov, P. F.; Gladilin, L. K.; Golubkov, Y. A.; Kobrin, V. D.; Kuzmin, V. A.; Proskuryakov, A. S.; Savin, A. A.; Shcheglova, L. M.; Solomin, A. N.; Zotov, N. P.; Botje, M.; Chlebana, F.; Dake, A.; Engelen, J.; de Kamps, M.; Kooijman, P.; Kruse, A.; Tiecke, H.; Verkerke, W.; Vreeswijk, M.; Wiggers, L.; de Wolf, E.; van Woudenberg, R.; Acosta, D.; Bylsma, B.; Durkin, L. S.; Honscheid, K.; Li, C.; Ling, T. Y.; McLean, K. W.; Murray, W. N.; Park, I. H.; Romanowski, T. A.; Seidlein, R.; Bailey, D. S.; Blair, G. A.; Byrne, A.; Cashmore, R. J.; Cooper-Sarkar, A. M.; Daniels, D.; Devenish, R. C. E.; Harnew, N.; Lancaster, M.; Luffman, P. E.; Lindemann, L.; McFall, J. D.; Nath, C.; Noyes, V. A.; Quadt, A.; Uijterwaal, H.; Walczak, R.; Wilson, F. F.; Yip, T.; Abbiendi, G.; Bertolin, A.; Brugnera, R.; Carlin, R.; Dal Corso, F.; de Giorgi, M.; Dosselli, U.; Limentani, S.; Morandin, M.; Posocco, M.; Stanco, L.; Stroili, R.; Voci, C.; Bulmahn, J.; Butterworth, J. M.; Feild, R. G.; Oh, B. Y.; Whitmore, J. J.; D'Agostini, G.; Marini, G.; Nigro, A.; Tassi, E.; Hart, J. C.; McCubbin, N. A.; Prytz, K.; Shah, T. P.; Short, T. L.; Barberis, E.; Dubbs, T.; Heusch, C.; van Hook, M.; Hubbard, B.; Lockman, W.; Rahn, J. T.; Sadrozinski, H. F.-W.; Seiden, A.; Biltzinger, J.; Seifert, R. J.; Schwarzer, O.; Walenta, A. H.; Zech, G.; Abramowicz, H.; Briskin, G.; Dagan, S.; Levy, A.; Hasegawa, T.; Hazumi, M.; Ishii, T.; Kuze, M.; Mine, S.; Nagasawa, Y.; Nakao, M.; Suzuki, I.; Tokushuku, K.; Yamada, S.; Yamazaki, Y.; Chiba, M.; Hamatsu, R.; Hirose, T.; Homma, K.; Kitamura, S.; Nakamitsu, Y.; Yamauchi, K.; Cirio, R.; Costa, M.; Ferrero, M. I.; Lamberti, L.; Maselli, S.; Peroni, C.; Sacchi, R.; Solano, A.; Staiano, A.; Dardo, M.; Bailey, D. C.; Bandyopadhyay, D.; Benard, F.; Brkic, M.; Crombie, M. B.; Gingrich, D. M.; Hartner, G. F.; Joo, K. K.; Levman, G. M.; Martin, J. F.; Orr, R. S.; Sampson, C. R.; Teuscher, R. J.; Catterall, C. D.; Jones, T. W.; Kaziewicz, P. B.; Lane, J. B.; Saunders, R. L.; Shulman, J.; Blankenship, K.; Lu, B.; Mo, L. W.; Bogusz, W.; Charchula, K.; Ciborowski, J.; Gajewski, J.; Grzelak, G.; Kasprzak, M.; Krzyżanowski, M.; Muchorowski, K.; Nowak, R. J.; Pawlak, J. M.; Tymieniecka, T.; Wróblewski, A. K.; Zakrzewski, J. A.; Żarnecki, A. F.; Adamus, M.; Eisenberg, Y.; Karshon, U.; Revel, D.; Zer-Zion, D.; Ali, I.; Badgett, W. F.; Behrens, B.; Dasu, S.; Fordham, C.; Foudas, C.; Goussiou, A.; Loveless, R. J.; Reeder, D. D.; Silverstein, S.; Smith, W. H.; Vaiciulis, A.; Wodarczyk, M.; Tsurugai, T.; Bhadra, S.; Cardy, M. L.; Fagerstroem, C.-P.; Frisken, W. R.; Furutani, K. M.; Khakzad, M.; Schmidke, W. B.

    1995-06-01

    Inclusive transverse momentum spectra of charged particles in photoproduction events in the laboratory pseudorapidity range -1.2 < η < 1.4 have been measured up to p_T = 8 GeV using the ZEUS detector. Diffractive and non-diffractive reactions have been selected with an average γp centre-of-mass (c.m.) energy of ⟨W⟩ = 180 GeV. For diffractive reactions, the p_T spectra of the photon dissociation events have been measured in two intervals of the dissociated photon mass with mean values ⟨M_X⟩ = 5 GeV and 10 GeV. The inclusive transverse momentum spectra fall exponentially in the low-p_T region. The non-diffractive data show a pronounced high-p_T tail departing from the exponential shape. The p_T distributions are compared to lower-energy photoproduction data and to hadron-hadron collisions at a similar c.m. energy. The data are also compared to the results of a next-to-leading order QCD calculation.

  18. Work fluctuations for Bose particles in grand canonical initial states.

    PubMed

    Yi, Juyeon; Kim, Yong Woon; Talkner, Peter

    2012-05-01

    We consider bosons in a harmonic trap and investigate the fluctuations of the work performed by an adiabatic change of the trap curvature. Depending on the reservoir conditions such as temperature and chemical potential that provide the initial equilibrium state, the exponentiated work average (EWA) defined in the context of the Crooks relation and the Jarzynski equality may diverge if the trap becomes wider. We investigate how the probability distribution function (PDF) of the work signals this divergence. It is shown that at low temperatures the PDF is highly asymmetric with a steep fall-off at one side and an exponential tail at the other side. For high temperatures it is closer to a symmetric distribution approaching a Gaussian form. These properties of the work PDF are discussed in relation to the convergence of the EWA and to the existence of the hypothetical equilibrium state to which those thermodynamic potential changes refer that enter both the Crooks relation and the Jarzynski equality.

  19. A computer program for thermal radiation from gaseous rocket exhaust plumes (GASRAD)

    NASA Technical Reports Server (NTRS)

    Reardon, J. E.; Lee, Y. C.

    1979-01-01

    A computer code is presented for predicting incident thermal radiation from defined plume gas properties in either axisymmetric or cylindrical coordinate systems. The radiation model is a statistical band model for exponential line strength distribution with Lorentz/Doppler line shapes for 5 gaseous species (H2O, CO2, CO, HCl and HF) and an approximate (non-scattering) treatment of carbon particles. The Curtis-Godson approximation is used for inhomogeneous gases, but a subroutine is available for using Young's intuitive derivative method for H2O with Lorentz line shape and exponentially-tailed-inverse line strength distribution. The geometry model provides integration over a hemisphere with up to 6 individually oriented identical axisymmetric plumes or a single 3-D plume. Shading surfaces may be used in any of 7 shapes, and a conical limit may be defined for the plume to set individual line-of-sight limits. Intermediate coordinate systems may be specified to simplify input of plumes and shading surfaces.

  20. The Dynamics of Power laws: Fitness and Aging in Preferential Attachment Trees

    NASA Astrophysics Data System (ADS)

    Garavaglia, Alessandro; van der Hofstad, Remco; Woeginger, Gerhard

    2017-09-01

    Continuous-time branching processes describe the evolution of a population whose individuals generate a random number of children according to a birth process. Such branching processes can be used to understand preferential attachment models in which the birth rates are linear functions. We are motivated by citation networks, where power-law citation counts are observed as well as aging in the citation patterns. To model this, we introduce fitness and age-dependence in these birth processes. The multiplicative fitness moderates the rate at which children are born, while the aging is integrable, so that individuals receive a finite number of children in their lifetime. We show the existence of a limiting degree distribution for such processes. In the preferential attachment case, where fitness and aging are absent, this limiting degree distribution is known to have power-law tails. We show that the limiting degree distribution has exponential tails for bounded fitnesses in the presence of integrable aging, while the power-law tail is restored when integrable aging is combined with fitness with unbounded support with at most exponential tails. In the absence of integrable aging, such processes are explosive.

  1. Statistics of Optical Coherence Tomography Data From Human Retina

    PubMed Central

    de Juan, Joaquín; Ferrone, Claudia; Giannini, Daniela; Huang, David; Koch, Giorgio; Russo, Valentina; Tan, Ou; Bruni, Carlo

    2010-01-01

    Optical coherence tomography (OCT) has recently become one of the primary methods for noninvasive probing of the human retina. The pseudoimage formed by OCT (the so-called B-scan) varies probabilistically across pixels due to complexities in the measurement technique. Hence, sensitive automatic procedures of diagnosis using OCT may exploit statistical analysis of the spatial distribution of reflectance. In this paper, we perform a statistical study of retinal OCT data. We find that the stretched exponential probability density function can model well the distribution of intensities in OCT pseudoimages. Moreover, we show a small but significant correlation between neighboring pixels when measuring OCT intensities with pixels of about 5 µm. We then develop a simple joint probability model for the OCT data consistent with known retinal features. This model fits well the stretched exponential distribution of intensities and their spatial correlation. In normal retinas, fit parameters of this model are relatively constant along retinal layers, but vary across layers. However, in retinas with diabetic retinopathy, large spikes of parameter modulation interrupt the constancy within layers, exactly where pathologies are visible. We argue that these results give hope for improvement in statistical pathology-detection methods even when the disease is in its early stages. PMID:20304733
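
    The stretched-exponential density p(I) = exp(−(I/a)^b) / (a Γ(1 + 1/b)) on I ≥ 0 can be fit to a normalized intensity histogram by least squares, as sketched below on synthetic data; this is not the estimation procedure used in the paper.

      # Least-squares fit of a stretched-exponential density to a histogram.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import gamma

      rng = np.random.default_rng(10)
      intensity = rng.weibull(0.7, 100_000) * 40.0     # synthetic stand-in data

      hist, edges = np.histogram(intensity, bins=100, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])

      def stretched_exp_pdf(x, a, b):
          return np.exp(-(x / a) ** b) / (a * gamma(1.0 + 1.0 / b))

      (a_hat, b_hat), _ = curve_fit(stretched_exp_pdf, centers, hist,
                                    p0=[20.0, 0.8],
                                    bounds=([1e-3, 0.1], [1e3, 5.0]))
      print(f"fitted scale a = {a_hat:.1f}, stretching exponent b = {b_hat:.2f}")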

  2. Goodness of fit of probability distributions for sightings as species approach extinction.

    PubMed

    Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael

    2009-04-01

    Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provide adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
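
    The probability plot correlation coefficient (PPCC) used above can be obtained from the r value returned by scipy.stats.probplot for a candidate distribution; the sighting record below is synthetic, and the L-moment diagrams and field-significance tests of the paper are not reproduced.

      # PPCC of a sighting record against uniform and exponential candidates.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(11)
      sighting_years = np.sort(rng.uniform(1850, 1930, 40))   # synthetic record

      for name, dist in [("uniform", stats.uniform), ("exponential", stats.expon)]:
          (_, _), (_, _, r) = stats.probplot(sighting_years, dist=dist)
          print(f"PPCC against {name}: {r:.4f}")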

  3. The perturbed Sparre Andersen model with a threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Gao, Heli; Yin, Chuancun

    2008-10-01

    In this paper, we consider a Sparre Andersen model perturbed by diffusion with generalized Erlang(n)-distributed inter-claim times and a threshold dividend strategy. Integro-differential equations with certain boundary conditions for the moment-generating function and the mth moment of the present value of all dividends until ruin are derived. We also derive integro-differential equations with boundary conditions for the Gerber-Shiu functions. The special case where the inter-claim times are Erlang(2) distributed and the claim size distribution is exponential is considered in some detail.
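    For intuition only, the sketch below crudely estimates a finite-horizon ruin probability for a diffusion-perturbed surplus process with Erlang(2) inter-claim times and exponential claims by Monte Carlo; all parameter values, the time step, and the horizon are illustrative assumptions, and the paper itself works with integro-differential equations rather than simulation.

      # Hedged sketch: Monte Carlo ruin probability for a diffusion-perturbed surplus process.
      import numpy as np

      rng = np.random.default_rng(1)

      def ruined(u=5.0, c=0.7, lam=1.0, claim_mean=1.0, sigma=0.5, horizon=50.0, dt=0.01):
          surplus, t = u, 0.0
          next_claim = rng.gamma(2, 1.0 / lam)                  # Erlang(2) inter-claim time
          while t < horizon:
              surplus += c * dt + sigma * np.sqrt(dt) * rng.standard_normal()
              t += dt
              if t >= next_claim:
                  surplus -= rng.exponential(claim_mean)        # exponential claim size
                  next_claim += rng.gamma(2, 1.0 / lam)
              if surplus < 0.0:
                  return True
          return False

      n = 300
      print("estimated ruin probability:", sum(ruined() for _ in range(n)) / n)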

  4. Measuring helium bubble diameter distributions in tungsten with grazing incidence small angle x-ray scattering (GISAXS)

    NASA Astrophysics Data System (ADS)

    Thompson, M.; Kluth, P.; Doerner, R. P.; Kirby, N.; Riley, D.; Corr, C. S.

    2016-02-01

    Grazing incidence small angle x-ray scattering was performed on tungsten samples exposed to helium plasma in the MAGPIE and Pisces-A linear plasma devices to measure the size distributions of resulting helium nano-bubbles. Nano-bubbles were fitted assuming spheroidal particles and an exponential diameter distribution. These particles had mean diameters between 0.36 and 0.62 nm. Pisces-A exposed samples showed more complex patterns, which may suggest the formation of faceted nano-bubbles or nano-scale surface structures.

  5. Time-Frequency Signal Representations Using Interpolations in Joint-Variable Domains

    DTIC Science & Technology

    2016-06-14

    distribution kernels," IEEE Trans. Signal Process., vol. 42, no. 5, pp. 1156–1165, May 1994. [25] G. S. Cunningham and W. J. Williams, "Kernel... interpolated data. For comparison, we include sparse reconstruction and WVD and Choi-Williams distribution (CWD) [23], which are directly applied to... Prentice-Hall, 1995. [23] H. I. Choi and W. J. Williams, "Improved time-frequency representation of multicomponent signals using exponential kernels

  6. A Random Walk Picture of Basketball

    NASA Astrophysics Data System (ADS)

    Gabel, Alan; Redner, Sidney

    2012-02-01

    We analyze NBA basketball play-by-play data and find that scoring is well described by a weakly biased, anti-persistent, continuous-time random walk. The time between successive scoring events follows an exponential distribution, with little memory between events. We account for a wide variety of statistical properties of scoring, such as the distribution of the score difference between opponents and the fraction of game time that one team is in the lead.
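    The sketch below simulates the simplest version of such a picture, with scoring events separated by exponentially distributed gaps and each basket moving the score difference up or down; the mean gap, point value, and unbiased coin are illustrative assumptions, and the weak bias and anti-persistence reported in the paper are not modeled.

      # Hedged sketch: continuous-time random walk for the score difference in one game.
      import numpy as np

      rng = np.random.default_rng(2)
      game_length = 48.0 * 60.0        # regulation game, seconds
      mean_gap = 30.0                  # assumed mean time between scoring events, seconds

      def simulate_game():
          t, diff = 0.0, 0
          while True:
              t += rng.exponential(mean_gap)       # exponential inter-scoring times
              if t > game_length:
                  return diff
              diff += 2 if rng.random() < 0.5 else -2

      diffs = [simulate_game() for _ in range(10000)]
      print("mean |final score difference|:", round(float(np.mean(np.abs(diffs))), 1))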

  7. Size-Frequency Distributions of Rocks on Mars and Earth Analog Sites: Implications for Future Landed Missions

    NASA Technical Reports Server (NTRS)

    Golombeck, M.; Rapp, D.

    1996-01-01

    The size-frequency distributions of rocks at the Viking landing sites and at a variety of rocky locations on Earth, formed by a number of geologic processes, all have the general shape of simple exponential curves. These curves have been combined with remote sensing data and models of rock abundance to predict the frequency of boulders potentially hazardous to future Mars landers and rovers.
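    The sketch below evaluates a generic exponential rock-coverage model of the form F(D) = k exp(-q D), where F(D) is the cumulative fraction of area covered by rocks of diameter D or larger; the parameter values are illustrative placeholders, not the coefficients fitted in the paper.

      # Hedged sketch: cumulative area covered by rocks of diameter >= D for an exponential model.
      import numpy as np

      def cumulative_area_fraction(diameter_m, k=0.1, q=2.0):
          # k: total rock coverage fraction; q: decay rate per meter (illustrative values)
          return k * np.exp(-q * diameter_m)

      for d in (0.1, 0.5, 1.0, 1.5):
          print(f"D >= {d:.1f} m: covered area fraction ~ {cumulative_area_fraction(d):.4f}")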

  8. Water resources in the next millennium

    NASA Astrophysics Data System (ADS)

    Wood, Warren

    As pressures from an exponentially increasing population and economic expectations rise against a finite water resource, how do we address management? This was the main focus of the Dubai International Conference on Water Resources and Integrated Management in the Third Millennium in Dubai, United Arab Emirates, 2-6 February 2002. The invited forum attracted an eclectic mix of international thinkers from five continents. Presentations and discussions on hydrology, policy/property rights, and management strategies focused mainly on problems of water supply, irrigation, and/or ecosystems.

  9. A connection between mix and adiabat in ICF capsules

    NASA Astrophysics Data System (ADS)

    Cheng, Baolian; Kwan, Thomas; Wang, Yi-Ming; Yi, Sunghuan (Austin); Batha, Steven

    2016-10-01

    We study the relationship between instability-induced mix, preheat, and the adiabat of the deuterium-tritium (DT) fuel in fusion capsule experiments. Our studies show that hydrodynamic instability not only directly affects the implosion, hot-spot shape, and mix, but also affects the thermodynamics of the capsule, such as the adiabat of the DT fuel, which in turn affects the energy partition between the pusher shell (cold DT) and the hot spot. It was found that the adiabat of the DT fuel is sensitive to the amount of mix caused by Richtmyer-Meshkov (RM) and Rayleigh-Taylor (RT) instabilities at the material interfaces because of its exponential dependence on the fuel entropy. An upper limit on the amount of mix allowed while maintaining a low adiabat of the DT fuel is derived. Additionally, we demonstrated that the use of a high adiabat for the DT fuel in theoretical analysis, with the aid of 1D code simulations, could explain some aspects of the 3D effects and mix in the capsule experiments. Furthermore, from the observed neutron images and our physics model, we could infer the adiabat of the DT fuel in the capsule and determine the possible amount of mix in the hot spot (LA-UR-16-24880). This work was conducted under the auspices of the U.S. Department of Energy by the Los Alamos National Laboratory under Contract No. W-7405-ENG-36.

  10. Similarity solutions for unsteady flow behind an exponential shock in a self-gravitating non-ideal gas with azimuthal magnetic field

    NASA Astrophysics Data System (ADS)

    Nath, G.; Pathak, R. P.; Dutta, Mrityunjoy

    2018-01-01

    Similarity solutions for the flow of a non-ideal gas behind a strong exponential shock driven out by a piston (cylindrical or spherical) moving with time according to an exponential law are obtained. Solutions are obtained in both cases, when the flow between the shock and the piston is isothermal and when it is adiabatic. Similarity solutions exist only when the surrounding medium is of constant density. The effects of variation of the ambient magnetic field, the non-idealness of the gas, the adiabatic exponent, and the gravitational parameter are worked out in detail. It is shown that an increase in the non-idealness of the gas or in the adiabatic exponent, or the presence of a magnetic field, has a decaying effect on the shock wave. Consideration of isothermal flow and of the self-gravitational field increases the shock strength. Also, the consideration of isothermal flow or the presence of a magnetic field removes the singularity in the density distribution which arises in the case of adiabatic flow. The results of our study may be used to interpret measurements carried out by spacecraft in the solar wind and in the neighborhood of the Earth's magnetosphere.

  11. Mathematical Modeling of Extinction of Inhomogeneous Populations

    PubMed Central

    Karev, G.P.; Kareva, I.

    2016-01-01

    Mathematical models of population extinction have a variety of applications in such areas as ecology, paleontology and conservation biology. Here we propose and investigate two types of sub-exponential models of population extinction. Unlike the more traditional exponential models, the life duration of sub-exponential models is finite. In the first model, the population is assumed to be composed of clones that are independent from each other. In the second model, we assume that the size of the population as a whole decreases according to the sub-exponential equation. We then investigate the "unobserved heterogeneity", i.e. the underlying inhomogeneous population model, and calculate the distribution of frequencies of clones for both models. We show that the dynamics of frequencies in the first model is governed by the principle of minimum of Tsallis information loss. In the second model, the notion of "internal population time" is proposed; with respect to the internal time, the dynamics of frequencies is governed by the principle of minimum of Shannon information loss. The results of this analysis show that the principle of minimum of information loss is the underlying law for the evolution of a broad class of models of population extinction. Finally, we propose a possible application of this modeling framework to mechanisms underlying time perception. PMID:27090117
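    The contrast between exponential and sub-exponential decay can be illustrated numerically, as in the sketch below; the power-law form dN/dt = -k*N^p with 0 < p < 1 is only one common choice of sub-exponential equation and is assumed here for illustration, since the paper's exact equations are not reproduced in this record.

      # Hedged sketch: exponential decay never reaches zero, while a sub-exponential
      # decay dN/dt = -k * N**p with 0 < p < 1 goes extinct at a finite time.
      def integrate(p, n0=100.0, k=1.0, dt=1e-3, t_max=50.0):
          n, t = n0, 0.0
          while n > 0.0 and t < t_max:
              n = max(n - k * (n ** p) * dt, 0.0)     # forward Euler step
              t += dt
          return t, n

      t_exp, n_exp = integrate(p=1.0)                 # exponential case
      t_sub, n_sub = integrate(p=0.5)                 # sub-exponential case
      print(f"exponential model at t={t_exp:.1f}: N={n_exp:.3g} (still positive)")
      print(f"sub-exponential model reaches N=0 at t~{t_sub:.2f}")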

  12. Non-Gaussian analysis of diffusion weighted imaging in head and neck at 3T: a pilot study in patients with nasopharyngeal carcinoma.

    PubMed

    Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S P; Bhatia, Kunwar S; Wang, Yi-Xiang J; Ahuja, Anil T; King, Ann D

    2014-01-01

    We technically investigate non-Gaussian diffusion in head and neck diffusion weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and the statistical model, in patients with nasopharyngeal carcinoma (NPC). After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm(2). DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models on the primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Diffusion in NPC exhibited non-Gaussian behavior at the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC both in magnitude and in histogram distribution. Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed by using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential to be used as a complementary tool for NPC characterization.
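    The stretched-exponential model referred to above is commonly written as S(b) = S0*exp(-(b*DDC)^alpha); the sketch below fits that form to a synthetic signal over the same 0-1500 s/mm2 b-value range, with made-up signal values and starting guesses standing in for real patient data.

      # Hedged sketch: fit the stretched-exponential DWI model to a synthetic signal decay.
      import numpy as np
      from scipy.optimize import curve_fit

      b = np.array([0, 100, 300, 500, 800, 1000, 1200, 1500], dtype=float)   # s/mm^2

      def sem(b, s0, ddc, alpha):
          return s0 * np.exp(-(b * ddc) ** alpha)

      rng = np.random.default_rng(3)
      signal = sem(b, 1.0, 1.1e-3, 0.8) + rng.normal(0.0, 0.005, b.size)     # synthetic data
      popt, _ = curve_fit(sem, b, signal, p0=[1.0, 1e-3, 0.9],
                          bounds=([0.0, 1e-5, 0.1], [2.0, 1e-2, 1.0]))
      print("S0 = %.3f, DDC = %.2e mm^2/s, alpha = %.2f" % tuple(popt))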

  13. Some case studies of skewed (and other ab-normal) data distributions arising in low-level environmental research.

    PubMed

    Currie, L A

    2001-07-01

    Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered to be theoretically, or fundamentally, skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events from a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Muller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test, for very interesting reasons linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of ab-normal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom.
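    The Poisson-process checks mentioned for the counter background can be illustrated with standard goodness-of-fit tests, as in the sketch below; the event times are simulated rather than measured, and because the exponential scale is estimated from the same data, the resulting p-value is only approximate.

      # Hedged sketch: test exponential inter-arrival times and uniform arrival times.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      arrival_times = np.sort(rng.uniform(0.0, 3600.0, size=500))   # stand-in event times, s
      gaps = np.diff(arrival_times)

      ks_exp = stats.kstest(gaps, "expon", args=(0.0, gaps.mean())) # scale estimated from data
      ks_uni = stats.kstest(arrival_times, "uniform", args=(0.0, 3600.0))
      print("exponential inter-arrival KS p-value:", round(ks_exp.pvalue, 3))
      print("uniform arrival-time KS p-value:", round(ks_uni.pvalue, 3))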

  14. Tailpulse signal generator

    DOEpatents

    Baker, John [Walnut Creek, CA; Archer, Daniel E [Knoxville, TN; Luke, Stanley John [Pleasanton, CA; Decman, Daniel J [Livermore, CA; White, Gregory K [Livermore, CA

    2009-06-23

    A tailpulse signal generating/simulating apparatus, system, and method designed to produce electronic pulses which simulate tailpulses produced by a gamma radiation detector, including the pileup effect caused by the characteristic exponential decay of the detector pulses and the random Poisson-distributed pulse timing for radioactive materials. A digital signal processor (DSP) is programmed and configured to produce digital values corresponding to pseudo-randomly selected pulse amplitudes and pseudo-randomly selected Poisson timing intervals of the tailpulses. Pulse amplitude values are exponentially decayed while the digital value is output to a digital-to-analog converter (DAC), and the amplitudes of new pulses are added to decaying pulses to simulate the pileup effect for enhanced realism in the simulation.
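    A software analogue of that scheme is sketched below: pulse start times follow exponential (Poisson) gaps, each pulse decays exponentially, and overlapping tails are summed to produce pile-up; the event rate, decay constant, amplitude range, and sampling step are illustrative assumptions rather than values from the patent.

      # Hedged sketch: simulated tail pulses with Poisson timing, exponential decay, and pile-up.
      import numpy as np

      rng = np.random.default_rng(5)
      dt, duration, rate, tau = 1e-6, 0.01, 2000.0, 50e-6   # step (s), length (s), events/s, decay (s)

      t = np.arange(0.0, duration, dt)
      trace = np.zeros_like(t)
      event_time = rng.exponential(1.0 / rate)
      while event_time < duration:
          amplitude = rng.uniform(0.1, 1.0)                 # pseudo-random pulse height
          start = int(event_time / dt)
          trace[start:] += amplitude * np.exp(-(t[start:] - event_time) / tau)   # tails pile up
          event_time += rng.exponential(1.0 / rate)

      print("samples:", trace.size, "peak amplitude with pile-up:", round(float(trace.max()), 3))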

  15. Exponential Communication Complexity Advantage from Quantum Superposition of the Direction of Communication

    NASA Astrophysics Data System (ADS)

    Guérin, Philippe Allard; Feix, Adrien; Araújo, Mateus; Brukner, Časlav

    2016-09-01

    In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.

  16. Deformed exponentials and portfolio selection

    NASA Astrophysics Data System (ADS)

    Rodrigues, Ana Flávia P.; Guerreiro, Igor M.; Cavalcante, Charles Casimiro

    In this paper, we present a method for portfolio selection based on deformed exponentials, generalizing methods that assume Gaussianity of the portfolio returns, such as the Markowitz model. The proposed method generalizes the idea of optimizing mean-variance and mean-divergence models and allows more accurate behavior in situations where heavy-tailed distributions are needed to describe the returns at a given time instant, such as those observed in economic crises. Numerical results show that the proposed method outperforms the Markowitz portfolio in cumulative returns, with a good convergence rate for the asset weights, which are found by means of a natural gradient algorithm.
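    One widely used deformed exponential is the Tsallis q-exponential, sketched below; whether this is the precise deformation adopted in the paper is not stated in this record, so the function and the sample values of q are illustrative only.

      # Hedged sketch: Tsallis-type q-exponential, exp_q(x) = [1 + (1-q)x]_+^(1/(1-q)).
      import numpy as np

      def q_exp(x, q):
          if np.isclose(q, 1.0):
              return np.exp(x)                         # ordinary exponential recovered as q -> 1
          base = 1.0 + (1.0 - q) * x
          return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

      for q in (1.0, 1.3, 1.5):
          print(f"q = {q}: q_exp(-3) = {q_exp(-3.0, q):.4f}")   # heavier tail as q rises above 1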

  17. Effect of particle size on mixing degree in dispensation.

    PubMed

    Nakamura, Hitoshi; Yanagihara, Yoshitsugu; Sekiguchi, Hiroko; Ohtani, Michiteru; Kariya, Satoru; Uchino, Katsuyoshi; Suzuki, Hiroshi; Iga, Tatsuji

    2004-03-01

    By using lactose colored with erythrocin, we examined the effect of particle size on mixing degree during the preparation of triturations with a mortar and pestle. We used powders with different distributions of particle sizes, i.e., powder that passed through a 32-mesh sieve but was trapped on a 42-mesh sieve (32/42-mesh powder), powder that passed through a 42-mesh sieve but was trapped on a 60-mesh sieve (42/60-mesh powder), powder that passed through a 60-mesh sieve but was trapped on a 100-mesh sieve (60/100-mesh powder), and powder that passed through a 100-mesh sieve (>100-mesh powder). The mixing degree of colored powder and non-colored powder whose distribution of particle sizes was the same as that of the colored powder was excellent. The coefficient of variation (CV) value of the mixing degree was 6.08% after 40 rotations when colored powder was mixed with non-colored powder that both passed through a 100-mesh sieve. The CV value of the mixing degree was low in the case of mixing of colored and non-colored powders with different particle size distributions. After mixing, about 50% of the 42/60-mesh powder had become smaller particles, whereas the distribution of particle sizes was not influenced by the mixing of the 60/100-mesh powder. It was suggested that the mixing degree is affected by the distribution of particle sizes. It may be important to determine the mixing degrees for drugs with narrow therapeutic ranges.
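    The coefficient of variation used as the mixing-degree index can be computed as in the short sketch below; the sample contents are invented numbers, not measurements from the study.

      # Hedged sketch: CV of colored-powder content across samples drawn from a mixture.
      import numpy as np

      samples = np.array([9.6, 10.3, 9.9, 10.8, 9.4, 10.1])    # e.g. % colored lactose per sample
      cv_percent = 100.0 * samples.std(ddof=1) / samples.mean()
      print(f"mixing-degree CV = {cv_percent:.2f}%")            # lower CV indicates a more uniform mix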

  18. Mixing at double-Tee junctions with unequal pipe sizes in water distribution systems

    EPA Science Inventory

    Pipe flow mixing with various solute concentrations and flow rates at pipe junctions is investigated. The degree of mixing affects the spread of contaminants in a water distribution system. Many studies have been conducted on the mixing at the cross junctions. Yet a few have focu...

  19. Long time stability of small-amplitude Breathers in a mixed FPU-KG model

    NASA Astrophysics Data System (ADS)

    Paleari, Simone; Penati, Tiziano

    2016-12-01

    In the limit of small couplings in the nearest neighbor interaction, and small total energy, we apply the resonant normal form result of a previous paper of ours to a finite but arbitrarily large mixed Fermi-Pasta-Ulam Klein-Gordon chain, i.e., with both linear and nonlinear terms in both the on-site and interaction potential, with periodic boundary conditions. An existence and orbital stability result for Breathers of such a normal form, which turns out to be a generalized discrete nonlinear Schrödinger model with exponentially decaying all neighbor interactions, is first proved. Exploiting such a result as an intermediate step, a long time stability theorem for the true Breathers of the KG and FPU-KG models, in the anti-continuous limit, is proven.

  20. TIME SHARING WITH AN EXPLICIT PRIORITY QUEUING DISCIPLINE.

    DTIC Science & Technology

    exponentially distributed service times and an ordered priority queue. Each new arrival buys a position in this queue by offering a non-negative bribe to the... parameters is investigated through numerical examples. Finally, to maximize the expected revenue per unit time accruing from bribes, an optimization
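    A toy version of such a bribe-ordered queue is sketched below: Poisson arrivals, exponential service times, and service order determined by the offered bribe; the arrival rate, service rate, bribe distribution, and the single-server assumption are all illustrative, since the snippet above does not specify them.

      # Hedged sketch: single-server queue where waiting customers are served in order of bribe.
      import heapq, random

      random.seed(6)
      lam, mu = 0.8, 1.0                                    # arrival rate, service rate
      arrivals, t = [], 0.0
      for _ in range(20000):
          t += random.expovariate(lam)                      # Poisson arrivals
          arrivals.append((t, random.expovariate(1.0)))     # (arrival time, non-negative bribe)

      waiting, served, server_free, i = [], [], 0.0, 0
      while i < len(arrivals) or waiting:
          next_arrival = arrivals[i][0] if i < len(arrivals) else float("inf")
          if waiting and server_free <= next_arrival:
              neg_bribe, arr = heapq.heappop(waiting)       # highest bribe is served first
              start = max(server_free, arr)
              served.append((-neg_bribe, start - arr))      # record (bribe, waiting time)
              server_free = start + random.expovariate(mu)
          else:
              arr, bribe = arrivals[i]
              heapq.heappush(waiting, (-bribe, arr))
              i += 1

      high = [w for b, w in served if b > 1.0]
      low = [w for b, w in served if b <= 1.0]
      print("mean wait, bribe > 1:", round(sum(high) / len(high), 2))
      print("mean wait, bribe <= 1:", round(sum(low) / len(low), 2))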
