On the minimum of independent geometrically distributed random variables
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David
1994-01-01
The expectations E(X(sub 1)), E(Z(sub 1)), and E(Y(sub 1)) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this difference is accounted for by stochastic variability and how E(X(sub 1))/E(Y(sub 1)) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in their minima.
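The tie-count identity above can be checked with a small simulation. This is an illustrative sketch, not code from the paper; it assumes geometric variables supported on {1, 2, ...} with success probability p, and exponentials with the matching mean 1/p:

```python
import math
import random

rng = random.Random(42)

def geometric(p):
    # number of Bernoulli(p) trials up to and including the first success
    k = 1
    while rng.random() >= p:
        k += 1
    return k

def exponential(mean):
    # inverse transform: X = -mean * ln(1 - U)
    return -mean * math.log(1.0 - rng.random())

n, p, trials = 5, 0.3, 20000
sum_min_geom, sum_ties, sum_min_expo = 0, 0, 0.0
for _ in range(trials):
    xs = [geometric(p) for _ in range(n)]
    m = min(xs)
    sum_min_geom += m
    sum_ties += xs.count(m)  # number of variables tied at the minimum
    sum_min_expo += min(exponential(1.0 / p) for _ in range(n))

ratio = (sum_min_geom / trials) / (sum_min_expo / trials)
avg_ties = sum_ties / trials
# the abstract's identity: ratio and avg_ties should agree
# (analytically both equal n*p / (1 - (1-p)^n) ≈ 1.80 for these parameters)
print(ratio, avg_ties)
```

The agreement reflects that E[min] is 1/(1-(1-p)^n) for the geometrics and 1/(np) for the matched exponentials.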
Extended q-Gaussian and q-exponential distributions from gamma random variables
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2015-05-01
The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter, whose shape indices determine the complexity parameter q. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are equal. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions to a beta random variable. The extended distributions are applied to the statistical description of different complex dynamics, such as log-return signals in financial markets and the motion of point defects in a fluid flow.
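One familiar instance of such a two-gamma construction (a hedged sketch, not the paper's derivation) is the Student-t family, which coincides with q-Gaussians for q > 1 via the standard mapping q = (nu + 3)/(nu + 1): the variate is built from two independent gamma variables with common scale 2, whose shape indices set nu and hence q:

```python
import math
import random

rng = random.Random(7)

def q_gaussian(nu):
    # Student-t with nu degrees of freedom as a function of two independent
    # Gamma variables with the same scale parameter (chi-square variables):
    #   G1 ~ Gamma(1/2, 2) (= Z^2),  G2 ~ Gamma(nu/2, 2) (= chi^2_nu)
    #   T  = sign * sqrt(nu * G1 / G2)
    g1 = rng.gammavariate(0.5, 2.0)
    g2 = rng.gammavariate(nu / 2.0, 2.0)
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return sign * math.sqrt(nu * g1 / g2)

nu = 10.0
samples = [q_gaussian(nu) for _ in range(50000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# for nu = 10 the variance should be close to nu/(nu - 2) = 1.25
```

The two shape parameters (1/2 and nu/2) being unequal is what produces the heavy-tailed q > 1 member; equal shapes would fall back to the symmetric standard case described in the abstract.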
Exponential gain of randomness certified by quantum contextuality
NASA Astrophysics Data System (ADS)
Um, Mark; Zhang, Junhua; Wang, Ye; Wang, Pengfei; Kim, Kihwan
2017-04-01
We demonstrate a protocol for exponential gain of randomness certified by quantum contextuality in a trapped-ion system. Genuine randomness can be produced by quantum principles and certified by quantum inequalities. Recently, randomness expansion protocols based on Bell-test inequalities and the Kochen-Specker (KS) theorem have been demonstrated. These schemes have been theoretically refined to exponentially expand randomness and to amplify randomness from a weak initial random seed. Here, we report experimental evidence of such exponential expansion of randomness. In the experiment, we use three states of a 138Ba+ ion: a ground state and two quadrupole states. The 138Ba+ ion system is free of the detection loophole, and we apply a method to rule out certain hidden-variable models that obey a kind of extended noncontextuality.
A Random Variable Transformation Process.
ERIC Educational Resources Information Center
Scheuermann, Larry
1989-01-01
Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
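A rough Python analogue of such a variate generator (illustrative, not the RANVAR listing) using the inverse transform method for the uniform, exponential, and triangular variates, plus Knuth's product method for the Poisson:

```python
import math
import random

rng = random.Random(1)
u = rng.random  # U(0,1) source

def uniform(a, b):
    return a + (b - a) * u()

def exponential(mean):
    # inverse transform: F^{-1}(x) = -mean * ln(1 - x)
    return -mean * math.log(1.0 - u())

def triangular(a, c, b):
    # c is the mode; the inverse CDF splits at F(c) = (c - a)/(b - a)
    x, f = u(), (c - a) / (b - a)
    if x < f:
        return a + math.sqrt(x * (b - a) * (c - a))
    return b - math.sqrt((1.0 - x) * (b - a) * (b - c))

def poisson(lam):
    # Knuth: multiply uniforms until the running product drops below exp(-lam)
    limit, k, prod = math.exp(-lam), 0, u()
    while prod > limit:
        k += 1
        prod *= u()
    return k
```

The normal, binomial, and Pascal variates mentioned in the abstract can be built on the same U(0,1) source in the usual ways.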
Exponential order statistic models of software reliability growth
NASA Technical Reports Server (NTRS)
Miller, D. R.
1985-01-01
Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples as well. Various characterizations, properties, and examples of this class of models are developed and presented.
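As an illustrative sketch (assuming the Jelinski-Moranda special case, in which failure epochs are order statistics of N i.i.d. exponentials with common rate phi):

```python
import random

rng = random.Random(3)

def jm_failure_epochs(N, phi):
    # Jelinski-Moranda: the k-th failure epoch is the k-th order statistic
    # of N independent Exp(phi) lifetimes, so the k-th inter-failure gap
    # is exponential with rate (N - k + 1) * phi
    return sorted(rng.expovariate(phi) for _ in range(N))

N, phi, trials = 20, 0.5, 20000
first = sum(jm_failure_epochs(N, phi)[0] for _ in range(trials)) / trials
# theory: E[first failure] = 1 / (N * phi) = 0.1 for these parameters
```

Non-identical rates in place of the common phi give the more general class the abstract describes.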
1989-08-01
Random variables for the conditional exponential distribution are generated using the inverse transform method: generate U ~ U(0,1), then invert the conditional distribution function. Random variables from the conditional Weibull distribution are likewise generated using the inverse transform method, and normal variates are obtained using a standard normal transformation together with the inverse transform method. An appendix lists the distributions supported by the model.
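A cleaned-up sketch of such inverse-transform samplers (the parameter names are illustrative, not the report's notation): conditioning an exponential or Weibull lifetime on survival past tau amounts to inverting the conditional CDF.

```python
import math
import random

rng = random.Random(5)

def cond_exponential(lam, tau):
    # X ~ Exp(lam) conditioned on X > tau; by memorylessness the inverse
    # transform reduces to X = tau + Exp(lam)
    u = rng.random()
    return tau - math.log(1.0 - u) / lam

def cond_weibull(scale, shape, tau):
    # X ~ Weibull(scale, shape) conditioned on X > tau; inverting
    # F(x | X > tau) = 1 - exp((tau/scale)^shape - (x/scale)^shape)
    u = rng.random()
    return scale * ((tau / scale) ** shape - math.log(1.0 - u)) ** (1.0 / shape)
```

With shape = 1 the conditional Weibull reduces to the conditional exponential, a handy consistency check.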
Lohmann, W
1978-01-01
The shape of the survivorship curve can easily be interpreted on the condition that the probability of death is proportional to an exponentially rising function of age. Following Ries's summation approach for determining the age index, we investigated to what extent the survivorship curve may be approximated by a sum of exponentials. It turns out that, for plausible parameter values, the difference between the pure exponential function and a sum of exponentials lies within the random variation. Because the probability of death varies between diseases, the new formulation is the better one.
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling , or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
NASA Astrophysics Data System (ADS)
Zhang, Fode; Shi, Yimin; Wang, Ruibing
2017-02-01
In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Noting that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered a generalized Gumbel copula, is discussed to illustrate the dependence structure of the random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimators and the levels of association under different hybrid progressive censoring schemes (HPCSs).
Perturbed effects at radiation physics
NASA Astrophysics Data System (ADS)
Külahcı, Fatih; Şen, Zekâi
2013-09-01
Perturbation methodology is applied in order to assess the behavior of the linear attenuation coefficient, mass attenuation coefficient, and cross-section when random components enter the basic variables, such as the radiation amounts frequently used in radiation physics and chemistry. Additionally, the layer attenuation coefficient (LAC) and perturbed LAC (PLAC) are proposed for different contact materials. Perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the whole mathematical expression. The basic photon intensity variation expression, the inverse exponential power law (the Beer-Lambert law), is adopted for the exposition of the perturbation method. Perturbed results are presented not only in terms of the mean but also the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in the basic variables.
Making statistical inferences about software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1988-01-01
Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
Zhang, Renduo; Wood, A Lynn; Enfield, Carl G; Jeong, Seung-Woo
2003-01-01
Stochastic analysis was performed to assess the effect of soil spatial variability and heterogeneity on the recovery of denser-than-water nonaqueous phase liquids (DNAPL) during surfactant-enhanced remediation. UTCHEM, a three-dimensional, multicomponent, multiphase, compositional model, was used to simulate water flow and chemical transport processes in heterogeneous soils. Soil spatial variability and heterogeneity were accounted for by treating the soil permeability as a spatial random variable, and a geostatistical method was used to generate random distributions of the permeability. The randomly generated permeability fields were incorporated into UTCHEM to simulate DNAPL transport in heterogeneous media, and stochastic analysis was conducted on the simulated results. From the analysis, an exponential relationship between average DNAPL recovery and soil heterogeneity (defined as the standard deviation of the log of permeability) was established with a coefficient of determination (r2) of 0.991, indicating that DNAPL recovery decreased exponentially with increasing soil heterogeneity. Temporal and spatial distributions of relative saturations in the water phase, DNAPL, and microemulsion in heterogeneous soils were compared with those in homogeneous soils and related to soil heterogeneity. The cleanup time and the uncertainty in determining DNAPL distributions in heterogeneous soils were also quantified. This study provides useful information for designing strategies for the characterization and remediation of nonaqueous phase liquid-contaminated soils with spatial variability and heterogeneity.
Shuttle program: Ground tracking data program document shuttle OFT launch/landing
NASA Technical Reports Server (NTRS)
Lear, W. M.
1977-01-01
The equations for processing ground tracking data during a space shuttle ascent or entry, or any non-free-flight phase of a shuttle mission, are given. The resulting computer program processes data from up to three stations simultaneously: C-band station number 1, C-band station number 2, and an S-band station. The C-band data consist of range, azimuth, and elevation angle measurements. The S-band data consist of range, two angles, and integrated Doppler data in the form of cycle counts. A nineteen-element state vector is used in a Kalman filter to process the measurements. The acceleration components of the shuttle are taken to be independent exponentially-correlated random variables. Nine elements of the state vector are the measurement bias errors associated with range and two angles for each tracking station. The biases are all modeled as exponentially-correlated random variables with a typical time constant of 108 seconds. All time constants are taken to be the same for all nine state variables, which simplifies the logic in propagating the state error covariance matrix ahead in time.
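A minimal sketch of an exponentially-correlated (first-order Gauss-Markov) state variable as it might be propagated between filter updates; the parameter values here are illustrative, not the shuttle program's:

```python
import math
import random

rng = random.Random(11)

def gauss_markov(tau, sigma, dt, n):
    # exponentially-correlated noise: autocorrelation exp(-|dt|/tau).
    # Discrete propagation: x[k+1] = phi * x[k] + w[k], with
    # phi = exp(-dt/tau) and Var(w) = sigma^2 * (1 - phi^2),
    # which keeps the stationary variance at sigma^2
    phi = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - phi * phi)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + q * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

xs = gauss_markov(tau=5.0, sigma=1.0, dt=1.0, n=50000)
```

In a Kalman filter such states enter the transition matrix through the factor phi, and the covariance propagation uses the same Var(w) term.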
Two-time scale subordination in physical processes with long-term memory
NASA Astrophysics Data System (ADS)
Stanislavsky, Aleksander; Weron, Karina
2008-03-01
We describe dynamical processes in continuous media with a long-term memory. Our consideration is based on a stochastic subordination idea and concerns two physical examples in detail. First we study a temporal evolution of the species concentration in a trapping reaction in which a diffusing reactant is surrounded by a sea of randomly moving traps. The analysis uses the random-variable formalism of anomalous diffusive processes. We find that the empirical trapping-reaction law, according to which the reactant concentration decreases in time as a product of an exponential and a stretched exponential function, can be explained by a two-time scale subordination of random processes. Another example is connected with a state equation for continuous media with memory. If the pressure and the density of a medium are subordinated in two different random processes, then the ordinary state equation becomes fractional with two-time scales. This allows one to arrive at the Bagley-Torvik type of state equation.
System Lifetimes, The Memoryless Property, Euler's Constant, and Pi
ERIC Educational Resources Information Center
Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon
2013-01-01
A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…
1987-09-01
inverse transform method to obtain unit-mean exponential random variables, where V(j) is the jth random number in a stream of uniform random numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. We again use the inverse transform method to obtain the conditions for an interim event to occur and to induce the change in
Epidemics in networks: a master equation approach
NASA Astrophysics Data System (ADS)
Cotacallapa, M.; Hase, M. O.
2016-02-01
A problem closely related to epidemiology, where a subgraph of ‘infected’ links is defined inside a larger network, is investigated. This subgraph is generated from the underlying network by a random variable, which decides whether a link is able to propagate a disease/information. The relaxation timescale of this random variable is examined in both annealed and quenched limits, and the effectiveness of propagation of disease/information is analyzed. The dynamics of the model is governed by a master equation and two types of underlying network are considered: one is scale-free and the other has exponential degree distribution. We have shown that the relaxation timescale of the contagion variable has a major influence on the topology of the subgraph of infected links, which determines the efficiency of spreading of disease/information over the network.
Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas
Philibert, Aurore; Loyce, Chantal; Makowski, David
2012-01-01
Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty on this estimated value, by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable “applied N”, (ii) the function relating N2O emission to applied N (exponential or linear function), (iii) fixed or random background (i.e. in the absence of N application) N2O emission and (iv) fixed or random applied N effect. We calculated ranges of uncertainty on N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha−1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430
Constraining estimates of global soil respiration by quantifying sources of variability.
Jian, Jinshi; Steele, Meredith K; Thomas, R Quinn; Day, Susan D; Hodges, Steven C
2018-05-10
Quantifying global soil respiration (RSG) and its response to temperature change are critical for predicting the turnover of terrestrial carbon stocks and their feedbacks to climate change. Currently, estimates of RSG range from 68 to 98 Pg C year-1, causing considerable uncertainty in the global carbon budget. We argue the source of this variability lies in the upscaling assumptions regarding the model format, data timescales, and precipitation component. To quantify the variability and constrain RSG, we developed RSG models using Random Forest and exponential models, and used different timescales (daily, monthly, and annual) of soil respiration (RS) and climate data to predict RSG. From the resulting RSG estimates (range = 66.62-100.72 Pg), we calculated the variability associated with each assumption. Among model formats, using monthly RS data rather than annual data decreased RSG by 7.43-9.46 Pg; however, RSG calculated from daily RS data was only 1.83 Pg lower than the RSG from monthly data. Using mean annual precipitation and temperature data instead of monthly data caused +4.84 and -4.36 Pg C differences, respectively. If the timescale of RS data is held constant, RSG estimated by the first-order exponential model (93.2 Pg) was greater than the Random Forest (78.76 Pg) or second-order exponential (76.18 Pg) estimates. These results highlight the importance of variation at subannual timescales for upscaling to RSG. The results indicate RSG is lower than in recent papers and the current benchmark for land models (98 Pg C year-1), and thus may change the predicted rates of terrestrial carbon turnover and the carbon-to-climate feedback as global temperatures rise. © 2018 John Wiley & Sons Ltd.
Redshift data and statistical inference
NASA Technical Reports Server (NTRS)
Newman, William I.; Haynes, Martha P.; Terzian, Yervant
1994-01-01
Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals or bins than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independently of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimates of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.
Solution of the finite Milne problem in stochastic media with RVT Technique
NASA Astrophysics Data System (ADS)
Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.
2017-12-01
This paper presents the solution of the Milne problem in the steady state with an isotropic scattering phase function. The properties of the medium are considered stochastic, with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To obtain explicit forms for the radiant energy density, the linear extrapolation distance, the reflectivity, and the transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution is found to depend on the optical space variable and the thickness of the medium, which are considered random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity, and transmissivity are calculated. For illustration, numerical results with conclusions are provided.
Compiling probabilistic, bio-inspired circuits on a field programmable analog array
Marr, Bo; Hasler, Jennifer
2014-01-01
A field programmable analog array (FPAA) is presented as an energy- and computational-efficiency engine: a mixed-mode processor on which functions can be compiled at significantly lower energy cost using probabilistic computing circuits. More specifically, it is shown that the core computation of any dynamical system can be computed on the FPAA at significantly less energy per operation than in a digital implementation. A stochastic system that is dynamically controllable via voltage-controlled amplifier and comparator thresholds is implemented, which computes Bernoulli random variables. From Bernoulli variables it is shown that exponentially distributed random variables, and random variables of an arbitrary distribution, can be computed. The Gillespie algorithm is simulated to show the utility of this system by calculating the trajectory of a biological system computed stochastically with this probabilistic hardware, where over a 127X performance improvement over current software approaches is shown. The relevance of this approach extends to any dynamical system. The initial circuits and ideas for this work were generated at the 2008 Telluride Neuromorphic Workshop. PMID:24847199
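The Bernoulli-to-exponential step can be sketched in software (a hypothetical illustration of the principle, not the FPAA circuit): a geometric count of Bernoulli(p) trials, scaled by p, converges to a unit-mean exponential as p goes to 0.

```python
import random

rng = random.Random(13)

def bernoulli(p):
    # one Bernoulli(p) trial, the primitive the hardware provides
    return 1 if rng.random() < p else 0

def approx_exponential(p=0.005):
    # count trials until the first 1; the scaled geometric count k*p is
    # approximately Exp(1) for small p
    k = 1
    while not bernoulli(p):
        k += 1
    return k * p

samples = [approx_exponential() for _ in range(20000)]
```

Arbitrary distributions then follow by applying an inverse CDF to uniform variates built from Bernoulli bits, in the same spirit.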
Giovannetti, Vittorio; Lloyd, Seth; Maccone, Lorenzo
2008-04-25
A random access memory (RAM) uses n bits to randomly address N=2(n) distinct memory cells. A quantum random access memory (QRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(logN) switches need be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust QRAM algorithm, as it in general requires entanglement among exponentially less gates, and leads to an exponential decrease in the power needed for addressing. A quantum optical implementation is presented.
Improved Results for Route Planning in Stochastic Transportation Networks
NASA Technical Reports Server (NTRS)
Boyan, Justin; Mitzenmacher, Michael
2000-01-01
In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest-path techniques. In recent work, Datar and Ranade provide algorithms for the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial-time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent geometric discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.
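Part of what makes exponential arrivals tractable: the wait for the first of several independent exponential bus lines is itself exponential, with the summed rate. A toy check (the rates are made-up values, not from the paper):

```python
import random

rng = random.Random(17)

# independent bus lines with exponential inter-arrival times;
# min of Exp(r1), ..., Exp(rm) is Exp(r1 + ... + rm)
rates = [0.1, 0.25, 0.05]  # hypothetical buses per minute
trials = 30000
est = sum(min(rng.expovariate(r) for r in rates) for _ in range(trials)) / trials
theory = 1.0 / sum(rates)  # = 2.5 minutes for these rates
```

Memorylessness is what lets a route planner ignore how long it has already waited at a stop, which is exactly the structure the generalizations above must do without.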
δ-exceedance records and random adaptive walks
NASA Astrophysics Data System (ADS)
Park, Su-Chan; Krug, Joachim
2016-08-01
We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y(k) > Y(k-1) - δ(k-1), with a deterministic sequence δ(k) > 0 called the handicap. For constant δ(k) ≡ δ and exponentially distributed random variables, it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase, where the mean record value increases indefinitely, and a stationary phase, where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and for decreasing and increasing sequences δ(k), focusing in particular on the case when δ(k) matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first-order transition emerges when δ(k) is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ(k) corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
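A minimal simulation of the handicapped record process for exponential variables with constant δ(k) ≡ δ (an illustrative sketch; the phase structure itself is analyzed in the paper):

```python
import random

rng = random.Random(19)

def record_fraction(delta, n):
    # an entry y is a delta-exceedance record if y > (current record) - delta;
    # the record value is then updated to y (so it may decrease)
    records, current = 0, float("-inf")
    for _ in range(n):
        y = rng.expovariate(1.0)
        if y > current - delta:
            records += 1
            current = y
    return records / n

# delta = 0 recovers the classical record process, whose record fraction
# vanishes as ln(n)/n; larger handicaps admit a finite fraction of records
fracs = {d: record_fraction(d, 50000) for d in (0.0, 0.5, 2.0)}
```
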
2007-03-01
QPSK: Quadrature Phase-Shift Keying; RV: Random Variable; SHAC: Single-Hop-Observation Auto-Correlation; SINR: Signal-to-Interference... The fast Fourier transform (FFT) accumulation method and the strip spectral correlation algorithm subdivide the support region in the bi-frequency plane: the FFT accumulation method uses diamond shapes, while the strip spectral correlation algorithm subdivides the region into strips. Each strip covers a number of the FFT accumulation
A predictability study of Lorenz's 28-variable model as a dynamical system
NASA Technical Reports Server (NTRS)
Krishnamurthy, V.
1993-01-01
The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
ERIC Educational Resources Information Center
Huynh, Huynh
By noting that a Rasch or two parameter logistic (2PL) item belongs to the exponential family of random variables and that the probability density function (pdf) of the correct response (X=1) and the incorrect response (X=0) are symmetric with respect to the vertical line at the item location, it is shown that the conjugate prior for ability is…
Porto, Markus; Roman, H Eduardo
2002-04-01
We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, as σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially, as P(y) ≈ exp(-α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. We find stretched exponential decay for 1 < q < 2 and stretched Gaussian behavior for 0 < q < 1. As an application, we consider the case q = 1 as our starting scheme for modeling the PDF of daily (logarithmic) variations in the Dow Jones stock market index. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretched exponent β = 2/3, in much better agreement with the empirical data.
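A sketch of the linear-variance ARCH process (parameter values illustrative): conditional heteroskedasticity guarantees excess kurtosis relative to a Gaussian, and the abstract's result gives exponential tails P(y) ≈ exp(-α|y|) with α = 2/b.

```python
import math
import random

rng = random.Random(23)

def arch_linear(a, b, n):
    # ARCH with sigma^2(y) = a + b*|y|: y[t] = sigma(y[t-1]) * eps[t]
    y, out = 0.0, []
    for _ in range(n):
        y = math.sqrt(a + b * abs(y)) * rng.gauss(0.0, 1.0)
        out.append(y)
    return out

ys = arch_linear(a=1.0, b=2.0, n=100000)
m = sum(ys) / len(ys)
m2 = sum((x - m) ** 2 for x in ys) / len(ys)
m4 = sum((x - m) ** 4 for x in ys) / len(ys)
kurtosis = m4 / m2 ** 2  # > 3 signals tails heavier than Gaussian
```

The excess kurtosis follows from Jensen's inequality whenever Var(σ²) > 0, independently of the exponential-tail result.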
Quantum Adiabatic Optimization and Combinatorial Landscapes
NASA Technical Reports Server (NTRS)
Smelyanskiy, V. N.; Knysh, S.; Morris, R. D.
2003-01-01
In this paper we analyze the performance of the Quantum Adiabatic Evolution (QAE) algorithm on a variant of the Satisfiability problem for an ensemble of random graphs parametrized by the ratio of clauses to variables, gamma = M/N. We introduce a set of macroscopic parameters (landscapes) and put forward an ansatz of universality for random bit flips. We then formulate the problem of finding the smallest eigenvalue and the excitation gap as a statistical mechanics problem. We use the so-called annealing approximation with a refinement that a finite set of macroscopic variables (versus only energy) is used, and are able to show the existence of a dynamic threshold gamma = gamma_d, beyond which QAE should take an exponentially long time to find a solution. We compare the results for extended and simplified sets of landscapes and provide numerical evidence in support of our universality ansatz.
Statistical theory of nucleation in the presence of uncharacterized impurities
NASA Astrophysics Data System (ADS)
Sear, Richard P.
2004-08-01
First order phase transitions proceed via nucleation. The rate of nucleation varies exponentially with the free-energy barrier to nucleation, and so is highly sensitive to variations in this barrier. In practice, very few systems are absolutely pure; there are typically some impurities present, which are rather poorly characterized. These interact with the nucleus, causing the barrier to vary, and so must be taken into account. Here the impurity-nucleus interactions are modelled by random variables. The rate then has the same form as the partition function of Derrida’s random energy model, and as in this model there is a regime in which the behavior is non-self-averaging. Non-self-averaging nucleation is nucleation with a rate that varies significantly from one realization of the random variables to another. In experiment this corresponds to variation in the nucleation rate from one sample to another. General analytic expressions are obtained for the crossover from a self-averaging to a non-self-averaging rate of nucleation.
Multivariate Analysis and Its Applications
1989-02-14
… defined in situations where measurements are taken on natural clusters of individuals, like brothers in a family. A number of problems arise in the study of … intraclass correlations. How do we estimate it when observations are available on clusters of different sizes? How do we test the hypothesis that … the random variable y(X) = G₁X + G₂X² + … + G_mX^m follows an exponential distribution with mean unity. Such a class of life distributions has a …
Enhancing Multimedia Imbalanced Concept Detection Using VIMP in Random Forests.
Sadiq, Saad; Yan, Yilin; Shyu, Mei-Ling; Chen, Shu-Ching; Ishwaran, Hemant
2016-07-01
Recent developments in social media and cloud storage have led to an exponential growth in the amount of multimedia data, which increases the complexity of managing, storing, indexing, and retrieving information from such big data. Many current content-based concept detection approaches fall short of successfully bridging the semantic gap. To solve this problem, a multi-stage random forest framework is proposed to generate predictor variables based on multivariate regressions using variable importance (VIMP). By fine tuning the forests and significantly reducing the predictor variables, the concept detection scores are evaluated when the concept of interest is rare and imbalanced, i.e., having little collaboration with other high-level concepts. In classical multivariate statistics, estimating the value of one coordinate from the other coordinates standardizes the covariates, so the estimate depends upon the variance of the correlations instead of the mean; thus, conditional dependence on the data being normally distributed is eliminated. Experimental results demonstrate that the proposed framework outperforms the comparison approaches in terms of Mean Average Precision (MAP) values.
Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs
NASA Astrophysics Data System (ADS)
Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.
2018-04-01
Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show central limit theorems for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field, and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, and component counts of random cubical complexes, while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.
Variable step random walks, self-similar distributions, and pricing of options (Invited Paper)
NASA Astrophysics Data System (ADS)
Gunaratne, Gemunu H.; McCauley, Joseph L.
2005-05-01
A new theory for pricing of options is presented. It is based on the assumption that successive movements depend on the value of the return. The solution to the Fokker-Planck equation is shown to be an asymmetric exponential distribution, similar to those observed in intra-day currency markets. The "volatility smile", used by traders to correct the Black-Scholes pricing is shown to be a heuristic mechanism to implement options pricing formulae derived from our theory.
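An asymmetric exponential return density of the kind described above can be sampled with a short sketch; the two decay rates and the sample size are made-up illustrative values, not calibrated to any currency market:

```python
import random

def sample_asymmetric_exponential(gamma_pos, gamma_neg, n, seed=0):
    """Draw returns from a two-sided (asymmetric) exponential density:
    f(x) proportional to exp(-gamma_pos*x) for x > 0 and to
    exp(gamma_neg*x) for x < 0."""
    rng = random.Random(seed)
    # normalising the two half-densities gives the probability mass per side
    p_pos = (1.0 / gamma_pos) / (1.0 / gamma_pos + 1.0 / gamma_neg)
    out = []
    for _ in range(n):
        if rng.random() < p_pos:
            out.append(rng.expovariate(gamma_pos))
        else:
            out.append(-rng.expovariate(gamma_neg))
    return out

returns = sample_asymmetric_exponential(gamma_pos=12.0, gamma_neg=10.0, n=50000)
```

Unequal decay rates on the two sides produce the skew that, in the Black-Scholes framework, traders compensate for heuristically via the volatility smile.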
Estimation of gloss from rough surface parameters
NASA Astrophysics Data System (ADS)
Simonsen, Ingve; Larsen, Åge G.; Andreassen, Erik; Ommundsen, Espen; Nord-Varhaug, Katrin
2005-12-01
Gloss is a quantity used in the optical industry to quantify and categorize materials according to how well they scatter light specularly. With the aid of phase perturbation theory, we derive an approximate expression for this quantity for a one-dimensional randomly rough surface. It is demonstrated that gloss depends in an exponential way on two dimensionless quantities that are associated with the surface randomness: the root-mean-square roughness times the perpendicular momentum transfer for the specular direction, and a correlation function dependent factor times a lateral momentum variable associated with the collection angle. Rigorous Monte Carlo simulations are used to assess the quality of this approximation, and good agreement is observed over large regions of parameter space.
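The first of the two dimensionless quantities can be sketched numerically. The Gaussian form exp[-(σq⊥)²], the function name, and the parameter values below are illustrative assumptions; the paper's full gloss estimate also carries the correlation-function-dependent collection-angle factor:

```python
import math

def specular_factor(wavelength, rms_roughness, incidence_deg):
    """Exponential factor by which rms roughness sigma suppresses the
    specular component: exp(-(sigma*q_perp)**2), with perpendicular
    momentum transfer q_perp = 2*(2*pi/wavelength)*cos(theta).
    Wavelength and roughness must share the same units."""
    k = 2.0 * math.pi / wavelength
    q_perp = 2.0 * k * math.cos(math.radians(incidence_deg))
    return math.exp(-(rms_roughness * q_perp) ** 2)

# made-up values: a smooth and a slightly rough surface at 20 degrees
smooth = specular_factor(0.633, 0.0, 20.0)
rough = specular_factor(0.633, 0.05, 20.0)
```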
Decay of random correlation functions for unimodal maps
NASA Astrophysics Data System (ADS)
Baladi, Viviane; Benedicks, Michael; Maume-Deschamps, Véronique
2000-10-01
Since the pioneering results of Jakobson and subsequent work by Benedicks-Carleson and others, it is known that quadratic maps f_a(χ) = a - χ² admit a unique absolutely continuous invariant measure for a positive measure set of parameters a. For topologically mixing f_a, Young and Keller-Nowicki independently proved exponential decay of correlation functions for this a.c.i.m. and smooth observables. We consider random compositions of small perturbations f + ω_t, with f = f_a or another unimodal map satisfying certain nonuniform hyperbolicity axioms, and ω_t chosen independently and identically in [-ɛ, ɛ]. Baladi-Viana showed exponential mixing of the associated Markov chain, i.e., averaging over all random itineraries. We obtain stretched exponential bounds for the random correlation functions of Lipschitz observables for the sample measure μ_ω of almost every itinerary.
Long period pseudo random number sequence generator
NASA Technical Reports Server (NTRS)
Wang, Charles C. (Inventor)
1989-01-01
A circuit for generating a sequence of pseudorandom numbers (A_K). There is an exponentiator in GF(2^m), for the normal basis representation of elements in a finite field GF(2^m) each represented by m binary digits, having two inputs and an output from which the sequence (A_K) of pseudorandom numbers is taken. One of the two inputs is connected to receive the outputs (E_K) of a maximal length shift register of n stages. There is a switch having a pair of inputs and an output. The switch output is connected to the other of the two inputs of the exponentiator. One of the switch inputs is connected for initially receiving a primitive element (A_0) in GF(2^m). Finally, there is a delay circuit having an input and an output. The delay circuit output is connected to the other of the switch inputs, and the delay circuit input is connected to the output of the exponentiator. Whereby, after the exponentiator initially receives the primitive element (A_0) in GF(2^m) through the switch, the switch can be switched to cause the exponentiator to receive as its input its own delayed output A_(K-1), thereby generating (A_K) continuously at the output of the exponentiator. The exponentiator in GF(2^m) is novel and comprises a cyclic-shift circuit, a Massey-Omura multiplier, and a control logic circuit, all operably connected together to compute the partial terms U_i = A^(2^i) (for n_i = 1) or 1 (for n_i = 0).
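The control flow the exponentiator implements is ordinary square-and-multiply over the exponent bits n_i. The sketch below uses plain modular integers rather than a GF(2^m) normal basis (where squaring would be the cyclic shift and each product would come from the Massey-Omura multiplier), so it only illustrates the structure, not the patented circuit:

```python
def exponentiate(A, K, modulus):
    """Square-and-multiply exponentiation A**K (mod modulus), built from
    the partial terms U_i = A**(2**i), included when bit n_i of K is 1
    and replaced by 1 when n_i is 0."""
    result = 1
    square = A % modulus          # holds U_i = A**(2**i) at step i
    while K:
        if K & 1:                 # n_i = 1: multiply the term in
            result = (result * square) % modulus
        square = (square * square) % modulus
        K >>= 1
    return result
```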
Analysis of backward error recovery for concurrent processes with recovery blocks
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1982-01-01
Three different methods of implementing recovery blocks (RBs) are considered: the asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models for analyzing these three methods were developed under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. The interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance in case PRPs are used were estimated.
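Under the standard exponential-failure assumption, the expected rollback distance to the most recent recovery point can be estimated by simulation. The single fixed checkpoint interval and the rate below are illustrative simplifications, not the paper's asynchronous/synchronous/PRP models:

```python
import random

def mean_rollback(rate, interval, n=100000, seed=42):
    """Monte Carlo estimate of the expected rollback distance when
    failure times are exponential(rate) and recovery points are taken
    every `interval` time units: the work lost at a failure is the time
    elapsed since the last recovery point."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += rng.expovariate(rate) % interval
    return total / n

est = mean_rollback(rate=1.0, interval=0.5)
```

For Exp(λ) failures and interval c the closed form is 1/λ - c·e^(-λc)/(1 - e^(-λc)), about 0.229 for λ = 1 and c = 0.5, which the estimate should reproduce.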
1980-06-01
… A two-sample version of the Cramér-von Mises statistic for right-censored data … estimator for exponential distributions. KEY WORDS: Cramér-von Mises distance; Kaplan-Meier estimators; Right censorship; Scale parameter; … suppose that two positive random variables … differ in distribution only by their scale parameters. That is, there exists a positive …
Bayesian exponential random graph modelling of interhospital patient referral networks.
Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro
2017-08-15
Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.
An exponential decay model for mediation.
Fritz, Matthew S
2014-10-01
Mediation analysis is often used to investigate mechanisms of change in prevention research. Results finding mediation are strengthened when longitudinal data are used because of the need for temporal precedence. Current longitudinal mediation models have focused mainly on linear change, but many variables in prevention change nonlinearly across time. The most common solution to nonlinearity is to add a quadratic term to the linear model, but this can lead to the use of the quadratic function to explain all nonlinearity, regardless of theory and the characteristics of the variables in the model. The current study describes the problems that arise when quadratic functions are used to describe all nonlinearity and how the use of nonlinear functions, such as exponential decay, addresses many of these problems. In addition, nonlinear models provide several advantages over polynomial models including usefulness of parameters, parsimony, and generalizability. The effects of using nonlinear functions for mediation analysis are then discussed and a nonlinear growth curve model for mediation is presented. An empirical example using data from a randomized intervention study is then provided to illustrate the estimation and interpretation of the model. Implications, limitations, and future directions are also discussed.
Juras, Vladimir; Apprich, Sebastian; Szomolanyi, Pavol; Bieri, Oliver; Deligianni, Xeni; Trattnig, Siegfried
2013-10-01
To compare mono- and bi-exponential T2 analysis in healthy and degenerated Achilles tendons using a recently introduced magnetic resonance variable-echo-time sequence (vTE) for T2 mapping. Ten volunteers and ten patients were included in the study. A variable-echo-time sequence was used with 20 echo times. Images were post-processed with both techniques, mono-exponential (T2m) and bi-exponential [short T2 component (T2s) and long T2 component (T2l)]. The number of mono- and bi-exponentially decaying pixels in each region of interest was expressed as a ratio (B/M). Patients were clinically assessed with the Achilles Tendon Rupture Score (ATRS), and these values were correlated with the T2 values. The means for both T2m and T2s were statistically significantly different between patients and volunteers; however, for T2s, the P value was lower. In patients, the Pearson correlation coefficient between ATRS and T2s was -0.816 (P = 0.007). The proposed variable-echo-time sequence can be successfully used as an alternative method to UTE sequences with some added benefits, such as a short imaging time along with relatively high resolution, and minimised blurring, susceptibility, and chemical shift artefacts. Bi-exponential T2 calculation is superior to mono-exponential in terms of statistical significance for the diagnosis of Achilles tendinopathy. • Magnetic resonance imaging offers new insight into healthy and diseased Achilles tendons • Bi-exponential T2 calculation in Achilles tendons is more beneficial than mono-exponential • A short T2 component correlates strongly with clinical score • Variable echo time sequences successfully used instead of ultrashort echo time sequences.
Control logic to track the outputs of a command generator or randomly forced target
NASA Technical Reports Server (NTRS)
Trankle, T. L.; Bryson, A. E., Jr.
1977-01-01
A procedure is presented for synthesizing time-invariant control logic to cause the outputs of a linear plant to track the outputs of an unforced (or randomly forced) linear dynamic system. The control logic uses feed-forward of the reference system state variables and feedback of the plant state variables. The feed-forward gains are obtained from the solution of a linear algebraic matrix equation of the Liapunov type. The feedback gains are the usual regulator gains, determined to stabilize (or augment the stability of) the plant, possibly including integral control. The method is applied here to the design of control logic for a second-order servomechanism to follow a linearly increasing (ramp) signal, an unstable third-order system with two controls to track two separate ramp signals, and a sixth-order system with two controls to track a constant signal and an exponentially decreasing signal (aircraft landing-flare or glide-slope-capture with constant velocity).
Randomized central limit theorems: A unified theory.
Eliazar, Iddo; Klafter, Joseph
2010-08-01
The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic-scaling all ensemble components by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic-scaling the ensemble components by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)-in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes-and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
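The extreme-statistics branch under deterministic scaling can be checked numerically with a small sketch (sample sizes and counts are arbitrary choices): the maximum of n unit-rate exponentials, centred by ln n, approaches a Gumbel law whose mean is the Euler-Mascheroni constant, about 0.5772:

```python
import math
import random

def centered_maxima(n, trials, seed=7):
    """Maxima of n independent unit-rate exponentials, centred by log(n).
    Under the deterministic scaling scheme of the classic extreme-value
    CLT these approach a Gumbel law."""
    rng = random.Random(seed)
    return [max(rng.expovariate(1.0) for _ in range(n)) - math.log(n)
            for _ in range(trials)]

vals = centered_maxima(n=100, trials=10000)
```

In the randomized setting studied in the paper, the common deterministic scale ln n would be replaced by a random scale governed by a power-law Poisson process.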
Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel
2012-06-01
We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.
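For the simplest case mentioned above (exponential dwell times in both gene states), the steady-state mean mRNA count can be checked with a short Gillespie-type Monte Carlo simulation; the rate constants are arbitrary illustrative values, and this is the exponential special case, not the paper's generalized dwell-time model:

```python
import random

def telegraph_mrna(k_on, k_off, k_tx, k_deg, t_end=5000.0, seed=3):
    """Gillespie simulation of the two-state (telegraph) gene model: the
    gene switches off->on at rate k_on and on->off at rate k_off,
    transcribes at rate k_tx only while on, and each mRNA degrades
    independently at rate k_deg. Returns the time-averaged mRNA count."""
    rng = random.Random(seed)
    t, active, m, area = 0.0, 0, 0, 0.0
    while t < t_end:
        rates = [0.0 if active else k_on,   # gene turns on
                 k_off if active else 0.0,  # gene turns off
                 k_tx if active else 0.0,   # transcription event
                 k_deg * m]                 # degradation of one mRNA
        total = sum(rates)
        dt = rng.expovariate(total)
        area += m * dt
        t += dt
        r = rng.random() * total
        if r < rates[0]:
            active = 1
        elif r < rates[0] + rates[1]:
            active = 0
        elif r < rates[0] + rates[1] + rates[2]:
            m += 1
        else:
            m -= 1
    return area / t

# with these rates the gene is active half the time, so the mean mRNA
# count should approach (k_on/(k_on+k_off)) * k_tx/k_deg = 5
avg = telegraph_mrna(k_on=1.0, k_off=1.0, k_tx=10.0, k_deg=1.0)
```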
NASA Astrophysics Data System (ADS)
Xu, Feng; Davis, Anthony B.; Diner, David J.
2016-11-01
A Markov chain formalism is developed for computing the transport of polarized radiation according to Generalized Radiative Transfer (GRT) theory, which was developed recently to account for unresolved random fluctuations of scattering particle density and can also be applied to unresolved spectral variability of gaseous absorption as an improvement over the standard correlated-k method. Using a Gamma distribution to describe the probability density function of the extinction or absorption coefficient, a shape parameter a that quantifies the variability is introduced, defined as the mean extinction or absorption coefficient squared divided by its variance. It controls the decay rate of a power-law transmission that replaces the usual exponential Beer-Lambert-Bouguer law. Exponential transmission, hence classic RT, is recovered when a→∞. The new approach is verified to high accuracy against numerical benchmark results obtained with a custom Monte Carlo method. For a<∞, angular reciprocity is violated to a degree that increases with the spatial variability, as observed for finite portions of real-world cloudy scenes. While the degree of linear polarization in liquid water cloudbows, supernumerary bows, and glories is affected by spatial heterogeneity, the positions in scattering angle of these features are relatively unchanged. As a result, a single-scattering model based on the assumption of subpixel homogeneity can still be used to derive droplet size distributions from polarimetric measurements of extended stratocumulus clouds.
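The power-law transmission that replaces Beer-Lambert-Bouguer under a Gamma-distributed extinction coefficient has a simple closed form; a minimal sketch (the function name and test values are illustrative):

```python
import math

def grt_transmission(tau, a):
    """Generalized (power-law) direct transmission for Gamma-distributed
    extinction with shape parameter a: T(tau) = (1 + tau/a)**(-a).
    The classical exponential law exp(-tau) is recovered as a -> infinity."""
    return (1.0 + tau / a) ** (-a)

# finite a (stronger variability) gives a heavier-than-exponential tail
t_classic = math.exp(-5.0)
t_grt = grt_transmission(5.0, 1.0)
```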
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2016-02-01
In this short note, I comment on the research of Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) regarding the extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD) as an asymptotic model for the block maxima of a random variable and the generalized Pareto distribution (GPD) as a model for the peaks over threshold (POT) of the same random variable is presented more clearly. Inappropriately, Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) have neglected to note that the approximations by GEVD and GPD work only asymptotically in most cases. This is particularly the case with the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of the extreme value theory and statistics do not work well for truncated exponential distributions. Consequently, these classical methods should not be used for the estimation of the upper bound magnitude and corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. and propose alternatives. I argue why GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process as assumed by Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014). The crucial point of earthquake magnitudes is the poor convergence of their tail distribution to the GPD, and not the earthquake process over time.
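Sampling from the truncated exponential magnitude model under discussion is straightforward via the inverse CDF; β and the magnitude bounds below are arbitrary illustrative values, not estimates for any catalogue:

```python
import math
import random

def sample_truncated_exponential(beta, m_min, m_max, n, seed=5):
    """Inverse-CDF sampling from the truncated exponential (truncated
    Gutenberg-Richter) magnitude model on [m_min, m_max]. The hard upper
    bound is what slows the tail's convergence to the GPD."""
    rng = random.Random(seed)
    norm = 1.0 - math.exp(-beta * (m_max - m_min))  # truncation normaliser
    return [m_min - math.log(1.0 - rng.random() * norm) / beta
            for _ in range(n)]

mags = sample_truncated_exponential(beta=2.0, m_min=4.0, m_max=8.5, n=10000)
```

Every draw respects the upper bound by construction, which is exactly the feature that an unbounded GPD tail model cannot reproduce except asymptotically.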
Multi-Agent Methods for the Configuration of Random Nanocomputers
NASA Technical Reports Server (NTRS)
Lawson, John W.
2004-01-01
As computational devices continue to shrink, the cost of manufacturing such devices is expected to grow exponentially. One alternative to the costly, detailed design and assembly of conventional computers is to place the nano-electronic components randomly on a chip. The price for such a trivial assembly process is that the resulting chip would not be programmable by conventional means. In this work, we show that such random nanocomputers can be adaptively programmed using multi-agent methods. This is accomplished through the optimization of an associated high dimensional error function. By representing each of the independent variables as a reinforcement learning agent, we are able to achieve convergence much faster than with other methods, including simulated annealing. Standard combinational logic circuits such as adders and multipliers are implemented in a straightforward manner. In addition, we show that the intrinsic flexibility of these adaptive methods allows the random computers to be reconfigured easily, making them reusable. Recovery from faults is also demonstrated.
NASA Astrophysics Data System (ADS)
Zaigham Zia, Q. M.; Ullah, Ikram; Waqas, M.; Alsaedi, A.; Hayat, T.
2018-03-01
This research intends to elaborate Soret-Dufour characteristics in mixed convective radiated Casson liquid flow over an exponentially heated surface. Novel features of an exponential space-dependent heat source are introduced. Appropriate variables are implemented for conversion of the partial differential systems into sets of ordinary differential expressions. A homotopic scheme is employed for construction of analytic solutions. The behavior of various embedded variables on velocity, temperature, and concentration distributions is plotted graphically and analyzed in detail. Besides, skin friction coefficients and heat and mass transfer rates are also computed and interpreted. The results signify the pronounced characteristics of temperature corresponding to convective and radiation variables. Concentration bears an opposite response for Soret and Dufour variables.
Vaurio, Rebecca G; Simmonds, Daniel J; Mostofsky, Stewart H
2009-10-01
One of the most consistent findings in children with ADHD is increased moment-to-moment variability in reaction time (RT). The source of increased RT variability can be examined using ex-Gaussian analyses that divide variability into normal and exponential components, and Fast Fourier transforms (FFT) that allow for detailed examination of the frequency of responses in the exponential distribution. Prior studies of ADHD using these methods have produced variable results, potentially related to differences in task demand. The present study sought to examine the profile of RT variability in ADHD using two Go/No-go tasks with differing levels of cognitive demand. A total of 140 children (57 with ADHD and 83 typically developing controls), ages 8-13 years, completed both a "simple" Go/No-go task and a more "complex" Go/No-go task with increased working memory load. Repeated measures ANOVA of ex-Gaussian functions revealed that, for both tasks, children with ADHD demonstrated increased variability in both the normal/Gaussian (significantly elevated sigma) and the exponential (significantly elevated tau) components. In contrast, FFT analysis of the exponential component revealed a significant task x diagnosis interaction, such that infrequent slow responses in ADHD differed depending on task demand (i.e., for the simple task, increased power in the 0.027-0.074 Hz frequency band; for the complex task, decreased power in the 0.074-0.202 Hz band). The ex-Gaussian findings revealing increased variability in both the normal (sigma) and exponential (tau) components for the ADHD group suggest that both impaired response preparation and infrequent "lapses in attention" contribute to increased variability in ADHD. FFT analyses reveal that the periodicity of intermittent lapses of attention in ADHD varies with task demand. The findings provide further support for intra-individual variability as a candidate intermediate endophenotype of ADHD.
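The ex-Gaussian decomposition used above can be sketched by construction: an RT is modeled as a Gaussian component plus an independent exponential component. The mu, sigma, and tau values below are illustrative, not estimates from the study's Go/No-go data:

```python
import random
import statistics

def sample_ex_gaussian(mu, sigma, tau, n, seed=11):
    """Draw ex-Gaussian reaction times: a Gaussian(mu, sigma) component
    (response preparation) plus an independent exponential component with
    mean tau (infrequent long 'lapses')."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau)
            for _ in range(n)]

# mean of an ex-Gaussian is mu + tau; its variance is sigma**2 + tau**2
rts = sample_ex_gaussian(mu=400.0, sigma=40.0, tau=120.0, n=50000)
```

Elevated sigma inflates the symmetric spread while elevated tau inflates only the slow right tail, which is why the two parameters separate response preparation from attentional lapses.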
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses basic survival model estimation to obtain the average predicted value of lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random time variable model used as the basis is the exponential distribution model, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters, through the construction of the survival function and the empirical cumulative function. The model obtained will then be used to predict the average failure time for the type of lamp. By grouping the data into several intervals with the average value of failure at each interval, the average failure time of the model is calculated based on each interval; the p value obtained from the test result is 0.3296.
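The exponential base model with constant hazard makes the estimation step concrete; a minimal sketch, where the failure times are hypothetical illustrations rather than the paper's lamp data:

```python
import math

def exponential_mean_mle(failure_times):
    """Maximum-likelihood estimate of the mean failure time under the
    exponential base model (constant hazard): simply the sample mean."""
    return sum(failure_times) / len(failure_times)

def survival(t, mean):
    """Fitted exponential survival function S(t) = exp(-t/mean)."""
    return math.exp(-t / mean)

# hypothetical lamp failure times in hours (not the paper's data)
times = [120.0, 340.0, 95.0, 410.0, 230.0, 180.0, 305.0, 150.0]
mean_hat = exponential_mean_mle(times)
```

A composite hazard model would replace the constant hazard with a piecewise or blended form, but the fitting pattern (estimate parameters, then build the survival function) stays the same.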
Base stock system for patient vs impatient customers with varying demand distribution
NASA Astrophysics Data System (ADS)
Fathima, Dowlath; Uduman, P. Sheik
2013-09-01
An optimal base-stock inventory policy for patient and impatient customers using finite-horizon models is examined. The base-stock system for patient and impatient customers is a different type of inventory policy. In model I, the base stock for the patient customer case is evaluated using the truncated exponential distribution. Model II involves the study of base-stock inventory policies for impatient customers. A study of these systems reveals that customers either wait until the arrival of the next order or leave the system, which leads to lost sales. In both models, demand during the period [0, t] is taken to be a random variable. In this paper, the truncated exponential distribution satisfies the base-stock policy for the patient customer as a continuous model. So far, the base stock for impatient customers has led to a discrete case, but in this paper we model this condition as a continuous case. We justify this approach mathematically and also numerically.
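With truncated exponential demand, a base-stock level meeting a target service level follows in closed form from the inverse CDF; the demand rate, truncation point, and service level below are made-up illustrative values, not the paper's policy:

```python
import math

def base_stock_level(rate, upper, service_level):
    """Base-stock level S such that P(demand <= S) = service_level when
    period demand follows a truncated exponential on [0, upper];
    obtained by inverting the truncated CDF
    F(x) = (1 - exp(-rate*x)) / (1 - exp(-rate*upper))."""
    norm = 1.0 - math.exp(-rate * upper)   # truncation normaliser
    return -math.log(1.0 - service_level * norm) / rate

# illustrative demand rate, truncation point, and service target
s = base_stock_level(rate=0.01, upper=500.0, service_level=0.95)
```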
Liu, Hongjian; Wang, Zidong; Shen, Bo; Huang, Tingwen; Alsaadi, Fuad E
2018-06-01
This paper is concerned with the globally exponential stability problem for a class of discrete-time stochastic memristive neural networks (DSMNNs) with both leakage delays as well as probabilistic time-varying delays. For the probabilistic delays, a sequence of Bernoulli distributed random variables is utilized to determine within which intervals the time-varying delays fall at certain time instant. The sector-bounded activation function is considered in the addressed DSMNN. By taking into account the state-dependent characteristics of the network parameters and choosing an appropriate Lyapunov-Krasovskii functional, some sufficient conditions are established under which the underlying DSMNN is globally exponentially stable in the mean square. The derived conditions are made dependent on both the leakage and the probabilistic delays, and are therefore less conservative than the traditional delay-independent criteria. A simulation example is given to show the effectiveness of the proposed stability criterion. Copyright © 2018 Elsevier Ltd. All rights reserved.
Turcott, R G; Lowen, S B; Li, E; Johnson, D H; Tsuchitani, C; Teich, M C
1994-01-01
The behavior of lateral-superior-olive (LSO) auditory neurons over large time scales was investigated. Of particular interest was the determination as to whether LSO neurons exhibit the same type of fractal behavior as that observed in primary VIII-nerve auditory neurons. It has been suggested that this fractal behavior, apparent on long time scales, may play a role in optimally coding natural sounds. We found that a nonfractal model, the nonstationary dead-time-modified Poisson point process (DTMP), describes the LSO firing patterns well for time scales greater than a few tens of milliseconds, a region where the specific details of refractoriness are unimportant. The rate is given by the sum of two decaying exponential functions. The process is completely specified by the initial values and time constants of the two exponentials and by the dead-time relation. Specific measures of the firing patterns investigated were the interspike-interval (ISI) histogram, the Fano-factor time curve (FFC), and the serial count correlation coefficient (SCC) with the number of action potentials in successive counting times serving as the random variable. For all the data sets we examined, the latter portion of the recording was well approximated by a single exponential rate function since the initial exponential portion rapidly decreases to a negligible value. Analytical expressions available for the statistics of a DTMP with a single exponential rate function can therefore be used for this portion of the data. Good agreement was obtained among the analytical results, the computer simulation, and the experimental data on time scales where the details of refractoriness are insignificant.(ABSTRACT TRUNCATED AT 250 WORDS)
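The dead-time-modified Poisson process with a single exponential rate function, as used for the latter portion of the recordings above, can be sketched by thinning; the rate, time constant, and dead time below are hypothetical values for illustration, not the paper's fitted parameters.

```python
import math
import random

def simulate_dtmp(r0, tau, dead_time, t_max, rng):
    """Dead-time-modified Poisson process with a single exponential
    rate r(t) = r0 * exp(-t / tau), simulated by thinning a
    homogeneous Poisson process of rate r0; candidate events falling
    within the dead time after an accepted event are deleted."""
    events, t, last = [], 0.0, -math.inf
    while True:
        t += rng.expovariate(r0)                     # candidate event time
        if t > t_max:
            return events
        accept = rng.random() < math.exp(-t / tau)   # thinning step
        if accept and t - last >= dead_time:
            events.append(t)
            last = t

rng = random.Random(1)
spikes = simulate_dtmp(r0=100.0, tau=5.0, dead_time=0.002, t_max=20.0, rng=rng)
isis = [b - a for a, b in zip(spikes, spikes[1:])]
print(len(spikes) > 0, min(isis) >= 0.002)
```

By construction no interspike interval can be shorter than the dead time, mimicking absolute refractoriness.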
NASA Astrophysics Data System (ADS)
Davis, Anthony B.; Xu, Feng; Diner, David J.
2018-01-01
We demonstrate the computational advantage gained by introducing non-exponential transmission laws into radiative transfer theory for two specific situations. One is the problem of spatial integration over a large domain where the scattering particles cluster randomly in a medium uniformly filled with an absorbing gas, and only a probabilistic description of the variability is available. The increasingly important application here is passive atmospheric profiling using oxygen absorption in the visible/near-IR spectrum. The other scenario is spectral integration over a region where the absorption cross-section of a spatially uniform gas varies rapidly and widely and, moreover, there are scattering particles embedded in the gas that are distributed uniformly, or not. This comes up in many applications, O2 A-band profiling being just one instance. We bring a common framework to solve these problems both efficiently and accurately that is grounded in the recently developed theory of Generalized Radiative Transfer (GRT). In GRT, the classic exponential law of transmission is replaced by one with a slower power-law decay that accounts for the unresolved spectral or spatial variability. Analytical results are derived in the single-scattering limit that applies to optically thin aerosol layers. In spectral integration, a modest gain in accuracy is obtained. As for spatial integration of near-monochromatic radiance, we find that, although both continuum and in-band radiances are affected by moderate levels of sub-pixel variability, only extreme variability will affect in-band/continuum ratios.
Jitter Reduces Response-Time Variability in ADHD: An Ex-Gaussian Analysis.
Lee, Ryan W Y; Jacobson, Lisa A; Pritchard, Alison E; Ryan, Matthew S; Yu, Qilu; Denckla, Martha B; Mostofsky, Stewart; Mahone, E Mark
2015-09-01
"Jitter" involves randomization of intervals between stimulus events. Compared with controls, individuals with ADHD demonstrate greater intrasubject variability (ISV) performing tasks with fixed interstimulus intervals (ISIs). Because Gaussian curves mask the effect of extremely slow or fast response times (RTs), ex-Gaussian approaches have been applied to study ISV. This study applied ex-Gaussian analysis to examine the effects of jitter on RT variability in children with and without ADHD. A total of 75 children, aged 9 to 14 years (44 ADHD, 31 controls), completed a go/no-go test with two conditions: fixed ISI and jittered ISI. ADHD children showed greater variability, driven by elevations in exponential (tau), but not normal (sigma) components of the RT distribution. Jitter decreased tau in ADHD to levels not statistically different from controls, reducing lapses in performance characteristic of impaired response control. Jitter may provide a nonpharmacologic mechanism to facilitate readiness to respond and reduce lapses from sustained (controlled) performance. © 2012 SAGE Publications.
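The ex-Gaussian model above decomposes each response time into a Gaussian component (mu, sigma) and an exponential component (tau) capturing occasional long lapses. A minimal sketch with hypothetical, millisecond-scale parameters:

```python
import random
import statistics

def ex_gaussian_sample(mu, sigma, tau, n, rng):
    """Ex-Gaussian RT model: each simulated response time is the sum
    of a Gaussian N(mu, sigma) and an exponential with mean tau."""
    return [rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau) for _ in range(n)]

rng = random.Random(0)
# Hypothetical parameters (ms): a large tau mimics the heavy ADHD-like tail.
rts = ex_gaussian_sample(mu=400.0, sigma=40.0, tau=120.0, n=50_000, rng=rng)

# Moment identities of the ex-Gaussian: mean = mu + tau,
# variance = sigma^2 + tau^2 (here 520 and 16000, respectively).
print(round(statistics.fmean(rts)), round(statistics.pvariance(rts)))
```

Reducing tau while leaving sigma unchanged lowers the variance without shifting the fast bulk of the distribution, which is exactly the jitter effect the study reports.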
A mathematical model for evolution and SETI.
Maccone, Claudio
2011-12-01
Darwinian evolution theory may be regarded as a part of SETI theory in that the factor f(l) in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we first provide a statistical generalization of the Drake equation where the factor f(l) is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of Statistics, stating that the product of a number of independent random variables, whose probability densities are unknown and independent of each other, approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions (b-lognormals) constrained between the time axis and the exponential growth curve. Finally, since each b-lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian Evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.
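The CLT argument behind the lognormal can be checked numerically: the log of a product of many independent positive factors is a sum of independent terms, hence approximately Gaussian. A sketch using arbitrary uniform factors (the choice of factor distribution is illustrative):

```python
import math
import random
import statistics

def product_of_uniforms(n_factors, rng):
    """Product of independent positive random variables; by the CLT
    applied to the sum of their logs, it tends to a lognormal law
    as the number of factors grows."""
    p = 1.0
    for _ in range(n_factors):
        p *= rng.uniform(0.1, 1.0)
    return p

rng = random.Random(42)
logs = [math.log(product_of_uniforms(100, rng)) for _ in range(20_000)]

# If the product is (approximately) lognormal, its log is close to
# Gaussian: the skewness of the log sample should be near zero.
m, s = statistics.fmean(logs), statistics.pstdev(logs)
skew = statistics.fmean([((x - m) / s) ** 3 for x in logs])
print(abs(skew) < 0.2)
```

With only a handful of factors the skewness of the log remains visible; it shrinks like 1/sqrt(n) as factors are added, which is the CLT at work.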
Modelling Evolution and SETI Mathematically
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2012-05-01
Darwinian evolution theory may be regarded as a part of SETI theory in that the factor fl in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we firstly provide a statistical generalization of the Drake equation where the factor fl is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of Statistics, stating that the product of a number of independent random variables whose probability densities are unknown and independent of each other approached the lognormal distribution when the number of factor increased to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian Evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions constrained between the time axis and the exponential growth curve. Finally, since each lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian Evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.
Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.
Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T
2010-03-10
Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) from multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of a non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.
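A toy bimodal correlated random walk in the spirit of the abstract, with illustrative parameters (not fitted to the MCF-10A data): a persistent "directional" mode with long exponential steps alternating with a short-step, wide-turn "re-orientation" mode.

```python
import math
import random

def bcrw(n_steps, rng, p_switch=0.05):
    """Bimodal correlated random walk: a persistent directional mode
    (long exponential steps, small heading changes) alternating with
    a re-orientation mode (short steps, uniform turns)."""
    x = y = heading = 0.0
    directional = True
    path = [(x, y)]
    for _ in range(n_steps):
        if rng.random() < p_switch:
            directional = not directional
        if directional:
            step = rng.expovariate(1 / 5.0)            # mean step 5
            heading += rng.gauss(0.0, 0.15)            # strong persistence
        else:
            step = rng.expovariate(1 / 1.0)            # mean step 1
            heading += rng.uniform(-math.pi, math.pi)  # re-orientation
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y))
    return path

def msd(paths, lag):
    """Ensemble mean-squared displacement at a given lag."""
    vals = [(p[lag][0] - p[0][0]) ** 2 + (p[lag][1] - p[0][1]) ** 2 for p in paths]
    return sum(vals) / len(vals)

rng = random.Random(7)
paths = [bcrw(64, rng) for _ in range(400)]
# Effective scaling exponent of MSD ~ t^alpha between lags 4 and 32;
# alpha > 1 indicates super-diffusion on these scales.
alpha = math.log(msd(paths, 32) / msd(paths, 4)) / math.log(32 / 4)
print(alpha > 1.0)
```

The correlated directional runs are what push the exponent above the diffusive value of 1 at intermediate time scales, consistent with the super-diffusivity reported for the cell tracks.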
Entropy of spatial network ensembles
NASA Astrophysics Data System (ADS)
Coon, Justin P.; Dettmann, Carl P.; Georgiou, Orestis
2018-04-01
We analyze complexity in spatial network ensembles through the lens of graph entropy. Mathematically, we model a spatial network as a soft random geometric graph, i.e., a graph with two sources of randomness, namely nodes located randomly in space and links formed independently between pairs of nodes with probability given by a specified function (the "pair connection function") of their mutual distance. We consider the general case where randomness arises in node positions as well as pairwise connections (i.e., for a given pair distance, the corresponding edge state is a random variable). Classical random geometric graph and exponential graph models can be recovered in certain limits. We derive a simple bound for the entropy of a spatial network ensemble and calculate the conditional entropy of an ensemble given the node location distribution for hard and soft (probabilistic) pair connection functions. Under this formalism, we derive the connection function that yields maximum entropy under general constraints. Finally, we apply our analytical framework to study two practical examples: ad hoc wireless networks and the US flight network. Through the study of these examples, we illustrate that both exhibit properties that are indicative of nearly maximally entropic ensembles.
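Given node positions, the edges of a soft random geometric graph are independent Bernoulli variables, so the conditional ensemble entropy is simply the sum of binary entropies of the pair connection probabilities. A sketch with two hypothetical connection functions (one soft, one hard unit-disk):

```python
import math
import random

def binary_entropy(p):
    """Entropy (in bits) of a Bernoulli(p) edge variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conditional_graph_entropy(points, connect):
    """Entropy of a soft random geometric graph conditioned on node
    positions: edges are independent Bernoulli variables with success
    probability connect(r) for pair distance r."""
    s, n = 0.0, len(points)
    for i in range(n):
        for j in range(i + 1, n):
            s += binary_entropy(connect(math.dist(points[i], points[j])))
    return s

rng = random.Random(3)
pts = [(rng.random(), rng.random()) for _ in range(30)]
soft = lambda r: math.exp(-2.0 * r)        # soft, exponentially decaying
hard = lambda r: 1.0 if r < 0.3 else 0.0   # hard unit-disk rule
print(conditional_graph_entropy(pts, hard), conditional_graph_entropy(pts, soft) > 0)
```

The hard connection function yields zero conditional entropy (the graph is deterministic once positions are fixed), while any genuinely probabilistic connection function contributes up to one bit per node pair, which is the simple bound mentioned in the abstract.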
Directionality theory and the evolution of body size.
Demetrius, L
2000-12-07
Directionality theory, a dynamic theory of evolution that integrates population genetics with demography, is based on the concept of evolutionary entropy, a measure of the variability in the age of reproducing individuals in a population. The main tenets of the theory are three principles relating the response to the ecological constraints a population experiences, with trends in entropy as the population evolves under mutation and natural selection. (i) Stationary size or fluctuations around a stationary size (bounded growth): a unidirectional increase in entropy; (ii) prolonged episodes of exponential growth (unbounded growth), large population size: a unidirectional decrease in entropy; and (iii) prolonged episodes of exponential growth (unbounded growth), small population size: random, non-directional change in entropy. We invoke these principles, together with an allometric relationship between entropy, and the morphometric variable body size, to provide evolutionary explanations of three empirical patterns pertaining to trends in body size, namely (i) Cope's rule, the tendency towards size increase within phyletic lineages; (ii) the island rule, which pertains to changes in body size that occur as species migrate from mainland populations to colonize island habitats; and (iii) Bergmann's rule, the tendency towards size increase with increasing latitude. The observation that these ecotypic patterns can be explained in terms of the directionality principles for entropy underscores the significance of evolutionary entropy as a unifying concept in forging a link between micro-evolution, the dynamics of gene frequency change, and macro-evolution, dynamic changes in morphometric variables.
Hoeffding Type Inequalities and their Applications in Statistics and Operations Research
NASA Astrophysics Data System (ADS)
Daras, Tryfon
2007-09-01
Large Deviation theory is the branch of Probability theory that deals with rare events. Sometimes, these events can be described by the sum of random variables that deviates from its mean more than a "normal" amount. A precise calculation of the probabilities of such events turns out to be crucial in a variety of different contexts (e.g. in Probability Theory, Statistics, Operations Research, Statistical Physics, Financial Mathematics, etc.). Recent applications of the theory deal with random walks in random environments, interacting diffusions, heat conduction, polymer chains [1]. In this paper we prove an inequality of exponential type, namely theorem 2.1, which gives a large deviation upper bound for a specific sequence of r.v.s. Inequalities of this type have many applications in Combinatorics [2]. The inequality generalizes already proven results of this type, in the case of symmetric probability measures. We get as consequences of the inequality: (a) large deviations upper bounds for exchangeable Bernoulli sequences of random variables, generalizing results proven for independent and identically distributed Bernoulli sequences of r.v.s. and (b) a general form of Bernstein's inequality. We compare the inequality with large deviation results already proven by the author and discuss its advantages. Finally, using the inequality, we solve one of the basic problems of Operations Research (the bin packing problem) in the case of exchangeable r.v.s.
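For i.i.d. Bernoulli(1/2) variables, the classical Hoeffding inequality (the i.i.d. special case that the exchangeable results above generalize) can be checked by simulation:

```python
import math
import random

def hoeffding_bound(n, t):
    """Hoeffding upper bound P(S_n/n - 1/2 >= t) <= exp(-2 n t^2)
    for i.i.d. Bernoulli(1/2) variables bounded in [0, 1]."""
    return math.exp(-2 * n * t * t)

rng = random.Random(0)
n, t, trials = 100, 0.1, 20_000
hits = 0
for _ in range(trials):
    s = sum(rng.random() < 0.5 for _ in range(n))  # Binomial(n, 1/2) draw
    if s / n - 0.5 >= t:
        hits += 1
empirical = hits / trials
print(empirical <= hoeffding_bound(n, t))
```

The exponential decay in n is what makes bounds of this type useful in combinatorics and in average-case analyses such as bin packing; the empirical tail here sits well below the bound, since Hoeffding's inequality is not tight for moderate deviations.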
Rational decisions, random matrices and spin glasses
NASA Astrophysics Data System (ADS)
Galluccio, Stefano; Bouchaud, Jean-Philippe; Potters, Marc
We consider the problem of rational decision making in the presence of nonlinear constraints. By using tools borrowed from spin glass and random matrix theory, we focus on the portfolio optimisation problem. We show that the number of optimal solutions is generally exponentially large, and each of them is fragile: rationality is in this case of limited use. In addition, this problem is related to spin glasses with Lévy-like (long-ranged) couplings, for which we show that the ground state is not exponentially degenerate.
Discrete-time BAM neural networks with variable delays
NASA Astrophysics Data System (ADS)
Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi
2007-07-01
This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.
Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution
NASA Astrophysics Data System (ADS)
Zhao, Chen; Sichitiu, Mihail L.
Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous research on the theoretical analysis of contact time distribution for random walk models (RW) assumes that contact events can be modeled as either consecutive random walks or direct traversals, which are two extreme cases of random walk, and thus reaches two different conclusions. In this paper we conduct comprehensive research on this topic in the hope of bridging the gap between the two extremes. The conclusions from the two extreme cases result in a power-law or exponential tail in the contact time distribution, respectively. However, we show that the actual distribution varies between the two extremes: a power-law-sub-exponential dichotomy, whose transition point depends on the average flight duration. Through simulation results we show that this conclusion also applies to random waypoint.
Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.
Jin, Ick Hoon; Yuan, Ying; Liang, Faming
2013-10-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
Verma, Ram U; Seol, Youngsoo
2016-01-01
First, a new notion of the random exponential Hanson-Antczak type [Formula: see text]-V-invexity is introduced, which generalizes most of the existing notions in the literature; second, a random function [Formula: see text] of the second order is defined; and finally, a class of asymptotically sufficient efficiency conditions in semi-infinite multi-objective fractional programming is established. Furthermore, several sets of asymptotic sufficiency results, in which various generalized exponential type [Formula: see text]-V-invexity assumptions are imposed on certain vector functions whose components are the individual as well as some combinations of the problem functions, are examined and proved. To the best of our knowledge, all the established results on the semi-infinite aspects of multi-objective fractional programming are new, and this is a significantly new and emerging field of interdisciplinary research. We also observe that the investigated results can be modified and applied to several special classes of nonlinear programming problems.
Comparison of several maneuvering target tracking models
NASA Astrophysics Data System (ADS)
McIntyre, Gregory A.; Hintz, Kenneth J.
1998-07-01
The tracking of maneuvering targets is complicated by the fact that acceleration is not directly observable or measurable. Additionally, acceleration can be induced by a variety of sources including human input, autonomous guidance, or atmospheric disturbances. The approaches to tracking maneuvering targets can be divided into two categories both of which assume that the maneuver input command is unknown. One approach is to model the maneuver as a random process. The other approach assumes that the maneuver is not random and that it is either detected or estimated in real time. The random process models generally assume one of two statistical properties, either white noise or an autocorrelated noise. The multiple-model approach is generally used with the white noise model while a zero-mean, exponentially correlated acceleration approach is used with the autocorrelated noise model. The nonrandom approach uses maneuver detection to correct the state estimate or a variable dimension filter to augment the state estimate with an extra state component during a detected maneuver. Another issue with the tracking of maneuvering targets is whether to perform the Kalman filter in Polar or Cartesian coordinates. This paper will examine and compare several exponentially correlated acceleration approaches in both Polar and Cartesian coordinates for accuracy and computational complexity. They include the Singer model in both Polar and Cartesian coordinates, the Singer model in Polar coordinates converted to Cartesian coordinates, Helferty's third order rational approximation of the Singer model and the Bar-Shalom and Fortmann model. This paper shows that these models all provide very accurate position estimates with only minor differences in velocity estimates and compares the computational complexity of the models.
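The zero-mean, exponentially correlated (Singer) acceleration model above is, in discrete time, a first-order autoregressive process. A sketch with hypothetical maneuver time constant and variance:

```python
import math
import random

def singer_acceleration(n, dt, tau, sigma, rng):
    """Zero-mean, exponentially correlated acceleration (Singer model):
    a[k+1] = rho * a[k] + w[k], rho = exp(-dt/tau), with the process
    noise scaled so the stationary variance stays sigma^2."""
    rho = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - rho * rho)
    a = [rng.gauss(0.0, sigma)]        # start in the stationary distribution
    for _ in range(n - 1):
        a.append(rho * a[-1] + rng.gauss(0.0, q))
    return a

rng = random.Random(5)
acc = singer_acceleration(100_000, dt=0.1, tau=2.0, sigma=1.0, rng=rng)
var = sum(x * x for x in acc) / len(acc)
# The lag-1 autocorrelation should be close to exp(-dt/tau) ~ 0.951.
r1 = sum(a * b for a, b in zip(acc, acc[1:])) / (len(acc) - 1) / var
print(round(var, 1), round(r1, 2))
```

In a tracking filter this process would be appended to the position/velocity state, so the Kalman filter estimates acceleration jointly with the kinematic states.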
NASA Astrophysics Data System (ADS)
Mori, Shintaro; Hisakado, Masato
2015-05-01
We propose a finite-size scaling analysis method for binary stochastic processes X(t) in { 0,1} based on the second moment correlation length ξ for the autocorrelation function C(t). The purpose is to clarify the critical properties and provide a new data analysis method for information cascades. As a simple model to represent the different behaviors of subjects in information cascade experiments, we assume that X(t) is a mixture of an independent random variable that takes 1 with probability q and a random variable that depends on the ratio z of the variables taking 1 among recent r variables. We consider two types of the probability f(z) that the latter takes 1: (i) analog [f(z) = z] and (ii) digital [f(z) = θ(z - 1/2)]. We study the universal functions of scaling for ξ and the integrated correlation time τ. For finite r, C(t) decays exponentially as a function of t, and there is only one stable renormalization group (RG) fixed point. In the limit r to ∞ , where X(t) depends on all the previous variables, C(t) in model (i) obeys a power law, and the system becomes scale invariant. In model (ii) with q ≠ 1/2, there are two stable RG fixed points, which correspond to the ordered and disordered phases of the information cascade phase transition with the critical exponents β = 1 and ν|| = 2.
Baker, John [Walnut Creek, CA; Archer, Daniel E [Knoxville, TN; Luke, Stanley John [Pleasanton, CA; Decman, Daniel J [Livermore, CA; White, Gregory K [Livermore, CA
2009-06-23
A tailpulse signal generating/simulating apparatus, system, and method designed to produce electronic pulses which simulate tailpulses produced by a gamma radiation detector, including the pileup effect caused by the characteristic exponential decay of the detector pulses, and the random Poisson distribution pulse timing for radioactive materials. A digital signal processor (DSP) is programmed and configured to produce digital values corresponding to pseudo-randomly selected pulse amplitudes and pseudo-randomly selected Poisson timing intervals of the tailpulses. Pulse amplitude values are exponentially decayed while the digital values are output to a digital-to-analog converter (DAC), and pulse amplitudes of new pulses are added to decaying pulses to simulate the pileup effect for enhanced realism in the simulation.
Design and implementation of the NaI(Tl)/CsI(Na) detectors output signal generator
NASA Astrophysics Data System (ADS)
Zhou, Xu; Liu, Cong-Zhan; Zhao, Jian-Ling; Zhang, Fei; Zhang, Yi-Fei; Li, Zheng-Wei; Zhang, Shuo; Li, Xu-Fang; Lu, Xue-Feng; Xu, Zhen-Ling; Lu, Fang-Jun
2014-02-01
We designed and implemented a signal generator that can simulate the output of the NaI(Tl)/CsI(Na) detectors' pre-amplifiers onboard the Hard X-ray Modulation Telescope (HXMT). Developing the FPGA (Field Programmable Gate Array) logic in the VHDL language and adding a random constituent, we produced a double-exponential random pulse signal generator. The statistical distribution of the signal amplitude is programmable, and the time intervals between adjacent signals statistically follow a negative exponential distribution.
Turbulence hierarchy in a random fibre laser
González, Iván R. Roa; Lima, Bismarck C.; Pincheira, Pablo I. R.; Brum, Arthur A.; Macêdo, Antônio M. S.; Vasconcelos, Giovani L.; de S. Menezes, Leonardo; Raposo, Ernesto P.; Gomes, Anderson S. L.; Kashyap, Raman
2017-01-01
Turbulence is a challenging feature common to a wide range of complex phenomena. Random fibre lasers are a special class of lasers in which the feedback arises from multiple scattering in a one-dimensional disordered cavity-less medium. Here we report on statistical signatures of turbulence in the distribution of intensity fluctuations in a continuous-wave-pumped erbium-based random fibre laser, with random Bragg grating scatterers. The distribution of intensity fluctuations in an extensive data set exhibits three qualitatively distinct behaviours: a Gaussian regime below threshold, a mixture of two distributions with exponentially decaying tails near the threshold and a mixture of distributions with stretched-exponential tails above threshold. All distributions are well described by a hierarchical stochastic model that incorporates Kolmogorov’s theory of turbulence, which includes energy cascade and the intermittence phenomenon. Our findings have implications for explaining the remarkably challenging turbulent behaviour in photonics, using a random fibre laser as the experimental platform. PMID:28561064
On the performance of exponential integrators for problems in magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas; Tokman, Mayya; Loffeld, John
2017-02-01
Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
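The advantage of exponential integrators on stiff problems is easiest to see on a scalar linear test equation, where the exponential step is exact while explicit Euler is unstable; the step size and λ below are illustrative, not from the MHD study:

```python
import math

# Stiff scalar test problem y' = lam * y, y(0) = 1, with lam << 0.
lam, y0, h, steps = -1000.0, 1.0, 0.01, 100

# Explicit Euler: unstable when |1 + h*lam| > 1 (here 1 + h*lam = -9),
# so the iterates blow up geometrically.
y_euler = y0
for _ in range(steps):
    y_euler += h * lam * y_euler

# Exponential integrator: advances with the exact propagator exp(h*lam)
# (the scalar analogue of the matrix exponential), so it is exact for
# linear problems at any step size.
y_expo = y0
phi = math.exp(h * lam)
for _ in range(steps):
    y_expo *= phi

exact = y0 * math.exp(lam * h * steps)
print(abs(y_euler) > 1e6, abs(y_expo - exact) < 1e-12)
```

EPIRK-type methods generalize this idea to large nonlinear systems by applying matrix-exponential-like functions of the Jacobian via Krylov approximations, rather than the closed-form scalar exponential used here.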
Inverse Ising problem in continuous time: A latent variable approach
NASA Astrophysics Data System (ADS)
Donner, Christian; Opper, Manfred
2017-12-01
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
Feasibility of quasi-random band model in evaluating atmospheric radiance
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Mirakhur, N.
1980-01-01
The use of the quasi-random band model in evaluating upwelling atmospheric radiation is investigated. The spectral transmittance and total band absorptance are evaluated for selected molecular bands by using the line-by-line model, quasi-random band model, exponential sum fit method, and empirical correlations, and these are compared with the available experimental results. The atmospheric transmittance and upwelling radiance were calculated by using the line-by-line and quasi-random band models and were compared with the results of an existing program called LOWTRAN. The results obtained by the exponential sum fit and empirical relations were not in good agreement with experimental results and their use cannot be justified for atmospheric studies. The line-by-line model was found to be the best model for atmospheric applications, but it is not practical because of high computational costs. The results of the quasi-random band model compare well with the line-by-line and experimental results. The use of the quasi-random band model is recommended for evaluation of the atmospheric radiation.
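The exponential sum fit method evaluated above represents the band-averaged transmittance as a weighted sum of monochromatic Beer's-law exponentials. A sketch with hypothetical weights and absorption coefficients:

```python
import math

def exp_sum_transmittance(u, weights, ks):
    """Exponential-sum-fit band transmittance: the spectrally averaged
    transmittance over a band is approximated by a weighted sum of
    Beer's-law terms, T(u) = sum_i w_i * exp(-k_i * u)."""
    assert abs(sum(weights) - 1.0) < 1e-9  # weights form a discrete k-distribution
    return sum(w * math.exp(-k * u) for w, k in zip(weights, ks))

# Hypothetical 4-term fit: weights and absorption coefficients (cm^2/g).
weights = [0.4, 0.35, 0.2, 0.05]
ks = [0.01, 0.1, 1.0, 10.0]
# Transmittance decreases monotonically with absorber amount u (g/cm^2).
ts = [exp_sum_transmittance(u, weights, ks) for u in (0.0, 1.0, 10.0)]
print(round(ts[0], 6), ts[1] > ts[2] > 0.0)
```

The appeal of the method is that a handful of exponential terms replaces a costly line-by-line spectral integration, though, as the abstract reports, the accuracy of the fit was not sufficient for the atmospheric cases studied.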
Compact continuous-variable entanglement distillation.
Datta, Animesh; Zhang, Lijian; Nunn, Joshua; Langford, Nathan K; Feito, Alvaro; Plenio, Martin B; Walmsley, Ian A
2012-02-10
We introduce a new scheme for continuous-variable entanglement distillation that requires only linear temporal and constant physical or spatial resources. Distillation is the process by which high-quality entanglement may be distributed between distant nodes of a network in the unavoidable presence of decoherence. The known versions of this protocol scale exponentially in space and doubly exponentially in time. Our optimal scheme therefore provides exponential improvements over existing protocols. It uses a fixed-resource module-an entanglement distillery-comprising only four quantum memories of at most 50% storage efficiency and allowing a feasible experimental implementation. Tangible quantum advantages are obtainable by using existing off-resonant Raman quantum memories outside their conventional role of storage.
Verification of the exponential model of body temperature decrease after death in pigs.
Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyński, Janusz; Penkowski, Michal
2005-09-01
The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four-hour automatic temperature recordings were performed in four body sites starting 1.25 h after pig killing in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 degrees C). The animals had been randomly selected during a regular manufacturing process. The temperature decrease time plots, drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum and muscle tissue, were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The claimed improvement in the precision of time-of-death estimation by the reconstruction of an individual curve on the basis of two dead-body temperature measurements taken 1 h apart, or taken continuously for a longer time (about 4 h), has also been proved incorrect. It was demonstrated that the reported increase in precision of time-of-death estimation due to use of a multiexponential model, with individual exponential terms to account for the cooling rate of the specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites at times shortly after death. A single-exponential model applied to the eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death. For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.
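As an illustration of the single-exponential (Newtonian) cooling law discussed above, the following minimal sketch fits T(t) = T_env + (T0 - T_env)·exp(-k·t) to synthetic temperature readings. All numbers (ambient 21 °C, rate constant 0.25 h⁻¹, noise level) are made-up illustrative values, not data from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def cooling(t, T_env, T0, k):
    """Single-exponential (Newtonian) post-mortem cooling model."""
    return T_env + (T0 - T_env) * np.exp(-k * t)

# synthetic "eyeball temperature" readings (hours, deg C) -- illustrative only
t = np.linspace(1.25, 13.0, 30)
rng = np.random.default_rng(1)
obs = cooling(t, 21.0, 37.0, 0.25) + rng.normal(0.0, 0.05, t.size)

popt, _ = curve_fit(cooling, t, obs, p0=(20.0, 36.0, 0.2))
print(popt)   # recovered (T_env, T0, k)
```

Inverting the fitted curve for the time at which T(t) equals a measured body temperature is then the time-of-death estimate; the study's point is that one exponential term already exhausts what the data support.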
NASA Astrophysics Data System (ADS)
Adame, J.; Warzel, S.
2015-11-01
In this note, we use ideas of Farhi et al. [Int. J. Quantum. Inf. 6, 503 (2008) and Quantum Inf. Comput. 11, 840 (2011)] who link a lower bound on the run time of their quantum adiabatic search algorithm to an upper bound on the energy gap above the ground-state of the generators of this algorithm. We apply these ideas to the quantum random energy model (QREM). Our main result is a simple proof of the conjectured exponential vanishing of the energy gap of the QREM.
Borges, F S; Protachevicz, P R; Lameu, E L; Bonetti, R C; Iarosz, K C; Caldas, I L; Baptista, M S; Batista, A M
2017-06-01
We have studied neuronal synchronisation in a random network of adaptive exponential integrate-and-fire neurons. We analyse how spiking or bursting synchronous behaviour appears as a function of the coupling strength and the probability of connections, by constructing parameter spaces that identify these synchronous behaviours from measurements of the inter-spike interval and the calculation of the order parameter. Moreover, we verify the robustness of synchronisation by applying an external perturbation to each neuron. The simulations show that bursting synchronisation is more robust than spike synchronisation.
Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin
2018-07-01
To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3-T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a bi-exponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measure, 0.770 ± 0.03), significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), Dt (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and Dp showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than other parameters. However, Dp showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than Dp from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.
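The stretched exponential signal model referred to above is commonly written S(b) = S0·exp(-(b·DDC)^α). A minimal fitting sketch (with made-up b values and parameter values, not the study's patient data):

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(b, S0, DDC, alpha):
    """Stretched-exponential DWI signal: S(b) = S0 * exp(-(b*DDC)**alpha)."""
    return S0 * np.exp(-((b * DDC) ** alpha))

b = np.array([0, 25, 50, 100, 200, 400, 600, 800, 1000.0])  # s/mm^2, illustrative
S = stretched_exp(b, 1.0, 1.2e-3, 0.8)      # noiseless synthetic signal

popt, _ = curve_fit(stretched_exp, b, S, p0=(1.0, 1e-3, 0.9))
print(popt)   # recovered (S0, DDC, alpha)
```

With α = 1 the model collapses to the mono-exponential ADC fit, so α directly measures intravoxel heterogeneity.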
Lindley frailty model for a class of compound Poisson processes
NASA Astrophysics Data System (ADS)
Kadilar, Gamze Özel; Ata, Nihal
2013-10-01
The Lindley distribution has gained importance in survival analysis owing to its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model in which misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, discrete frailty distributions are appropriate in some circumstances. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.
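For reference, the Lindley(θ) density is f(x) = θ²/(1+θ)·(1+x)·e^(-θx), which is a mixture of an Exp(θ) component (weight θ/(1+θ)) and a Gamma(2, θ) component; this standard identity gives a one-line sampler. A quick sketch with an arbitrary θ:

```python
import numpy as np

def lindley_pdf(x, theta):
    """Density of the Lindley distribution."""
    return theta**2 / (1.0 + theta) * (1.0 + x) * np.exp(-theta * x)

def lindley_sample(theta, size, rng):
    """Sample Lindley(theta) via its Exp/Gamma mixture representation."""
    u = rng.random(size)
    exp_part = rng.exponential(1.0 / theta, size)
    gamma_part = rng.gamma(2.0, 1.0 / theta, size)
    return np.where(u < theta / (1.0 + theta), exp_part, gamma_part)

rng = np.random.default_rng(2)
theta = 1.5
x = lindley_sample(theta, 200_000, rng)
mean_theory = (theta + 2.0) / (theta * (theta + 1.0))   # Lindley mean
print(x.mean(), mean_theory)
```

The sample mean matching (θ+2)/(θ(θ+1)) is a quick sanity check that the mixture weights are right.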
Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.
Guo, X; Fu, B; Ma, K; Chen, L
2000-08-01
Geostatistics combined with GIS was applied to analyze the spatial variability of soil nutrients in topsoil (0-20 cm) in Zunghua City of Hebei Province. GIS can integrate attribute data with geographical data of system variables, which makes the application of geostatistical techniques at large spatial scales more convenient. Soil nutrient data in this study included available N (alkaline hydrolyzing nitrogen), total N, available K, available P and organic matter. The results showed that the semivariograms of soil nutrients were best described by a spherical model, except for that of available K, which was best fitted by a composite structure of an exponential model and a linear-with-sill model. The spatial variability of available K was mainly produced by structural factors, while that of available N, total N, available P and organic matter was primarily caused by random factors. However, their degrees of spatial heterogeneity differed: those of total N and organic matter were higher, and those of available P and available N were lower. The results also indicated that the spatial correlation of the five tested soil nutrients at this large scale was moderately dependent. The ranges of available N and available P were almost the same, at 5 km and 5.5 km, respectively. The range of total N was up to 18 km, and that of organic matter was 8.5 km. For available K, the spatial variability followed an exponential model between 0 and 3.5 km but a linear-with-sill model between 3.5 and 25.5 km. In addition, the five soil nutrients exhibited different isotropic ranges. Available N and available P were isotropic through the whole research range (0-28 km). The isotropic range of available K was 0-8 km, and that of total N and organic matter was 0-10 km.
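The semivariogram models named above have standard closed forms; a small sketch with illustrative nugget/sill/range values (not the paper's fitted parameters):

```python
import numpy as np

def spherical(h, c0, c, a):
    """Spherical semivariogram: nugget c0, sill c0 + c, range a."""
    h = np.asarray(h, float)
    g = c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, c0 + c)       # flat at the sill beyond the range

def exponential_model(h, c0, c, a):
    """Exponential semivariogram: approaches the sill c0 + c only asymptotically."""
    h = np.asarray(h, float)
    return c0 + c * (1.0 - np.exp(-h / a))

h = np.linspace(0.0, 25.0, 6)               # lag distances in km
print(spherical(h, 0.1, 0.9, 18.0))         # e.g. a ~18 km range, as for total N
print(exponential_model(h, 0.1, 0.9, 3.5))
```

The nugget-to-sill ratio c0/(c0+c) is what quantifies whether random or structural factors dominate the spatial variability.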
Lyapunov exponents for one-dimensional aperiodic photonic bandgap structures
NASA Astrophysics Data System (ADS)
Kissel, Glen J.
2011-10-01
Existing in the "gray area" between perfectly periodic and purely randomized photonic bandgap structures are the so-called aperiodic structures, whose layers are chosen according to some deterministic rule. We consider here a one-dimensional photonic bandgap structure, a quarter-wave stack, with the layer thickness of one of the bilayers subject to being either thin or thick according to five deterministic sequence rules and binary random selection. To produce these aperiodic structures we examine the following sequences: Fibonacci, Thue-Morse, period doubling, Rudin-Shapiro, as well as the triadic Cantor sequence. We model these structures numerically with a long chain (approximately 5,000,000) of transfer matrices, and then use the reliable algorithm of Wolf to calculate the (upper) Lyapunov exponent for the long product of matrices. The Lyapunov exponent is the statistically well-behaved variable used to characterize the Anderson localization effect (exponential confinement) when the layers are randomized, so its calculation allows us to compare the purely randomized structure more precisely with its aperiodic counterparts. It is found that the aperiodic photonic systems show much fine structure in their Lyapunov exponents as a function of frequency, and, in a number of cases, the exponents are quite obviously fractal.
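The norm-renormalization idea behind Wolf's algorithm can be illustrated on a simplified Anderson-type transfer-matrix chain with binary (thin/thick) disorder. This is a schematic stand-in for the paper's quarter-wave stack, with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)

def lyapunov(n_layers, E=0.3, eps_thin=0.0, eps_thick=1.5):
    """Upper Lyapunov exponent of a long product of 2x2 transfer matrices,
    each layer randomly 'thin' or 'thick' (binary disorder). The running vector
    is renormalized every step so the product never overflows, and the
    accumulated log-norms give the exponent (the idea behind Wolf's method)."""
    v = np.array([1.0, 0.0])
    acc = 0.0
    for _ in range(n_layers):
        eps = eps_thin if rng.random() < 0.5 else eps_thick
        M = np.array([[E - eps, -1.0], [1.0, 0.0]])   # Anderson-type layer matrix
        v = M @ v
        nv = np.linalg.norm(v)
        acc += np.log(nv)
        v /= nv
    return acc / n_layers

lam = lyapunov(200_000)
print(lam)   # positive exponent = exponential (Anderson-localized) decay length 1/lam
```

Replacing the random thin/thick choice with a deterministic Fibonacci or Thue-Morse sequence in the same loop is exactly how the aperiodic cases are compared with the randomized one.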
Motor Variability Arises from a Slow Random Walk in Neural State
Chaisanguanthum, Kris S.; Shen, Helen H.
2014-01-01
Even well practiced movements cannot be repeated without variability. This variability is thought to reflect “noise” in movement preparation or execution. However, we show that, for both professional baseball pitchers and macaque monkeys making reaching movements, motor variability can be decomposed into two statistical components, a slowly drifting mean and fast trial-by-trial fluctuations about the mean. The preparatory activity of dorsal premotor cortex/primary motor cortex neurons in monkeys exhibits similar statistics. Although the neural and behavioral drifts appear to be correlated, neural activity does not account for trial-by-trial fluctuations in movement, which must arise elsewhere, likely downstream. The statistics of this drift are well modeled by a double-exponential autocorrelation function, with time constants similar across the neural and behavioral drifts in two monkeys, as well as the drifts observed in baseball pitching. These time constants can be explained by an error-corrective learning process and agree with learning rates measured directly in previous experiments. Together, these results suggest that the central contributions to movement variability are not simply trial-by-trial fluctuations but are rather the result of longer-timescale processes that may arise from motor learning. PMID:25186752
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upper-bounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upper-bounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
Dynamical Localization for Discrete Anderson Dirac Operators
NASA Astrophysics Data System (ADS)
Prado, Roberto A.; de Oliveira, César R.; Carvalho, Silas L.
2017-04-01
We establish dynamical localization for random Dirac operators on the d-dimensional lattice, with d ∈ {1, 2, 3}, in the three usual regimes: large disorder, band edge and 1D. These operators are discrete versions of the continuous Dirac operators and consist of the sum of a discrete free Dirac operator with a random potential. The potential is a diagonal matrix formed by different scalar potentials, which are sequences of independent and identically distributed random variables according to an absolutely continuous probability measure with bounded density and of compact support. We prove the exponential decay of fractional moments of the Green function for such models in each of the above regimes, i.e., (i) throughout the spectrum at large disorder, (ii) for energies near the band edges at arbitrary disorder and (iii) in dimension one, for all energies in the spectrum and arbitrary disorder. Dynamical localization in these regimes follows from the fractional moments method. The result in the one-dimensional regime contrasts with one that was previously obtained for the 1D Dirac model with Bernoulli potential.
NASA Astrophysics Data System (ADS)
Yao, Deyin; Lu, Renquan; Xu, Yong; Ren, Hongru
2017-10-01
In this paper, the sliding mode control problem of Markov jump systems (MJSs) with unmeasured state, partly unknown transition rates and random sensor delays is studied. In practical engineering control, exact information on the transition rates is hard to obtain, and the measurement channel is subject to random sensor delay. A Luenberger observer is designed to estimate the unmeasured system state, and an integral sliding mode surface is constructed to ensure the exponential stability of the MJSs. A sliding mode controller based on the estimator is proposed to drive the system state onto the sliding mode surface and render the sliding mode dynamics exponentially mean-square stable with an H∞ performance index. Finally, simulation results are provided to illustrate the effectiveness of the proposed results.
A General Exponential Framework for Dimensionality Reduction.
Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan
2014-02-01
As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low-dimensional representations from high-dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the size of neighbors; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix, and thus is more robust. The positive definite property of the matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthesized data, UCI data sets, and the Georgia Tech face database show that the proposed new framework can well address the issues mentioned above.
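A toy version of the matrix-exponential idea can be sketched by taking expm of a similarity matrix and embedding with its leading eigenvectors. The data and kernel bandwidth below are arbitrary, and the paper applies expm inside specific algorithms (e.g. to scatter matrices) rather than exactly this construction:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 5))                 # 30 samples, 5 features

# heat-kernel similarity matrix between samples
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())

# expm(W) = sum_k W^k / k!  sums random walks of every length over the
# similarity graph, and is symmetric positive definite -- the property the
# framework uses to sidestep the small-sample-size singularity
E = expm(W)
vals, vecs = np.linalg.eigh(E)
Y = vecs[:, -2:]                             # crude 2-D spectral embedding
print(Y.shape, vals.min() > 0)
```

Because every eigenvalue of expm(W) is e^λ > 0, the matrices stay invertible even when the number of samples is far below the feature dimension.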
Exponential Speedup of Quantum Annealing by Inhomogeneous Driving of the Transverse Field
NASA Astrophysics Data System (ADS)
Susa, Yuki; Yamashiro, Yu; Yamamoto, Masayuki; Nishimori, Hidetoshi
2018-02-01
We show, for quantum annealing, that a certain type of inhomogeneous driving of the transverse field erases first-order quantum phase transitions in the p-body interacting mean-field-type model with and without longitudinal random field. Since a first-order phase transition poses a serious difficulty for quantum annealing (adiabatic quantum computing) due to the exponentially small energy gap, the removal of first-order transitions means an exponential speedup of the annealing process. The present method may serve as a simple protocol for the performance enhancement of quantum annealing, complementary to non-stoquastic Hamiltonians.
Fast self contained exponential random deviate algorithm
NASA Astrophysics Data System (ADS)
Fernández, Julio F.
1997-03-01
An algorithm that generates random numbers with an exponential distribution and is about ten times faster than other well-known algorithms has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the setup that is often used in statistical physics in order to obtain the Gibbs distribution. N numbers, stored in N registers, change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.
Asymptotic Equivalence of Probability Measures and Stochastic Processes
NASA Astrophysics Data System (ADS)
Touchette, Hugo
2018-03-01
Let P_n and Q_n be two probability measures representing two different probabilistic models of some system (e.g., an n-particle equilibrium system, a set of random graphs with n vertices, or a stochastic process evolving over a time n) and let M_n be a random variable representing a "macrostate" or "global observable" of that system. We provide sufficient conditions, based on the Radon-Nikodym derivative of P_n and Q_n, for the set of typical values of M_n obtained relative to P_n to be the same as the set of typical values obtained relative to Q_n in the limit n→ ∞. This extends to general probability measures and stochastic processes the well-known thermodynamic-limit equivalence of the microcanonical and canonical ensembles, related mathematically to the asymptotic equivalence of conditional and exponentially-tilted measures. In this more general sense, two probability measures that are asymptotically equivalent predict the same typical or macroscopic properties of the system they are meant to model.
Min and Max Exponential Extreme Interval Values and Statistics
ERIC Educational Resources Information Center
Jance, Marsha; Thomopoulos, Nick
2009-01-01
The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g_a is defined as a…
A guidance and navigation system for continuous low-thrust vehicles. M.S. Thesis
NASA Technical Reports Server (NTRS)
Jack-Chingtse, C.
1973-01-01
A midcourse guidance and navigation system for continuous low-thrust vehicles was developed. The equinoctial elements are the state variables. Uncertainties are modelled statistically by random vectors and stochastic processes. The motion of the vehicle and the measurements are described by nonlinear stochastic differential and difference equations, respectively. A minimum-time trajectory is defined; equations of motion and measurements are linearized about this trajectory. An exponential cost criterion is constructed and a linear feedback guidance law is derived. An extended Kalman filter is used for state estimation. A short mission using this system is simulated. It is indicated that this system is efficient for short missions, but longer missions require accurate trajectory and ground-based measurements.
NASA Astrophysics Data System (ADS)
Vaninsky, Alexander
2015-04-01
Defining the logarithmic function as a definite integral with a variable upper limit, an approach used by some popular calculus textbooks, is problematic. We discuss the disadvantages of such a definition and provide a way to fix the problem. We also consider a definition-based, rigorous derivation of the derivative of the exponential function that is easier, more intuitive, and complies with the standard definitions of the number e, the logarithmic, and the exponential functions.
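The definition-based derivation alluded to above can be sketched in a few lines; this is a generic outline under the standard definitions, not necessarily the article's exact argument. Taking $e$ to be the number for which $\lim_{h\to 0}(e^h-1)/h = 1$,

```latex
\frac{d}{dx}\,e^{x}
  = \lim_{h\to 0}\frac{e^{x+h}-e^{x}}{h}
  = e^{x}\,\lim_{h\to 0}\frac{e^{h}-1}{h}
  = e^{x}.
```

The factorization in the middle step uses only the law of exponents, so the entire burden of the proof sits in the single limit that defines $e$, rather than in properties of an integral-defined logarithm.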
NASA Astrophysics Data System (ADS)
Pecháček, T.; Goosmann, R. W.; Karas, V.; Czerny, B.; Dovčiak, M.
2013-08-01
Context. We study some general properties of accretion disc variability in the context of stationary random processes. In particular, we are interested in mathematical constraints that can be imposed on the functional form of the Fourier power-spectrum density (PSD) that exhibits a multiply broken shape and several local maxima. Aims: We develop a methodology for determining the regions of the model parameter space that can in principle reproduce a PSD shape with a given number and position of local peaks and breaks of the PSD slope. Given the vast space of possible parameters, it is an important requirement that the method is fast in estimating the PSD shape for a given parameter set of the model. Methods: We generated and discuss the theoretical PSD profiles of a shot-noise-type random process with exponentially decaying flares. Then we determined conditions under which one, two, or more breaks or local maxima occur in the PSD. We calculated positions of these features and determined the changing slope of the model PSD. Furthermore, we considered the influence of the modulation by the orbital motion for a variability pattern assumed to result from an orbiting-spot model. Results: We suggest that our general methodology can be useful for describing non-monotonic PSD profiles (such as the trend seen, on different scales, in exemplary cases of the high-mass X-ray binary Cygnus X-1 and the narrow-line Seyfert galaxy Ark 564). We adopt a model where these power spectra are reproduced as a superposition of several Lorentzians with varying amplitudes in the X-ray-band light curve. Our general approach can help in constraining the model parameters and in determining which parts of the parameter space are accessible under various circumstances.
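For a shot-noise process built from exponentially decaying flares, Campbell's theorem gives a Lorentzian PSD per flare population, and superposing populations with different decay times τ produces the broken, multi-slope shapes discussed above. A small sketch with arbitrary rates and amplitudes:

```python
import numpy as np

def lorentzian_psd(f, rate, amp, tau):
    """PSD of shot noise x(t) = sum_i amp * exp(-(t - t_i)/tau), with t_i a
    Poisson process of the given rate (one-sided Lorentzian, Campbell's theorem)."""
    return 2.0 * rate * (amp * tau) ** 2 / (1.0 + (2.0 * np.pi * f * tau) ** 2)

f = np.logspace(-3, 2, 500)
# superposing two flare populations gives a doubly broken PSD profile
psd = lorentzian_psd(f, 1.0, 1.0, 10.0) + lorentzian_psd(f, 5.0, 0.3, 0.1)

slope = np.gradient(np.log(psd), np.log(f))
print(slope[0], slope[-1])   # flat at low f, approaching -2 at high f
```

Each population contributes a break near f ≈ 1/(2πτ), so the positions of the local slope changes locate the flare decay timescales.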
NASA Technical Reports Server (NTRS)
Leybold, H. A.
1971-01-01
Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
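The listed non-Gaussian load histories are easy to reproduce with a modern random number generator; a minimal sketch (distribution parameters are arbitrary, not the study's), together with a crude local-peak count of the kind used in peak statistics:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# discrete load histories drawn from the non-Gaussian families named above
histories = {
    "poisson":     rng.poisson(4.0, n),
    "binomial":    rng.binomial(20, 0.3, n),
    "log-normal":  rng.lognormal(0.0, 0.5, n),
    "weibull":     2.0 * rng.weibull(1.5, n),   # scale 2, shape 1.5
    "exponential": rng.exponential(1.0, n),
}

# a crude peak count: samples larger than both neighbours
x = histories["exponential"]
peaks = np.sum((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))
print(peaks / (n - 2))   # for an iid continuous sequence this fraction is ~1/3
```

For independent draws the peak fraction is 1/3 regardless of the distribution; it is the distribution of the peak *magnitudes* that differs between the families and drives the comparison with maneuver-load data.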
Discrete-time bidirectional associative memory neural networks with variable delays
NASA Astrophysics Data System (ADS)
Liang, J.; Cao, J.; Ho, D. W. C.
2005-02-01
Based on the linear matrix inequality (LMI) approach, some sufficient conditions are presented in this Letter for the existence, uniqueness and global exponential stability of the equilibrium point of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Some of the stability criteria obtained in this Letter are delay-dependent and some are delay-independent; both are less conservative than the ones reported so far in the literature. Furthermore, the results provide one more set of easily verified criteria for determining the exponential stability of discrete-time BAM neural networks.
A non-Gaussian option pricing model based on Kaniadakis exponential deformation
NASA Astrophysics Data System (ADS)
Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara
2017-09-01
A way to make financial models effective is by letting them represent the so-called "fat tails", i.e., extreme changes in stock prices that are regarded as almost impossible by the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes capable of capturing such real market phenomena.
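The Kaniadakis deformed exponential is exp_κ(x) = (√(1+κ²x²) + κx)^(1/κ), which reduces to the ordinary exponential as κ → 0 while decaying only as a power law in the tail; that power-law tail is what produces the "fat tails". A quick numerical sketch:

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis deformed exponential; reduces to exp(x) as kappa -> 0."""
    if kappa == 0.0:
        return np.exp(x)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

x = np.linspace(-5.0, 5.0, 11)
# the deformation vanishes smoothly: kappa = 1e-6 is numerically close to exp
print(np.max(np.abs(exp_kappa(x, 1e-6) - np.exp(x))))

# fat tail: exp_kappa(-x) decays as a power law, vastly slower than exp(-x)
print(exp_kappa(-50.0, 0.25) / np.exp(-50.0))
```

Using exp_κ(-x) in place of e^(-x) in a noise density therefore leaves the center of the distribution nearly Gaussian-like while assigning non-negligible probability to extreme price moves.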
2012-09-01
used in this paper to compare probability density functions, the Lilliefors test and the Kullback-Leibler distance. The Lilliefors test is a goodness ... of interest in this study are the Rayleigh distribution and the exponential distribution. The Lilliefors test is used to test goodness-of-fit for ... Lilliefors test for goodness-of-fit with an exponential distribution. These results suggest that,
Exact simulation of integrate-and-fire models with exponential currents.
Brette, Romain
2007-10-01
Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
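The closed-form inter-spike evolution that makes event-driven simulation possible can be written down directly for a current-based model. This is a minimal sketch for the single-synaptic-time-constant case (membrane parameters are arbitrary, and the note's contribution, handling many time constants via polynomial root finding, is not shown), cross-checked against brute-force Euler integration:

```python
import numpy as np

def exact_state(t, V0, I0, V_rest=-65.0, tau_m=20.0, tau_s=5.0):
    """Closed-form state of an IF neuron with an exponential synaptic current:
    tau_m dV/dt = -(V - V_rest) + I,   tau_s dI/dt = -I   (no spike in [0, t])."""
    A = I0 * tau_s / (tau_s - tau_m)          # coefficient of the e^{-t/tau_s} mode
    B = V0 - V_rest - A                       # coefficient of the e^{-t/tau_m} mode
    V = V_rest + A * np.exp(-t / tau_s) + B * np.exp(-t / tau_m)
    I = I0 * np.exp(-t / tau_s)
    return V, I

# cross-check the closed form against small-step Euler integration
V, I = -70.0, 8.0
dt, T = 1e-4, 10.0
for _ in range(int(T / dt)):
    V += dt * (-(V + 65.0) + I) / 20.0
    I += dt * (-I / 5.0)
V_exact, I_exact = exact_state(T, -70.0, 8.0)
print(V - V_exact, I - I_exact)   # discretization error only
```

Because the state at any future time is available in closed form, an event-driven simulator only needs to solve for the first crossing of threshold, which is where the polynomial root finding of the note comes in.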
Weighted Scaling in Non-growth Random Networks
NASA Astrophysics Data System (ADS)
Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li
2012-09-01
We propose a weighted model to explain the self-organizing formation of the scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weight of all single-edges within it and the strength of a vertex as the sum of weights of the multiple-edges attached to it. The network evolves according to a vertex strength preferential selection mechanism. During the evolution process, the network always holds its total number of vertices and its total number of single-edges constant. We show analytically and numerically that a network will form steady scale-free distributions with our model. The results show that a weighted non-growth random network can evolve into a scale-free state. Interestingly, the network also acquires an exponential edge-weight distribution, so that a scale-free strength distribution and an exponential edge-weight distribution coexist.
NASA Astrophysics Data System (ADS)
Sposini, Vittoria; Chechkin, Aleksei V.; Seno, Flavio; Pagnini, Gianni; Metzler, Ralf
2018-04-01
A considerable number of systems have recently been reported in which Brownian yet non-Gaussian dynamics was observed. These are processes characterised by a linear growth in time of the mean squared displacement, yet the probability density function of the particle displacement is distinctly non-Gaussian, and often of exponential (Laplace) shape. This apparently ubiquitous behaviour observed in very different physical systems has been interpreted as resulting from diffusion in inhomogeneous environments and mathematically represented through a variable, stochastic diffusion coefficient. Indeed different models describing a fluctuating diffusivity have been studied. Here we present a new view of the stochastic basis describing time-dependent random diffusivities within a broad spectrum of distributions. Concretely, our study is based on the very generic class of the generalised Gamma distribution. Two models for the particle spreading in such random diffusivity settings are studied. The first belongs to the class of generalised grey Brownian motion while the second follows from the idea of diffusing diffusivities. The two processes exhibit significant characteristics which reproduce experimental results from different biological and physical systems. We promote these two physical models for the description of stochastic particle motion in complex environments.
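The simplest superstatistical version of "Brownian yet non-Gaussian" dynamics draws one diffusivity per particle; with an exponentially distributed D (a member of the generalised Gamma family used in the paper) the displacement PDF is exactly Laplace while the MSD stays linear. A short sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
n, t, D_mean = 500_000, 1.0, 1.0

# superstatistical displacement: Gaussian conditional on a random diffusivity
D = rng.exponential(D_mean, n)
x = rng.normal(0.0, np.sqrt(2.0 * D * t), n)

# the MSD is still Brownian: <x^2> = 2 <D> t ...
print(np.mean(x**2), 2.0 * D_mean * t)
# ... but the displacement PDF is Laplace: excess kurtosis 3 instead of 0
kurt = np.mean(x**4) / np.mean(x**2) ** 2 - 3.0
print(kurt)
```

Letting D fluctuate in time rather than per particle turns this static mixture into the "diffusing diffusivities" picture, which restores Gaussianity at long times.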
Numerical investigation of MHD flow with Soret and Dufour effect
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Nasir, Tehreem; Khan, Muhammad Ijaz; Alsaedi, Ahmed
2018-03-01
This paper describes the flow due to an exponentially curved surface subject to Soret and Dufour effects. Nonlinear velocity is considered. An exponentially curved stretchable sheet induces the flow. The fluid is electrically conducting under a constant applied magnetic field. The governing flow expressions are reduced to ordinary differential equations and then tackled by a numerical technique (built-in shooting). Impacts of various flow variables on the dimensionless velocity, concentration and temperature fields are graphically presented and discussed in detail. The skin friction coefficient and the Sherwood and Nusselt numbers are studied through graphs. Furthermore, it is observed that the Soret and Dufour variables regulate the heat and mass transfer rates. It is also noteworthy that velocity decays for a higher magnetic variable. The skin friction magnitude decays with the curvature and magnetic variables. Also, the mass transfer gradient, or rate of mass transport, is enhanced for higher estimations of the curvature parameter and Schmidt number.
Intermittent Lagrangian velocities and accelerations in three-dimensional porous medium flow.
Holzner, M; Morales, V L; Willmann, M; Dentz, M
2015-07-01
Intermittency of Lagrangian velocity and acceleration is a key to understanding transport in complex systems ranging from fluid turbulence to flow in porous media. High-resolution optical particle tracking in a three-dimensional (3D) porous medium provides detailed 3D information on Lagrangian velocities and accelerations. We find sharp transitions close to pore throats, and low flow variability in the pore bodies, which gives rise to stretched exponential Lagrangian velocity and acceleration distributions characterized by a sharp peak at low velocity, superlinear evolution of particle dispersion, and double-peak behavior in the propagators. The velocity distribution is quantified in terms of pore geometry and flow connectivity, which forms the basis for a continuous-time random-walk model that sheds light on the observed Lagrangian flow and transport behaviors.
Maji, Kaushik; Kouri, Donald J
2011-03-28
We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel method to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed channel solutions. The approach is readily parallelized to achieve approximate N² scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom, these are notoriously ill-behaved, due to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open channel components, as well as exponentially decaying closed channel components.
Estimating piecewise exponential frailty model with changing prior for baseline hazard function
NASA Astrophysics Data System (ADS)
Thamrin, Sri Astuti; Lawi, Armin
2016-02-01
Piecewise exponential models provide a very flexible framework for modelling univariate survival data. They can be used to estimate the effects of different covariates which influence the survival data. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of a parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, not all such variables are known or measurable, and these unobserved variables become interesting to consider. This unknown and unobservable risk factor of the hazard function is often termed the individual's heterogeneity or frailty. This paper analyses the effects of unobserved population heterogeneity in patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov Chain Monte Carlo method to compute the Bayesian estimator on kidney infection data. The results obtained show that sex and frailty are substantially associated with survival in this study and that the models are quite sensitive to the choice between the two different priors.
Universality of accelerating change
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Shlesinger, Michael F.
2018-03-01
On large time scales the progress of human technology follows an exponential growth trend that is termed accelerating change. The exponential growth trend is commonly considered to be the amalgamated effect of consecutive technology revolutions - where the progress carried in by each technology revolution follows an S-curve, and where the aging of each technology revolution drives humanity to push for the next technology revolution. Thus, as a collective, mankind is the 'intelligent designer' of accelerating change. In this paper we establish that the exponential growth trend - and only this trend - emerges universally, on large time scales, from systems that combine together two elements: randomness and amalgamation. Hence, the universal generation of accelerating change can be attained by systems with no 'intelligent designer'.
Intervention-Based Stochastic Disease Eradication
NASA Astrophysics Data System (ADS)
Billings, Lora; Mier-Y-Teran-Romero, Luis; Lindley, Brandon; Schwartz, Ira
2013-03-01
Disease control is of paramount importance in public health with infectious disease extinction as the ultimate goal. Intervention controls, such as vaccination of susceptible individuals and/or treatment of infectives, are typically based on a deterministic schedule, such as periodically vaccinating susceptible children based on school calendars. In reality, however, such policies are administered as a random process, while still possessing a mean period. Here, we consider the effect of randomly distributed intervention as disease control on large finite populations. We show explicitly how intervention control, based on mean period and treatment fraction, modulates the average extinction times as a function of population size and the speed of infection. In particular, our results show an exponential improvement in extinction times even though the controls are implemented using a random Poisson distribution. Finally, we discover those parameter regimes where random treatment yields an exponential improvement in extinction times over the application of strictly periodic intervention. The implication of our results is discussed in light of the availability of limited resources for control. Supported by the National Institute of General Medical Sciences Award No. R01GM090204
Polynomials with Restricted Coefficients and Their Applications
1987-01-01
sums of exponentials of quadratics, he reduced such sums to exponentials of linears (geometric sums!) by simply multiplying by their conjugates...n, the same algebraic manipulations as before lead to an expression involving terms with arguments a+(2r+1)t and A = a+(2r+2m+1)t. To estimate the right...coefficients. These random polynomials represent the deviation in frequency response of a linear, equispaced antenna array caused by coefficient
Rapid growth of seed black holes in the early universe by supra-exponential accretion.
Alexander, Tal; Natarajan, Priyamvada
2014-09-12
Mass accretion by black holes (BHs) is typically capped at the Eddington rate, when radiation's push balances gravity's pull. However, even exponential growth at the Eddington-limited e-folding time t(E) ~ few × 0.01 billion years is too slow to grow stellar-mass BH seeds into the supermassive luminous quasars that are observed when the universe is 1 billion years old. We propose a dynamical mechanism that can trigger supra-exponential accretion in the early universe, when a BH seed is bound in a star cluster fed by the ubiquitous dense cold gas flows. The high gas opacity traps the accretion radiation, while the low-mass BH's random motions suppress the formation of a slowly draining accretion disk. Supra-exponential growth can thus explain the puzzling emergence of supermassive BHs that power luminous quasars so soon after the Big Bang. Copyright © 2014, American Association for the Advancement of Science.
NASA Astrophysics Data System (ADS)
Sonam, Sonam; Jain, Vikrant
2017-04-01
River long profile is one of the fundamental geomorphic parameters and provides a platform to study the interaction of geological and geomorphic processes at different time scales. Long profile shape is governed by geological processes at the 10^5-10^6 year time scale, and it controls modern day (10^0-10^1 year time scale) fluvial processes through the spatial variability of channel slope. Identification of an appropriate model for the river long profile may provide a tool to analyse the quantitative relationship between basin geology, profile shape and its geomorphic effectiveness. A systematic analysis of long profiles has been carried out for the Himalayan tributaries of the Ganga River basin. Long profile shape and the stream power distribution pattern are derived using SRTM DEM data (90 m spatial resolution). Peak discharge data from 34 stations are used for hydrological analysis. Lithological variability and major thrusts are marked along the river long profile. The best fit of the long profile is analysed for power, logarithmic and exponential functions. A second order exponential function provides the best representation of long profiles. The second order exponential equation is Z = K1*exp(-β1*L) + K2*exp(-β2*L), where Z is the elevation of the channel long profile, L is the length, and K and β are coefficients of the exponential function. K1 and K2 are the proportions of elevation change of the long profile represented by the β1 (fast) and β2 (slow) decay coefficients of the river long profile. Different values of the coefficients express the variability in long profile shapes and are related to the litho-tectonic variability of the study area. The channel slope of the long profile is estimated by taking the derivative of the exponential function. The stream power distribution pattern along the long profile is estimated by superimposing the discharge on the long profile slope.
A sensitivity analysis of the stream power distribution with respect to the decay coefficients of the second order exponential equation is evaluated for a range of coefficient values. Our analysis suggests that the amplitude of the stream power peak depends on K1, the proportion of elevation change under the fast decay exponent, while the location of the stream power peak depends on the long profile decay coefficient β1. Different long profile shapes, owing to litho-tectonic variability across the Himalayas, are responsible for the spatial variability of the stream power distribution pattern. Most of the stream power peaks lie in the Higher Himalaya. In general, eastern rivers have higher stream power in the hinterland area and low stream power in the alluvial plains. This is responsible for: 1) a higher erosion rate and sediment supply in the hinterland of eastern rivers, 2) the incised and stable nature of channels in the western alluvial plains, and 3) aggrading channels of a dynamic nature in the eastern alluvial plains. Our study shows that the spatial variability of litho-units defines the coefficients of the long profile function, which in turn controls the position and magnitude of the stream power maxima and hence the geomorphic variability in a fluvial system.
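The second order exponential profile and the resulting stream power pattern can be sketched directly; the coefficients and the linear discharge proxy below are hypothetical, chosen only to show how the peak location follows from the fast-decay term:

```python
import math

# hypothetical profile coefficients for Z = K1*exp(-b1*L) + K2*exp(-b2*L)
K1, b1 = 2500.0, 0.030   # fast-decay component (elevation in m, L in km)
K2, b2 = 1500.0, 0.002   # slow-decay component

def elevation(L):
    return K1 * math.exp(-b1 * L) + K2 * math.exp(-b2 * L)

def channel_slope(L):
    # derivative of the profile, sign-flipped so slope is positive
    return K1 * b1 * math.exp(-b1 * L) + K2 * b2 * math.exp(-b2 * L)

def stream_power(L, Q):
    # total stream power per unit length: Omega = rho * g * Q * S
    rho, g = 1000.0, 9.81
    return rho * g * Q * channel_slope(L)

# discharge grows downstream; a crude linear proxy for drainage area
profile = [(L, stream_power(L, Q=5.0 * L)) for L in range(1, 401)]
peak_km = max(profile, key=lambda p: p[1])[0]
print(peak_km)  # the peak sits where the fast decay term trades off against Q
```

Raising β1 in this sketch pulls the stream power maximum upstream, which is the sensitivity the abstract describes.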
Two-key concurrent responding: response-reinforcement dependencies and blackouts
Herbert, Emily W.
1970-01-01
Two-key concurrent responding was maintained for three pigeons by a single variable-interval 1-minute schedule of reinforcement in conjunction with a random number generator that assigned feeder operations between keys with equal probability. The duration of blackouts was varied between keys when each response initiated a blackout, and grain arranged by the variable-interval schedule was automatically presented after a blackout (Exp. I). In Exp. II every key peck, except for those that produced grain, initiated a blackout, and grain was dependent upon a response following a blackout. For each pigeon in Exp. I and for one pigeon in Exp. II, the relative frequency of responding on a key approximated, i.e., matched, the relative reciprocal of the duration of the blackout interval on that key. In a third experiment, blackouts scheduled on a variable-interval schedule were of equal duration on the two keys. For one key, grain automatically followed each blackout; for the other key, grain was dependent upon a response and never followed a blackout. The relative frequency of responding on the former key, i.e., the delay key, better approximated the negative exponential function obtained by Chung (1965) than the matching function predicted by Chung and Herrnstein (1967). PMID:16811458
Cade, W Todd; Nabar, Sharmila R; Keyser, Randall E
2004-05-01
The purpose of this study was to determine the reproducibility of the indirect Fick method for the measurement of mixed venous carbon dioxide partial pressure (P(v)CO(2)) and venous carbon dioxide content (C(v)CO(2)) for estimation of cardiac output (Q(c)), using the exponential rise method of carbon dioxide rebreathing, during non-steady-state treadmill exercise. Ten healthy participants (eight female and two male) performed three incremental, maximal exercise treadmill tests to exhaustion within 1 week. Non-invasive Q(c) measurements were evaluated at rest, during each 3-min stage, and at peak exercise, across three identical treadmill tests, using the exponential rise technique for measuring mixed venous PCO(2) and CCO(2) and estimating the venous-arterial carbon dioxide content difference (C(v-a)CO(2)). Measurements were divided into measured or estimated variables [heart rate (HR), oxygen consumption (VO(2)), volume of expired carbon dioxide (VCO(2)), end-tidal carbon dioxide (P(ET)CO(2)), arterial carbon dioxide partial pressure (P(a)CO(2)), venous carbon dioxide partial pressure (P(v)CO(2)), and C(v-a)CO(2)] and cardiorespiratory variables derived from the measured variables [Q(c), stroke volume (V(s)), and arteriovenous oxygen difference (C(a-v)O(2))]. In general, the derived cardiorespiratory variables demonstrated acceptable (R=0.61) to high (R>0.80) reproducibility, especially at higher intensities and peak exercise. Measured variables, excluding P(a)CO(2) and C(v-a)CO(2), also demonstrated acceptable (R=0.6 to 0.79) to high reliability. The current study demonstrated acceptable to high reproducibility of the exponential rise indirect Fick method in measurement of mixed venous PCO(2) and CCO(2) for estimation of Q(c) during incremental treadmill exercise testing, especially at high-intensity and peak exercise.
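The indirect Fick computation behind these estimates reduces to Qc = VCO2 / C(v-a)CO2; a minimal sketch with illustrative resting values (not data from the study):

```python
def cardiac_output(vco2, cv_co2, ca_co2):
    """Indirect Fick: Qc = VCO2 / (CvCO2 - CaCO2).
    vco2 in L/min; CO2 contents in L of CO2 per L of blood."""
    return vco2 / (cv_co2 - ca_co2)

# illustrative resting values, not measurements from the study
qc = cardiac_output(vco2=0.30, cv_co2=0.54, ca_co2=0.48)
print(round(qc, 1))  # 5.0 L/min
```

The rebreathing procedure exists to supply the venous-side CO2 values that this quotient needs non-invasively.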
Spatial analysis of soil organic carbon in Zhifanggou catchment of the Loess Plateau.
Li, Mingming; Zhang, Xingchang; Zhen, Qing; Han, Fengpeng
2013-01-01
Soil organic carbon (SOC) reflects soil quality and plays a critical role in soil protection, food safety, and global climate change. This study involved grid sampling at different depths (6 layers) between 0 and 100 cm in a catchment. A total of 1282 soil samples were collected from 215 plots over 8.27 km(2). A combination of conventional analytical methods and geostatistical methods was used to analyze the data for spatial variability and soil carbon content patterns. The mean SOC content in the 1282 samples from the study field was 3.08 g · kg(-1). The SOC content of each layer decreased with increasing soil depth following a power function relationship. The SOC content of each layer was moderately variable and followed a lognormal distribution. The semi-variograms of the SOC contents of the six layers were fit with the following models: exponential, spherical, exponential, Gaussian, exponential, and exponential, respectively. A moderate spatial dependence was observed in the 0-10 and 10-20 cm layers, which resulted from stochastic and structural factors. The spatial distribution of SOC content in the four layers between 20 and 100 cm was mainly restricted by structural factors. Correlations within each layer were observed between 234 and 562 m. A classical Kriging interpolation was used to directly visualize the spatial distribution of SOC in the catchment. The variability in spatial distribution was related to topography, land use type, and human activity. Finally, SOC content decreased with depth. Our results suggest that ordinary Kriging interpolation can directly reveal the spatial distribution of SOC and that the sampling distance used in this study is sufficient for interpolation and plotting. More research is needed, however, to clarify the spatial variability at larger scales and to better understand the factors controlling the spatial variability of soil carbon in the Loess Plateau region.
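An exponential semivariogram model of the kind fitted above can be sketched as follows; the nugget, sill, and range values are hypothetical, chosen only to show the shape:

```python
import math

def exp_semivariogram(h, nugget, sill, a):
    # gamma(h) = c0 + (c - c0) * (1 - exp(-3h/a)), with practical range a;
    # gamma(0) = 0 by definition, and the nugget c0 is the jump at h -> 0+
    if h == 0:
        return 0.0
    return nugget + (sill - nugget) * (1.0 - math.exp(-3.0 * h / a))

# hypothetical parameters for one soil layer (lag h and range a in metres)
gam = [exp_semivariogram(h, nugget=0.2, sill=1.0, a=400.0)
       for h in (0, 100, 400, 2000)]
print([round(v, 3) for v in gam])  # rises from the nugget toward the sill
```

The nugget-to-sill ratio of such a fitted model is what classifies each layer's spatial dependence as strong, moderate, or weak.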
Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S
2016-11-01
For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample size subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family of distributions. The proposed optimal design minimizes the total sample size needed to provide estimates of the population means of both arms and their difference with pre-specified precision. Its applications to data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.
Cell Division and Evolution of Biological Tissues
NASA Astrophysics Data System (ADS)
Rivier, Nicolas; Arcenegui-Siemens, Xavier; Schliecker, Gudrun
A tissue is a geometrical, space-filling, random cellular network; it remains in this steady state while individual cells divide. Cell division (fragmentation) is a local, elementary topological transformation which establishes statistical equilibrium of the structure. Statistical equilibrium is characterized by observable relations (Lewis, Aboav) between cell shapes, sizes and those of their neighbours, obtained through maximum entropy and topological correlation extending to nearest neighbours only, i.e. maximal randomness. For a two-dimensional tissue (epithelium), the distribution of cell shapes and that of mother and daughter cells can be obtained from elementary geometrical and physical arguments, except for an exponential factor favouring division of larger cells, and exponential and combinatorial factors encouraging a most symmetric division. The resulting distributions are very narrow, and stationarity severely restricts the range of an adjustable structural parameter
Transfer potentials shape and equilibrate monetary systems
NASA Astrophysics Data System (ADS)
Fischer, Robert; Braun, Dieter
2003-04-01
We analyze a monetary system of random money transfer on the basis of double entry bookkeeping. Without boundary conditions, we do not reach a price equilibrium and violate the textbook formula of the economists' quantity theory (MV = PQ). To match the resulting quantity of money with the model assumption of a constant price, we have to impose boundary conditions. They either restrict specific transfers globally or impose transfers locally. Both connect through a general framework of transfer potentials. We show that either restricted or imposed transfers can shape Gaussian, tent-shaped exponential, Boltzmann-exponential, Pareto or periodic equilibrium distributions. We derive the master equation and find its general time-dependent approximate solution. An equivalent of quantity theory for random money transfer under the boundary conditions of transfer potentials is given.
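The random-transfer dynamics with a non-negativity boundary condition can be illustrated with a toy simulation; the agent count and step budget are arbitrary choices, and the stationary state it relaxes toward is the Boltzmann-exponential shape mentioned above:

```python
import random

random.seed(1)
N, M = 1000, 1000.0           # agents and total (conserved) money
w = [M / N] * N               # everyone starts at the mean

# random pairwise transfers of one unit, with the boundary condition
# that a balance may not go negative (a globally restricted transfer)
for _ in range(200_000):
    i, j = random.randrange(N), random.randrange(N)
    if w[i] >= 1.0:
        w[i] -= 1.0
        w[j] += 1.0

frac_below_mean = sum(x < M / N for x in w) / N
print(frac_below_mean)  # Boltzmann-exponential equilibrium: roughly half at zero
```

Double entry bookkeeping is reflected in the paired debit/credit of each transfer, which is why the total is conserved exactly.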
The Supermarket Model with Bounded Queue Lengths in Equilibrium
NASA Astrophysics Data System (ADS)
Brightwell, Graham; Fairthorne, Marianne; Luczak, Malwina J.
2018-04-01
In the supermarket model, there are n queues, each with a single server. Customers arrive in a Poisson process with arrival rate λn, where λ = λ(n) ∈ (0,1). Upon arrival, a customer selects d = d(n) servers uniformly at random, and joins the queue of a least-loaded server amongst those chosen. Service times are independent exponentially distributed random variables with mean 1. In this paper, we analyse the behaviour of the supermarket model in the regime where λ(n) = 1 − n^(−α) and d(n) = ⌊n^β⌋, where α and β are fixed numbers in (0, 1]. For suitable pairs (α, β), our results imply that, in equilibrium, with probability tending to 1 as n → ∞, the proportion of queues with length equal to k = ⌈α/β⌉ is at least 1 − 2n^(−α+(k−1)β), and there are no longer queues. We further show that the process is rapidly mixing when started in a good state, and give bounds on the speed of mixing for more general initial conditions.
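The equilibrium statement can be turned into numbers directly; a minimal sketch of k = ⌈α/β⌉ and the stated lower bound (the sample values of n, α, β are arbitrary):

```python
import math

def max_queue_length(alpha, beta):
    # in equilibrium, almost every queue has length exactly ceil(alpha/beta)
    return math.ceil(alpha / beta)

def fraction_lower_bound(n, alpha, beta):
    # stated bound: the proportion of queues of length k is
    # at least 1 - 2 * n^(-alpha + (k - 1) * beta)
    k = max_queue_length(alpha, beta)
    return 1.0 - 2.0 * n ** (-alpha + (k - 1) * beta)

n, alpha, beta = 10**6, 0.5, 0.25
print(max_queue_length(alpha, beta), fraction_lower_bound(n, alpha, beta))
```

For these sample values k = 2 and the bound is 1 − 2n^(−1/4), so virtually all queues have length exactly 2 once n is large.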
Roosting habitat use and selection by northern spotted owls during natal dispersal
Sovern, Stan G.; Forsman, Eric D.; Dugger, Catherine M.; Taylor, Margaret
2015-01-01
We studied habitat selection by northern spotted owls (Strix occidentalis caurina) during natal dispersal in Washington State, USA, at both the roost site and landscape scales. We used logistic regression to obtain parameters for an exponential resource selection function based on vegetation attributes in roost and random plots in 76 forest stands that were used for roosting. We used a similar analysis to evaluate selection of landscape habitat attributes based on 301 radio-telemetry relocations and random points within our study area. We found no evidence of within-stand selection for any of the variables examined, but 78% of roosts were in stands with at least some large (>50 cm dbh) trees. At the landscape scale, owls selected for stands with high canopy cover (>70%). Dispersing owls selected vegetation types that were more similar to habitat selected by adult owls than habitat that would result from following guidelines previously proposed to maintain dispersal habitat. Our analysis indicates that juvenile owls select stands for roosting that have greater canopy cover than is recommended in current agency guidelines.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
Probability Distributions for Random Quantum Operations
NASA Astrophysics Data System (ADS)
Schultz, Kevin
Motivated by uncertainty quantification and inference of quantum information systems, in this work we draw connections between the notions of random quantum states and operations in quantum information with probability distributions commonly encountered in the field of orientation statistics. This approach identifies natural sample spaces and probability distributions upon these spaces that can be used in the analysis, simulation, and inference of quantum information systems. The theory of exponential families on Stiefel manifolds provides the appropriate generalization to the classical case. Furthermore, this viewpoint motivates a number of additional questions into the convex geometry of quantum operations relative to both the differential geometry of Stiefel manifolds as well as the information geometry of exponential families defined upon them. In particular, we draw on results from convex geometry to characterize which quantum operations can be represented as the average of a random quantum operation. This project was supported by the Intelligence Advanced Research Projects Activity via Department of Interior National Business Center Contract Number 2012-12050800010.
Gonzalez-Vazquez, J P; Anta, Juan A; Bisquert, Juan
2009-11-28
The random walk numerical simulation (RWNS) method is used to compute diffusion coefficients for hopping transport in a fully disordered medium at finite carrier concentrations. We use Miller-Abrahams jumping rates and an exponential distribution of energies to compute the hopping times in the random walk simulation. The computed diffusion coefficient shows an exponential dependence on the Fermi level and Arrhenius behavior with respect to temperature. This result indicates that there is a well-defined transport level implicit in the system dynamics. To establish the origin of this transport level we construct histograms to monitor the energies of the most visited sites. In addition, we construct "corrected" histograms in which backward moves are removed. Since these moves do not contribute to transport, these histograms provide a better estimation of the effective transport level energy. The analysis of this concept in connection with the Fermi-level dependence of the diffusion coefficient and the regime of interest for the functioning of dye-sensitised solar cells is thoroughly discussed.
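The Miller-Abrahams rates used in such simulations have a simple closed form; a sketch, with the attempt frequency ν0, localization length, and kT chosen arbitrarily:

```python
import math

def miller_abrahams(r, dE, nu0=1.0, loc=1.0, kT=0.025):
    """Hopping rate over distance r (in units of the localization
    length loc) with energy difference dE = E_target - E_start (eV)."""
    tunnelling = math.exp(-2.0 * r / loc)
    if dE > 0.0:                          # upward hops are thermally activated
        return nu0 * tunnelling * math.exp(-dE / kT)
    return nu0 * tunnelling               # downward hops carry no energy penalty

down = miller_abrahams(r=1.0, dE=-0.1)
up = miller_abrahams(r=1.0, dE=+0.1)
print(up / down)  # Boltzmann factor exp(-0.1 / 0.025) = exp(-4)
```

The asymmetry between upward and downward hops is what makes the most visited sites cluster around an effective transport level.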
Large deviations and mixing for dissipative PDEs with unbounded random kicks
NASA Astrophysics Data System (ADS)
Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.
2018-02-01
We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer’s criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroup, and a coupling argument. These tools combined together constitute a new approach to LDP for infinite-dimensional processes without strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.
Kumar, Sanjeev; Karmeshu
2018-04-01
A theoretical investigation is presented that characterizes the emerging sub-threshold membrane potential and inter-spike interval (ISI) distributions of an ensemble of IF neurons that group together and fire together. The squared-noise intensity σ² of the ensemble of neurons is treated as a random variable to account for the electrophysiological variations across a population of nearly identical neurons. Employing a superstatistical framework, both the ISI distribution and the sub-threshold membrane potential distribution of the neuronal ensemble are obtained in terms of the generalized K-distribution. The resulting distributions exhibit asymptotic behavior akin to the stretched exponential family. Extensive simulations of the underlying SDE with random σ² are carried out. The results are found to be in excellent agreement with the analytical results. The analysis has been extended to cover the case corresponding to independent random fluctuations in drift in addition to random squared-noise intensity. The novelty of the proposed analytical investigation for the ensemble of IF neurons is that it yields closed form expressions of probability distributions in terms of the generalized K-distribution. Based on a record of spiking activity of thousands of neurons, the findings of the proposed model are validated. The squared-noise intensity σ² of identified neurons from the data is found to follow a gamma distribution. The proposed generalized K-distribution is found to be in excellent agreement with the empirically obtained ISI distribution of the neuronal ensemble. Copyright © 2018 Elsevier B.V. All rights reserved.
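The superstatistical recipe, drawing σ² from a gamma law and then drawing the interval from the conditional density, can be sketched as follows; an exponential conditional stands in for the IF neuron's first-passage density here, so this is a simplification of the paper's model, with arbitrary shape and scale values:

```python
import random

random.seed(0)

def isi_sample(shape=2.0, scale=0.5):
    # superstatistics: draw the squared-noise intensity sigma^2 from a
    # gamma law, then draw the interval from the conditional density
    # (an exponential here, standing in for the IF first-passage law)
    sigma2 = random.gammavariate(shape, scale)
    return random.expovariate(1.0 / sigma2)   # conditional mean = sigma2

samples = [isi_sample() for _ in range(50_000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # mixture mean = shape * scale = 1.0, heavier tail
```

Mixing over the gamma-distributed σ² is what stretches the tail of the marginal relative to any single-neuron exponential.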
Stability of the Markov operator and synchronization of Markovian random products
NASA Astrophysics Data System (ADS)
Díaz, Lorenzo J.; Matias, Edgar
2018-05-01
We study Markovian random products on a large class of ‘m-dimensional’ connected compact metric spaces (including products of closed intervals and trees). We introduce a splitting condition, generalizing the classical one by Dubins and Freedman, and prove that this condition implies the asymptotic stability of the corresponding Markov operator and (exponentially fast) synchronization.
NASA Astrophysics Data System (ADS)
Dalkilic, Turkan Erbay; Apaydin, Aysen
2009-11-01
In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions derived from different clusters. When faced with estimating a regression model for fuzzy inputs that have been derived from different distributions, this model has been termed the 'switching regression model' and is expressed with a separate functional relationship for each class. Here l_i indicates the class number of each independent variable and p indicates the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by gathering the obtained models. There are methods that suggest the class numbers of independent variables heuristically. Alternatively, in defining the optimal class number of independent variables, the use of a suggested validity criterion for fuzzy clustering has been aimed at. In the case that the independent variables have an exponential distribution, an algorithm has been suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values after obtaining an optimal membership function suitable for the exponential distribution.
Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.
2016-01-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373
Human population and atmospheric carbon dioxide growth dynamics: Diagnostics for the future
NASA Astrophysics Data System (ADS)
Hüsler, A. D.; Sornette, D.
2014-10-01
We analyze the growth rates of human population and of atmospheric carbon dioxide by comparing the relative merits of two benchmark models, the exponential law and the finite-time-singular (FTS) power law. The latter results from positive feedbacks, either direct or mediated by other dynamical variables, as shown in our presentation of a simple endogenous macroeconomic dynamical growth model describing the growth dynamics of coupled processes involving human population (labor in economic terms), capital and technology (proxied by CO2 emissions). Human population in the context of our energy-intensive economies constitutes arguably the most important underlying driving variable of the content of carbon dioxide in the atmosphere. Using some of the best databases available, we perform empirical analyses confirming that the human population on Earth grew super-exponentially until the mid-1960s, followed by a decelerated sub-exponential growth, with a tendency to plateau at just exponential growth in the last decade with an average growth rate of 1.0% per year. In contrast, we find that the content of carbon dioxide in the atmosphere continued to accelerate super-exponentially until 1990, with a transition to a progressive deceleration since then, with an average growth rate of approximately 2% per year in the last decade. To go back to CO2 atmosphere contents equal to or smaller than the level of 1990, as has been the broadly advertised goal of international treaties since 1990, requires herculean changes: from a dynamical point of view, the approximately exponential growth must turn not only to negative acceleration but also to negative velocity to reverse the trend.
The computational core and fixed point organization in Boolean networks
NASA Astrophysics Data System (ADS)
Correale, L.; Leone, M.; Pagnani, A.; Weigt, M.; Zecchina, R.
2006-03-01
In this paper, we analyse large random Boolean networks in terms of a constraint satisfaction problem. We first develop an algorithmic scheme which allows us to prune simple logical cascades and underdetermined variables, thereby returning the computational core of the network. Second, we apply the cavity method to analyse the number and organization of fixed points. We find in particular a phase transition between an easy and a complex regulatory phase, the latter being characterized by the existence of an exponential number of macroscopically separated fixed-point clusters. The different techniques developed are reinterpreted as algorithms for the analysis of single Boolean networks, and they are applied in the analysis of, and in silico experiments on, the gene regulatory networks of baker's yeast (Saccharomyces cerevisiae) and the segment-polarity genes of the fruitfly Drosophila melanogaster.
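For intuition, the fixed-point structure described above can be brute-forced on a toy network. This is only an illustrative sketch (the node count, connectivity and random-truth-table construction below are invented, not the paper's ensemble); the cavity-method analysis is what scales to large networks, where exhaustive enumeration over 2**n states is infeasible.

```python
import itertools
import random

def random_boolean_network(n, k, seed=0):
    """Build a toy random Boolean network: each node reads k randomly
    chosen inputs through a random truth table. Illustrative ensemble only."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def fixed_points(n, inputs, tables):
    """Enumerate all states s with F(s) = s by exhaustive search
    (2**n states, so only feasible for small n)."""
    fps = []
    for state in itertools.product((0, 1), repeat=n):
        image = tuple(
            tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(n)
        )
        if image == state:
            fps.append(state)
    return fps

inputs, tables = random_boolean_network(10, 2, seed=3)
fps = fixed_points(10, inputs, tables)
```

Each returned state satisfies the fixed-point condition by construction; counting and clustering such states is what the cavity method does analytically in the complex phase.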
Solar F10.7 radiation - A short term model for Space Station applications
NASA Technical Reports Server (NTRS)
Vedder, John D.; Tabor, Jill L.
1991-01-01
A new method is described for statistically modeling the F10.7 component of solar radiation for 91-day intervals. The resulting model represents this component of the solar flux as a quasi-exponentially correlated, Weibull distributed random variable, and thereby demonstrates excellent agreement with observed F10.7 data. Values of the F10.7 flux are widely used in models of the earth's upper atmosphere because of its high correlation with density fluctuations due to solar heating effects. Because of the direct relation between atmospheric density and drag, a realistic model of the short term fluctuation of the F10.7 flux is important for the design and operation of Space Station Freedom. The method of modeling this flux described in this report should therefore be useful for a variety of Space Station applications.
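A correlated-marginal construction of this general kind can be sketched by filtering an AR(1) Gaussian series through the probability integral transform: the autocorrelation of the driving process decays exponentially with lag while the marginals come out Weibull. This is a generic sketch, not the authors' algorithm, and the shape, scale and correlation values are invented for illustration.

```python
import math
import numpy as np

def correlated_weibull_series(n, shape, scale, rho, seed=0):
    """Sketch of a quasi-exponentially correlated, Weibull-distributed
    series: an AR(1) Gaussian process (autocorrelation rho**lag) is mapped
    through the Gaussian CDF to uniforms, then through the inverse Weibull
    CDF. All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    z = np.empty(n)
    z[0] = rng.standard_normal()
    for t in range(1, n):
        z[t] = rho * z[t - 1] + math.sqrt(1.0 - rho ** 2) * rng.standard_normal()
    # Gaussian CDF -> uniform (clipped away from 0 and 1 for numerical safety)
    u = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    # inverse Weibull CDF gives Weibull marginals
    return scale * (-np.log(1.0 - u)) ** (1.0 / shape)

# a 91-day window, matching the modelling interval mentioned in the abstract
flux = correlated_weibull_series(91, shape=2.0, scale=150.0, rho=0.9)
```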
NASA Astrophysics Data System (ADS)
Gu, Zhou; Fei, Shumin; Yue, Dong; Tian, Engang
2014-07-01
This paper deals with the problem of H∞ filtering for discrete-time systems with stochastic missing measurements. A new missing measurement model is developed by decomposing the interval of the missing rate into several segments. The probability of the missing rate in each subsegment is governed by its corresponding random variable. We aim to design a linear full-order filter such that the estimation error converges to zero exponentially in the mean square with less conservatism, while the disturbance rejection attenuation is constrained to a given level by means of an H∞ performance index. Based on Lyapunov theory, the reliable filter parameters are characterised in terms of the feasibility of a set of linear matrix inequalities. Finally, a numerical example is provided to demonstrate the effectiveness and applicability of the proposed design approach.
Punzo, Antonio; Ingrassia, Salvatore; Maruotti, Antonello
2018-04-22
A time-varying latent variable model is proposed to jointly analyze multivariate mixed-support longitudinal data. The proposal can be viewed as an extension of hidden Markov regression models with fixed covariates (HMRMFCs), which are the state of the art for modelling longitudinal data, with a special focus on the underlying clustering structure. HMRMFCs are inadequate for applications in which a clustering structure can be identified in the distribution of the covariates, as the clustering is independent of the covariates distribution. Here, hidden Markov regression models with random covariates are introduced by explicitly specifying state-specific distributions for the covariates, with the aim of improving the recovery of the clusters in the data with respect to a fixed-covariates paradigm. The class of hidden Markov regression models with random covariates is defined focusing on the exponential family, in a generalized linear model framework. Model identifiability conditions are sketched, an expectation-maximization algorithm is outlined for parameter estimation, and various implementation and operational issues are discussed. Properties of the estimators of the regression coefficients, as well as of the hidden path parameters, are evaluated through simulation experiments and compared with those of HMRMFCs. The method is applied to physical activity data. Copyright © 2018 John Wiley & Sons, Ltd.
In vivo growth of 60 non-screening detected lung cancers: a computed tomography study.
Mets, Onno M; Chung, Kaman; Zanen, Pieter; Scholten, Ernst T; Veldhuis, Wouter B; van Ginneken, Bram; Prokop, Mathias; Schaefer-Prokop, Cornelia M; de Jong, Pim A
2018-04-01
Current pulmonary nodule management guidelines are based on nodule volume doubling time, which assumes exponential growth behaviour. However, this is a theory that has never been validated in vivo in the routine-care target population. This study evaluates growth patterns of untreated solid and subsolid lung cancers of various histologies in a non-screening setting. Growth behaviour of pathology-proven lung cancers from two academic centres that were imaged at least three times before diagnosis (n=60) was analysed using dedicated software. Random-intercept random-slope mixed-models analysis was applied to test which growth pattern most accurately described lung cancer growth. Individual growth curves were plotted per pathology subgroup and nodule type. We confirmed that growth in both subsolid and solid lung cancers is best explained by an exponential model. However, subsolid lesions generally progress slower than solid ones. Baseline lesion volume was not related to growth, indicating that smaller lesions do not grow slower compared to larger ones. By showing that lung cancer conforms to exponential growth we provide the first experimental basis in the routine-care setting for the assumption made in volume doubling time analysis. Copyright ©ERS 2018.
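Under the exponential assumption the study validates, the volume doubling time used by nodule guidelines follows directly from two volume measurements; a minimal sketch (function and variable names are ours, not from the paper):

```python
from math import log

def volume_doubling_time(v1, v2, dt_days):
    """Volume doubling time under exponential growth V(t) = V0 * 2**(t/VDT):
    VDT = dt * ln(2) / ln(v2 / v1) for two volumes measured dt days apart."""
    return dt_days * log(2) / log(v2 / v1)

# a nodule that exactly doubles in 90 days has VDT = 90 days
vdt = volume_doubling_time(100.0, 200.0, 90.0)
```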
Exploration properties of biased evanescent random walkers on a one-dimensional lattice
NASA Astrophysics Data System (ADS)
Esguerra, Jose Perico; Reyes, Jelian
2017-08-01
We investigate the combined effects of bias and evanescence on the characteristics of random walks on a one-dimensional lattice. We calculate the time-dependent return probability, eventual return probability, conditional mean return time, and the time-dependent mean number of visited sites of biased immortal and evanescent discrete-time random walkers on a one-dimensional lattice. We then extend the calculations to the case of a continuous-time step-coupled biased evanescent random walk on a one-dimensional lattice with an exponential waiting time distribution.
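A Monte Carlo sketch of an evanescent walker can estimate the eventual return probability studied above. This is not the authors' analytic calculation; the bias value, death probability and sample sizes below are illustrative. For the unbiased case there is a known closed form via the first-return generating function of the simple walk, F(z) = 1 - sqrt(1 - z^2), evaluated at the per-step survival probability q = 1 - p_die.

```python
import numpy as np

def eventual_return_probability(p_right, p_die, n_walkers=20000,
                                max_steps=500, seed=0):
    """Monte Carlo estimate: a discrete-time walker on the integers steps
    right with probability p_right, left otherwise, and evanesces (dies)
    with probability p_die before each step. Returns the estimated
    probability of ever revisiting the origin."""
    rng = np.random.default_rng(seed)
    returned = 0
    for _ in range(n_walkers):
        pos = 0
        for _ in range(max_steps):
            if rng.random() < p_die:
                break  # walker evanesces before stepping again
            pos += 1 if rng.random() < p_right else -1
            if pos == 0:
                returned += 1
                break
    return returned / n_walkers

# unbiased case; closed form gives 1 - sqrt(1 - q**2) with q = 0.9
est = eventual_return_probability(0.5, 0.1)
```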
Robust reliable sampled-data control for switched systems with application to flight control
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Joby, Maya; Shi, P.; Mathiyalagan, K.
2016-11-01
This paper addresses the robust reliable stabilisation problem for a class of uncertain switched systems with random delays and norm-bounded uncertainties. The main aim of this paper is to obtain a reliable robust sampled-data control design which involves random time delay with an appropriate gain control matrix for achieving robust exponential stabilisation of the uncertain switched system against actuator failures. In particular, the involved delays are assumed to be randomly time-varying and to obey certain mutually uncorrelated Bernoulli-distributed white noise sequences. By constructing an appropriate Lyapunov-Krasovskii functional (LKF) and employing an average dwell-time approach, a new set of criteria is derived for ensuring the robust exponential stability of the closed-loop switched system. More precisely, the Schur complement and Jensen's integral inequality are used in the derivation of the stabilisation criteria. By considering the relationship among the random time-varying delay and its lower and upper bounds, a new set of sufficient conditions is established for the existence of reliable robust sampled-data control in terms of the solution to linear matrix inequalities (LMIs). Finally, an illustrative example based on the F-18 aircraft model is provided to show the effectiveness of the proposed design procedures.
Jiao, Can; Wang, Ting; Liu, Jianxin; Wu, Huanjie; Cui, Fang; Peng, Xiaozhe
2017-01-01
The influences of peer relationships on adolescent subjective well-being were investigated within the framework of social network analysis, using exponential random graph models as a methodological tool. The participants in the study were 1,279 students (678 boys and 601 girls) from nine junior middle schools in Shenzhen, China. The initial stage of the research used a peer nomination questionnaire and a subjective well-being scale (used in previous studies) to collect data on the peer relationship networks and the subjective well-being of the students. Exponential random graph models were then used to explore the relationships between students with the aim of clarifying the character of the peer relationship networks and the influence of peer relationships on subjective well-being. The results showed that all the adolescent peer relationship networks in our investigation had positive reciprocal effects, positive transitivity effects and negative expansiveness effects. However, none of the relationship networks had obvious receiver effects or leaders. The adolescents in partial peer relationship networks presented similar levels of subjective well-being on three dimensions (satisfaction with life, positive affects and negative affects), though not all network friends presented these similarities. The study shows that peer networks can affect an individual's subjective well-being. However, whether similarities among adolescents are the result of social influences or social choices needs further exploration, including longitudinal studies that investigate the potential processes of subjective well-being similarities among adolescents. PMID:28450845
NASA Astrophysics Data System (ADS)
Gireesha, B. J.; Kumar, P. B. Sampath; Mahanthesh, B.; Shehzad, S. A.; Abbasi, F. M.
2018-05-01
The nonlinear convective flow of kerosene-Alumina nanoliquid subjected to an exponential space-dependent heat source and temperature-dependent viscosity is investigated here. This study focuses on the augmentation of the heat transport rate in a liquid propellant rocket engine. The kerosene-Alumina nanoliquid is considered as the regenerative coolant. Aspects of radiation and viscous dissipation are also covered. The relevant nonlinear system is solved numerically via an RK-based shooting scheme. Diverse flow fields are computed and examined for distinct governing variables. We found that the nanoliquid's temperature increases due to the space-dependent heat source and radiation aspects. The heat transfer rate is higher for variable viscosity than for constant viscosity.
ERIC Educational Resources Information Center
Vaurio, Rebecca G.; Simmonds, Daniel J.; Mostofsky, Stewart H.
2009-01-01
One of the most consistent findings in children with ADHD is increased moment-to-moment variability in reaction time (RT). The source of increased RT variability can be examined using ex-Gaussian analyses that divide variability into normal and exponential components and Fast Fourier transform (FFT) that allow for detailed examination of the…
Local dependence in random graph models: characterization, properties and statistical inference
Schweinberger, Michael; Handcock, Mark S.
2015-01-01
Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with ‘ground truth’. PMID:26560142
The generalized truncated exponential distribution as a model for earthquake magnitudes
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-04-01
The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for the modelling of magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed and an example of empirical data is presented in the current contribution.
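Sampling magnitudes from a TED by inverting its CDF, F(m) = (1 - exp(-β(m - m_min))) / (1 - exp(-β(m_max - m_min))), is straightforward; a sketch with invented parameter values (β plays the role of the Gutenberg-Richter decay rate):

```python
import numpy as np

def sample_truncated_exponential(n, beta, m_min, m_max, seed=0):
    """Inverse-CDF sampling from an exponential distribution truncated to
    [m_min, m_max]: solve u = F(m) for m. Parameter values illustrative."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    c = 1.0 - np.exp(-beta * (m_max - m_min))  # normalizing constant
    return m_min - np.log(1.0 - u * c) / beta

mags = sample_truncated_exponential(10_000, beta=2.0, m_min=4.0, m_max=8.0)
```

By construction every draw lies between the lower and upper bound magnitudes, and the sample remains exponentially decaying below the truncation point.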
An analytic solution of the stochastic storage problem applicable to soil water
Milly, P.C.D.
1993-01-01
The accumulation of soil water during rainfall events and the subsequent depletion of soil water by evaporation between storms can be described, to first order, by simple accounting models. When the alternating supplies (precipitation) and demands (potential evaporation) are viewed as random variables, it follows that soil-water storage, evaporation, and runoff are also random variables. If the forcing (supply and demand) processes are stationary for a sufficiently long period of time, an asymptotic regime should eventually be reached where the probability distribution functions of storage, evaporation, and runoff are stationary and uniquely determined by the distribution functions of the forcing. Under the assumptions that the potential evaporation rate is constant, storm arrivals are Poisson-distributed, rainfall is instantaneous, and storm depth follows an exponential distribution, it is possible to derive the asymptotic distributions of storage, evaporation, and runoff analytically for a simple balance model. A particular result is that the fraction of rainfall converted to runoff is given by (1 - R^{-1})/(e^{α(1 - R^{-1})} - R^{-1}), in which R is the ratio of mean potential evaporation to mean rainfall and α is the ratio of soil water-holding capacity to mean storm depth. The problem considered here is analogous to the well-known problem of storage in a reservoir behind a dam, for which the present work offers a new solution for reservoirs of finite capacity. A simple application of the results of this analysis suggests that random, intraseasonal fluctuations of precipitation cannot by themselves explain the observed dependence of the annual water balance on annual totals of precipitation and potential evaporation.
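The analytic runoff fraction quoted above is easy to evaluate numerically; a direct transcription (function and variable names are ours):

```python
from math import exp

def runoff_fraction(R, alpha):
    """Fraction of rainfall converted to runoff, per the abstract's result:
    (1 - 1/R) / (exp(alpha * (1 - 1/R)) - 1/R), where R is mean potential
    evaporation over mean rainfall and alpha is water-holding capacity
    over mean storm depth. Assumes R != 1."""
    x = 1.0 - 1.0 / R
    return x / (exp(alpha * x) - 1.0 / R)
```

Sanity checks match intuition: with zero storage capacity (alpha = 0) every excess of rainfall runs off (fraction 1), and the fraction decreases as capacity grows.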
Investigation of advanced UQ for CRUD prediction with VIPRE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, Michael Scott
2011-09-01
This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely differentiable) in L² (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher-dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10⁰)-O(10¹) random variables to O(10²) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)).
Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
NASA Astrophysics Data System (ADS)
Nerantzaki, Sofia; Papalexiou, Simon Michael
2017-04-01
Identifying precisely the distribution tail of a geophysical variable is difficult, or even impossible. First, the tail is the part of the distribution for which the least empirical information is available; second, a universally accepted definition of the tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution, as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can be generally categorized into two major families, i.e., sub-exponential (heavy-tailed) and hyper-exponential (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool to assess which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and results in a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. Validation of the method using Monte Carlo techniques on four theoretical distributions covering the major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records from all over the world, with sample sizes over 100 years, were examined, revealing that heavy-tailed distributions describe rainfall extremes more accurately.
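An empirical MEF can be sketched in a few lines (the sample and threshold choices below are illustrative, not the study's procedure): for each threshold u it takes the mean of the excesses x - u over observations exceeding u. For exponential data the curve is flat at the scale parameter, which is exactly what the zero-slope test above exploits; heavy tails give a rising MEF, light tails a falling one.

```python
import numpy as np

def mean_excess(data, thresholds):
    """Empirical mean excess function: mean of (x - u) over x > u,
    evaluated at each threshold u."""
    data = np.asarray(data)
    return np.array([data[data > u].mean() - u for u in thresholds])

rng = np.random.default_rng(1)
x = rng.exponential(scale=3.0, size=100_000)
u = np.quantile(x, [0.5, 0.7, 0.9])
mef = mean_excess(x, u)  # roughly flat near the scale parameter 3.0
```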
Improved result on stability analysis of discrete stochastic neural networks with time delay
NASA Astrophysics Data System (ADS)
Wu, Zhengguang; Su, Hongye; Chu, Jian; Zhou, Wuneng
2009-04-01
This Letter investigates the problem of exponential stability for discrete stochastic time-delay neural networks. By defining a novel Lyapunov functional, an improved delay-dependent exponential stability criterion is established in terms of a linear matrix inequality (LMI) approach. Meanwhile, the computational complexity of the newly established stability condition is reduced because fewer variables are involved. A numerical example is given to illustrate the effectiveness and the benefits of the proposed method.
Dynamic stability of passive dynamic walking on an irregular surface.
Su, Jimmy Li-Shin; Dingwell, Jonathan B
2007-12-01
Falls that occur during walking are a significant health problem. One of the greatest impediments to solving this problem is that there is no single obviously "correct" way to quantify walking stability. While many people use variability as a proxy for stability, measures of variability do not quantify how the locomotor system responds to perturbations. The purpose of this study was to determine how changes in walking surface variability affect changes in both locomotor variability and stability. We modified an irreducibly simple model of walking to apply random perturbations that simulated walking over an irregular surface. Because the model's global basin of attraction remained fixed, increasing the amplitude of the applied perturbations directly increased the risk of falling in the model. We generated ten simulations of 300 consecutive strides of walking at each of six perturbation amplitudes ranging from zero (i.e., a smooth continuous surface) up to the maximum level the model could tolerate without falling over. Orbital stability defines how a system responds to small (i.e., "local") perturbations from one cycle to the next and was quantified by calculating the maximum Floquet multipliers for the model. Local stability defines how a system responds to similar perturbations in real time and was quantified by calculating short-term and long-term local exponential rates of divergence for the model. As perturbation amplitudes increased, no changes were seen in orbital stability (r² = 2.43%; p = 0.280) or long-term local instability (r² = 1.0%; p = 0.441). These measures essentially reflected the fact that the model never actually "fell" during any of our simulations. Conversely, the variability of the walker's kinematics increased exponentially (r² ≥ 99.6%; p < 0.001) and short-term local instability increased linearly (r² = 88.1%; p < 0.001). These measures thus predicted the increased risk of falling exhibited by the model.
For all simulated conditions, the walker remained orbitally stable, while exhibiting substantial local instability. This was because very small initial perturbations diverged away from the limit cycle, while larger initial perturbations converged toward the limit cycle. These results provide insight into how these different proposed measures of walking stability are related to each other and to risk of falling.
Henríquez-Henríquez, Marcela Patricia; Billeke, Pablo; Henríquez, Hugo; Zamorano, Francisco Javier; Rothhammer, Francisco; Aboitiz, Francisco
2014-01-01
Intra-individual variability of response times (RTisv) is considered a potential endophenotype for attention deficit/hyperactivity disorder (ADHD). Traditional methods for estimating RTisv lose information regarding the distribution of response times (RTs) along the task, with eventual effects on statistical power. Ex-Gaussian analysis captures the dynamic nature of RTisv, estimating normal and exponential components of the RT distribution, with specific phenomenological correlates. Here, we applied ex-Gaussian analysis to explore whether intra-individual variability of RTs agrees with the criteria proposed by Gottesman and Gould for endophenotypes. Specifically, we evaluated whether the normal and/or exponential components of RTs may (a) present the stair-like distribution expected for endophenotypes (ADHD > siblings > typically developing children (TD) without a family history of ADHD) and (b) represent a phenotypic correlate for previously described genetic risk variants. This is a pilot study including 55 subjects (20 ADHD-discordant sibling pairs and 15 TD children), all aged between 8 and 13 years. Participants performed a visual Go/Nogo task with 10% Nogo probability. Ex-Gaussian distributions were fitted to individual RT data and compared among the three samples. In order to test whether intra-individual variability may represent a correlate for previously described genetic risk variants, VNTRs at DRD4 and SLC6A3 were identified in all sibling pairs following standard protocols. Groups were compared by adjusting independent general linear models for the exponential and normal components from the ex-Gaussian analysis. Identified trends were confirmed by the non-parametric Jonckheere-Terpstra test. Stair-like distributions were observed for μ (p = 0.036) and σ (p = 0.009). An additional "DRD4 genotype" × "clinical status" interaction was present for τ (p = 0.014), reflecting a possible severity factor. Thus, normal and exponential RTisv components are suitable as ADHD endophenotypes.
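The ex-Gaussian decomposition used here writes an RT as a Normal(μ, σ) part plus an independent Exponential(τ) tail. Below is a sketch that simulates such RTs and recovers the three components by a rough method of moments (real analyses, including this study, fit μ, σ, τ by maximum likelihood; all parameter values are invented for illustration). The moment relations used are mean = μ + τ, variance = σ² + τ², and skewness = 2τ³/(σ² + τ²)^{3/2}.

```python
import numpy as np

def simulate_ex_gaussian(n, mu, sigma, tau, seed=0):
    """Ex-Gaussian reaction times: Normal(mu, sigma) plus an independent
    Exponential(tau) component (the slow tail often linked to lapses)."""
    rng = np.random.default_rng(seed)
    return rng.normal(mu, sigma, n) + rng.exponential(tau, n)

def moment_estimates(rt):
    """Rough method-of-moments recovery of (mu, sigma, tau) from the
    sample mean, standard deviation and skewness."""
    m = rt.mean()
    s = rt.std(ddof=1)
    skew = ((rt - m) ** 3).mean() / s ** 3
    tau = s * (max(skew, 1e-9) / 2.0) ** (1.0 / 3.0)
    sigma = np.sqrt(max(s ** 2 - tau ** 2, 1e-9))
    return m - tau, sigma, tau

rt = simulate_ex_gaussian(200_000, mu=450.0, sigma=50.0, tau=150.0, seed=1)
mu_h, sigma_h, tau_h = moment_estimates(rt)
```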
Penna Bit-String Model with Constant Population
NASA Astrophysics Data System (ADS)
de Oliveira, P. M. C.; de Oliveira, S. Moss; Sá Martins, J. S.
We removed from the Penna model for biological aging any random-killing Verhulst factor. Deaths are due only to genetic diseases, and the population size is fixed rather than fluctuating around some constant value. We show that these modifications give qualitatively the same results obtained in an earlier paper, where the random killings (used to avoid an exponential increase of the population) were applied only to newborns.
Time Course of Visual Extrapolation Accuracy
1995-09-01
The pond and duckweed problem: Three experiments on the misperception of exponential growth. Acta Psychologica 43, 239-251. Wiener, E.L., 1962... The models assumed no systematic velocity error in tracking, only random variation in tracker velocity. Both models predicted changes in hit and false alarm rates well, except in a condition where response asymmetries...
NASA Astrophysics Data System (ADS)
Rana, B. M. Jewel; Ahmed, Rubel; Ahmmed, S. F.
2017-06-01
An analysis is carried out to investigate the effects of variable viscosity, thermal radiation, radiation absorption and cross diffusion on flow past an inclined exponentially accelerated plate under the influence of variable heat and mass transfer. A set of suitable transformations has been used to obtain the non-dimensional coupled governing equations. An explicit finite difference technique has been used to obtain numerical solutions of the problem. Stability and convergence of the finite difference scheme have been established for this problem. Compaq Visual Fortran 6.6a has been used to calculate the numerical results. The effects of various physical parameters on the fluid velocity, temperature, concentration, skin friction coefficient, heat transfer rate, mass transfer rate, streamlines and isotherms have been presented graphically and discussed in detail.
1994-01-01
Limulus ventral photoreceptors generate highly variable responses to the absorption of single photons. We have obtained data on the size distribution of these responses, derived the distribution predicted from simple transduction cascade models and compared the theory and data. In the simplest of models, the active state of the visual pigment (defined by its ability to activate G protein) is turned off in a single reaction. The output of such a cascade is predicted to be highly variable, largely because of stochastic variation in the number of G proteins activated. The exact distribution predicted is exponential, but we find that an exponential does not adequately account for the data. The data agree much better with the predictions of a cascade model in which the active state of the visual pigment is turned off by a multi-step process. PMID:8057085
Galland, Paul
2002-09-01
The quantitative relation between gravitropism and phototropism was analyzed for light-grown coleoptiles of Avena sativa (L.). With respect to gravitropism the coleoptiles obeyed the sine law. To study the interaction between light and gravity, coleoptiles were inclined at variable angles and irradiated for 7 h with unilateral blue light (466 nm) impinging at right angles relative to the axis of the coleoptile. The phototropic stimulus was applied from the side opposite to the direction of gravitropic bending. The fluence rate that was required to counteract the negative gravitropism increased exponentially with the sine of the inclination angle. To achieve balance, a linear increase in the gravitropic stimulus required compensation by an exponential increase in the counteracting phototropic stimulus. The establishment of photogravitropic equilibrium during continuous unilateral irradiation is thus determined by two different laws: the well-known sine law for gravitropism and a novel exponential law for phototropism described in this work.
Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea
2014-01-01
In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to that achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partners in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code. PMID:24663061
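As an illustrative sketch of the core idea (not the authors' released code): with continuous Gaussian variables, direct couplings can be read off the inverse covariance (precision) matrix, so inference reduces to a matrix inversion. The coupling pattern below is invented for the example:

```python
# Toy Gaussian "direct-coupling" sketch: pairs with nonzero off-diagonal
# precision entries are the directly coupled ones; sample data, estimate the
# covariance, invert it, and score pairs by |precision entry|.
import numpy as np

rng = np.random.default_rng(1)
n = 5
# Hypothetical precision matrix: direct couplings only on pairs (0,1), (2,3).
P = np.eye(n)
P[0, 1] = P[1, 0] = 0.4
P[2, 3] = P[3, 2] = -0.4
cov = np.linalg.inv(P)

X = rng.multivariate_normal(np.zeros(n), cov, size=20000)
P_hat = np.linalg.inv(np.cov(X, rowvar=False))

scores = {(i, j): abs(P_hat[i, j]) for i in range(n) for j in range(i + 1, n)}
top = sorted(scores, key=scores.get, reverse=True)[:2]
print(top)
```

In the protein setting the variables are (encoded) amino acids at alignment columns and the top-scoring pairs are predicted residue-residue contacts.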
NASA Astrophysics Data System (ADS)
Laforest, Martin
Quantum information processing has been the subject of countless discoveries since the early 1990's. It is believed to be the way of the future for computation: using quantum systems permits one to perform computation exponentially faster than on a regular classical computer. Unfortunately, quantum systems that are not isolated do not behave well. They tend to lose their quantum nature due to the presence of the environment. If key information is known about the noise present in the system, methods such as quantum error correction have been developed in order to reduce the errors introduced by the environment during a given quantum computation. In order to harness the quantum world and implement the theoretical ideas of quantum information processing and quantum error correction, it is imperative to understand and quantify the noise present in the quantum processor and benchmark the quality of the control over the qubits. The usual techniques to estimate the noise or the control are based on quantum process tomography (QPT), which, unfortunately, demands an exponential amount of resources. This thesis presents work towards the characterization of noisy processes in an efficient manner. The protocols are developed from a purely abstract setting with no system-dependent variables. To circumvent the exponential nature of quantum process tomography, three different efficient protocols are proposed and experimentally verified. The first protocol uses the idea of quantum error correction to extract relevant parameters about a given noise model, namely the correlation between the dephasing of two qubits. Following that is a protocol using randomization and symmetrization to extract the probability that a given number of qubits are simultaneously corrupted in a quantum memory, regardless of the specifics of the error and which qubits are affected.
Finally, a last protocol, still using randomization ideas, is developed to estimate the average fidelity per computational gates for single and multi qubit systems. Even though liquid state NMR is argued to be unsuitable for scalable quantum information processing, it remains the best test-bed system to experimentally implement, verify and develop protocols aimed at increasing the control over general quantum information processors. For this reason, all the protocols described in this thesis have been implemented in liquid state NMR, which then led to further development of control and analysis techniques.
The decline and fall of Type II error rates
Steve Verrill; Mark Durst
2005-01-01
For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
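The claimed decline can be checked numerically. Assuming for illustration a one-sided one-sample z-test with effect size d = 0.5 and alpha = 0.05 (a special case, not the full general-linear-model setting of the note), the Type II error rate is beta(n) = Phi(z_crit - d*sqrt(n)) and falls off at an exponential-like rate in n:

```python
# Numerical illustration: Type II error rate of a one-sided one-sample z-test
# shrinking rapidly with sample size n (effect size and alpha are assumptions).
from math import sqrt
from statistics import NormalDist

nd = NormalDist()
alpha, d = 0.05, 0.5                       # illustrative choices
z_crit = nd.inv_cdf(1 - alpha)

betas = {n: nd.cdf(z_crit - d * sqrt(n)) for n in (10, 20, 40, 80)}
for n, b in betas.items():
    print(n, f"{b:.2e}")                   # beta drops by orders of magnitude
```

Doubling n repeatedly drives beta down by successive orders of magnitude, which is the practical point behind performing power calculations.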
NASA Astrophysics Data System (ADS)
Bera, Debajyoti
2015-06-01
One of the early achievements of quantum computing was demonstrated by Deutsch and Jozsa (Proc R Soc Lond A Math Phys Sci 439(1907):553, 1992) regarding classification of a particular type of Boolean functions. Their solution demonstrated an exponential speedup compared to classical approaches to the same problem; however, their solution was the only known quantum algorithm for that specific problem so far. This paper demonstrates another quantum algorithm for the same problem, with the same exponential advantage compared to classical algorithms. The novelty of this algorithm is the use of quantum amplitude amplification, a technique that is the key component of another celebrated quantum algorithm developed by Grover (Proceedings of the twenty-eighth annual ACM symposium on theory of computing, ACM Press, New York, 1996). A lower bound for randomized (classical) algorithms is also presented which establishes a sound gap between the effectiveness of our quantum algorithm and that of any randomized algorithm with similar efficiency.
Auxiliary Parameter MCMC for Exponential Random Graph Models
NASA Astrophysics Data System (ADS)
Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro
2016-11-01
Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
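For intuition about the kind of MCMC step involved (this is the textbook toggle-a-dyad Metropolis sampler, not the auxiliary parameter algorithm proposed in the paper), consider the simplest ERGM with a single edge-count statistic, P(G) proportional to exp(theta * edges(G)):

```python
# Minimal Metropolis sampler for the edge-only ERGM: propose toggling a random
# dyad and accept with probability min(1, exp(theta * delta_edges)).
import math
import random

def sample_ergm_edges(n, theta, steps, seed=0):
    """Run the chain on an n-node graph; return the final edge count."""
    rng = random.Random(seed)
    adj = [[0] * n for _ in range(n)]
    edges = 0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        delta = -1 if adj[i][j] else 1            # change in the edge statistic
        if rng.random() < min(1.0, math.exp(theta * delta)):
            adj[i][j] = adj[j][i] = 1 - adj[i][j]
            edges += delta
    return edges

print(sample_ergm_edges(30, -2.0, 200000))
```

For this edge-only model the stationary edge probability is logistic, 1/(1 + e^(-theta)), which gives a sanity check on the sampler; real ERGMs add dependence terms (stars, triangles), and it is there that efficient samplers such as the one in the paper matter.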
Dynamic design of ecological monitoring networks for non-Gaussian spatio-temporal data
Wikle, C.K.; Royle, J. Andrew
2005-01-01
Many ecological processes exhibit spatial structure that changes over time in a coherent, dynamical fashion. This dynamical component is often ignored in the design of spatial monitoring networks. Furthermore, ecological variables related to processes such as habitat are often non-Gaussian (e.g. Poisson or log-normal). We demonstrate that a simulation-based design approach can be used in settings where the data distribution is from a spatio-temporal exponential family. The key random component in the conditional mean function from this distribution is then a spatio-temporal dynamic process. Given the computational burden of estimating the expected utility of various designs in this setting, we utilize an extended Kalman filter approximation to facilitate implementation. The approach is motivated by, and demonstrated on, the problem of selecting sampling locations to estimate July brood counts in the prairie pothole region of the U.S.
Crowding Effects in Vehicular Traffic
Combinido, Jay Samuel L.; Lim, May T.
2012-01-01
While the impact of crowding on the diffusive transport of molecules within a cell is widely studied in biology, it has thus far been neglected in traffic systems where bulk behavior is the main concern. Here, we study the effects of crowding due to car density and driving fluctuations on the transport of vehicles. Using a microscopic model for traffic, we found that crowding can push car movement from a superballistic down to a subdiffusive state. The transition is also associated with a change in the shape of the probability distribution of positions from a negatively-skewed normal to an exponential distribution. Moreover, crowding broadens the distribution of cars’ trap times and cluster sizes. At steady state, the subdiffusive state persists only when there is a large variability in car speeds. We further relate our work to prior findings from random walk models of transport in cellular systems. PMID:23139762
A guidance and navigation system for continuous low thrust vehicles. M.S. Thesis
NASA Technical Reports Server (NTRS)
Tse, C. J. C.
1973-01-01
A midcourse guidance and navigation system for continuous low thrust vehicles is described. A set of orbit elements, known as the equinoctial elements, is selected as the state variables. The uncertainties are modelled statistically by random vectors and stochastic processes. The motion of the vehicle and the measurements are described by nonlinear stochastic differential and difference equations respectively. A minimum time nominal trajectory is defined and the equation of motion and the measurement equation are linearized about this nominal trajectory. An exponential cost criterion is constructed and a linear feedback guidance law is derived to control the thrusting direction of the engine. Using this guidance law, the vehicle will fly in a trajectory neighboring the nominal trajectory. The extended Kalman filter is used for state estimation. Finally a short mission using this system is simulated. The results indicate that this system is very efficient for short missions.
Network Ecology and Adolescent Social Structure
McFarland, Daniel A.; Moody, James; Diehl, David; Smith, Jeffrey A.; Thomas, Reuben J.
2014-01-01
Adolescent societies—whether arising from weak, short-term classroom friendships or from close, long-term friendships—exhibit various levels of network clustering, segregation, and hierarchy. Some are rank-ordered caste systems and others are flat, cliquish worlds. Explaining the source of such structural variation remains a challenge, however, because global network features are generally treated as the agglomeration of micro-level tie-formation mechanisms, namely balance, homophily, and dominance. How do the same micro-mechanisms generate significant variation in global network structures? To answer this question we propose and test a network ecological theory that specifies the ways features of organizational environments moderate the expression of tie-formation processes, thereby generating variability in global network structures across settings. We develop this argument using longitudinal friendship data on schools (Add Health study) and classrooms (Classroom Engagement study), and by extending exponential random graph models to the study of multiple societies over time. PMID:25535409
Repressing the effects of variable speed harmonic orders in operational modal analysis
NASA Astrophysics Data System (ADS)
Randall, R. B.; Coats, M. D.; Smith, W. A.
2016-10-01
Discrete frequency components such as machine shaft orders can disrupt the operation of normal Operational Modal Analysis (OMA) algorithms. With constant speed machines, they have been removed using time synchronous averaging (TSA). This paper compares two approaches for varying speed machines. In one method, signals are transformed into the order domain, and after the removal of shaft speed related components by a cepstral notching method, are transformed back to the time domain to allow normal OMA. In the other, simpler approach an exponential shortpass lifter is applied directly in the time domain cepstrum to enhance the modal information at the expense of other disturbances. For simulated gear signals with speed variations of both ±5% and ±15%, the simpler approach was found to give better results. The TSA method is shown not to work in either case. The paper compares the results with those obtained using a stationary random excitation.
Dynamical Localization for Unitary Anderson Models
NASA Astrophysics Data System (ADS)
Hamza, Eman; Joye, Alain; Stolz, Günter
2009-11-01
This paper establishes dynamical localization properties of certain families of unitary random operators on the d-dimensional lattice in various regimes. These operators are generalizations of one-dimensional physical models of quantum transport and draw their name from the analogy with the discrete Anderson model of solid state physics. They consist of a product of a deterministic unitary operator and a random unitary operator. The deterministic operator has a band structure, is absolutely continuous and plays the role of the discrete Laplacian. The random operator is diagonal with elements given by i.i.d. random phases distributed according to some absolutely continuous measure and plays the role of the random potential. In dimension one, these operators belong to the family of CMV-matrices in the theory of orthogonal polynomials on the unit circle. We implement the method of Aizenman-Molchanov to prove exponential decay of the fractional moments of the Green function for the unitary Anderson model in the following three regimes: in any dimension, throughout the spectrum at large disorder and near the band edges at arbitrary disorder, and, in dimension one, throughout the spectrum at arbitrary disorder. We also prove that exponential decay of fractional moments of the Green function implies dynamical localization, which in turn implies spectral localization. These results complete the analogy with the self-adjoint case where dynamical localization is known to be true in the same three regimes.
Guo, Zhenyuan; Yang, Shaofu; Wang, Jun
2016-12-01
This paper presents theoretical results on global exponential synchronization of multiple memristive neural networks in the presence of external noise by means of two types of distributed pinning control. The multiple memristive neural networks are coupled in a general structure via a nonlinear function, which consists of a linear diffusive term and a discontinuous sign term. A pinning impulsive control law is introduced in the coupled system to synchronize all neural networks. Sufficient conditions are derived for ascertaining global exponential synchronization in mean square. In addition, a pinning adaptive control law is developed to achieve global exponential synchronization in mean square. Both pinning control laws utilize only partial state information received from the neighborhood of the controlled neural network. Simulation results are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-05-01
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise.
Patel, Mainak; Rangan, Aaditya
2017-08-07
Infant rats randomly cycle between the sleeping and waking states, which are tightly correlated with the activity of mutually inhibitory brainstem sleep and wake populations. Bouts of sleep and wakefulness are random; from P2-P10, sleep and wake bout lengths are exponentially distributed with increasing means, while during P10-P21, the sleep bout distribution remains exponential while the distribution of wake bouts gradually transforms to power law. The locus coeruleus (LC), via an undeciphered interaction with sleep and wake populations, has been shown experimentally to be responsible for the exponential to power law transition. Concurrently during P10-P21, the LC undergoes striking physiological changes - the LC exhibits strong global 0.3 Hz oscillations up to P10, but the oscillation frequency gradually rises and synchrony diminishes from P10-P21, with oscillations and synchrony vanishing at P21 and beyond. In this work, we construct a biologically plausible Wilson Cowan-style model consisting of the LC along with sleep and wake populations. We show that external noise and strong reciprocal inhibition can lead to switching between sleep and wake populations and exponentially distributed sleep and wake bout durations as during P2-P10, with the parameters of inhibition between the sleep and wake populations controlling mean bout lengths. Furthermore, we show that the changing physiology of the LC from P10-P21, coupled with reciprocal excitation between the LC and wake population, can explain the shift from exponential to power law of the wake bout distribution. To our knowledge, this is the first study that proposes a plausible biological mechanism, which incorporates the known changing physiology of the LC, for tying the developing sleep-wake circuit and its interaction with the LC to the transformation of sleep and wake bout dynamics from P2-P21. Copyright © 2017 Elsevier Ltd. All rights reserved.
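A toy illustration of the first point (a deliberately minimal caricature, not the paper's Wilson-Cowan model): any memoryless switching process, here a constant per-step escape probability standing in for noise-driven transitions between the mutually inhibitory populations, produces geometrically (in continuous time, exponentially) distributed bout lengths, with the switching probability setting the mean bout length:

```python
# Memoryless two-state switching: leave the current state with a fixed
# probability each time step; the resulting bout (run) lengths are geometric,
# the discrete analogue of the exponential bouts seen during P2-P10.
import random

def bout_lengths(p_switch, n_steps, seed=0):
    """Return completed bout lengths from n_steps of memoryless switching."""
    rng = random.Random(seed)
    bouts, length = [], 0
    for _ in range(n_steps):
        length += 1
        if rng.random() < p_switch:        # noise triggers a state flip
            bouts.append(length)
            length = 0
    return bouts

bouts = bout_lengths(p_switch=0.1, n_steps=100000)
print(sum(bouts) / len(bouts))             # mean bout length ~ 1/p_switch
```

A power-law bout distribution, by contrast, requires the escape hazard to change over time, which is the role the developing LC plays in the model.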
Infinite-disorder critical points of models with stretched exponential interactions
NASA Astrophysics Data System (ADS)
Juhász, Róbert
2014-09-01
We show that an interaction decaying as a stretched exponential function of distance, J(l) ~ exp(-c l^a), is able to alter the universality class of short-range systems having an infinite-disorder critical point. To do so, we study the low-energy properties of the random transverse-field Ising chain with the above form of interaction by a strong-disorder renormalization group (SDRG) approach. We find that the critical behavior of the model is controlled by infinite-disorder fixed points different from those of the short-range model if 0 < a < 1/2. In this range, the critical exponents calculated analytically by a simplified SDRG scheme are found to vary with a, while, for a > 1/2, the model belongs to the same universality class as its short-range variant. The entanglement entropy of a block of size L increases logarithmically with L at the critical point but, unlike the short-range model, the prefactor is dependent on disorder in the range 0 < a < 1/2. Numerical results obtained by an improved SDRG scheme are found to be in agreement with the analytical predictions. The same fixed points are expected to describe the critical behavior of, among others, the random contact process with stretched exponentially decaying activation rates.
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2018-02-01
We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
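Of the benchmarked methods, simple exponential smoothing is compact enough to sketch in a few lines: the level is a recursively updated, exponentially weighted mean of past observations, and the multi-step-ahead forecast is flat at the last level. The smoothing parameter alpha is fixed here for illustration; in practice it is optimized from the data:

```python
# Simple exponential smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1};
# the h-step-ahead forecast is the final level for every horizon h.
def ses_forecast(series, alpha=0.3):
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level                       # flat multi-step-ahead forecast

print(ses_forecast([10, 12, 11, 13, 12, 14]))
```

Seasonal series such as monthly temperature are typically deseasonalized first (the "classical seasonal decomposition" step above), smoothed, and then reseasonalized.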
Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro; Abgrall, Remi
2014-11-01
Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate structure between PDD and Analysis-of-Variance, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. In order to address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse-PDD with PDD coefficients computed by regression. During this adaptive procedure, the model representation by PDD only contains a few terms, so that the cost to resolve repeatedly the linear system of the least-square regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M
2011-03-01
The study evaluated the power of the randomized placebo-phase design (RPPD)-a new design of randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, at some point, all subjects receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power were different under different response time to treatment distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution. The RPPD requires more subjects to achieve a similar power to the parallel groups design. Copyright © 2011 Elsevier Inc. All rights reserved.
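A hedged Monte Carlo sketch of the kind of power computation described (the 90-day responder window and the two-proportion z-test are illustrative choices, not the study's actual design): exponential response times with medians 355 days (placebo) and 42 days (drug) in a parallel two-arm trial:

```python
# Monte Carlo power for a parallel two-arm trial with exponential response
# times (medians 355 d placebo, 42 d drug, as in the abstract); "response"
# means responding within an assumed 90-day follow-up window.
import math
import random

def trial_power(n_per_arm, follow_up=90.0, reps=2000, seed=0):
    rng = random.Random(seed)
    lam_p = math.log(2) / 355.0            # exponential rate from the median
    lam_d = math.log(2) / 42.0
    hits = 0
    for _ in range(reps):
        rp = sum(rng.expovariate(lam_p) <= follow_up for _ in range(n_per_arm))
        rd = sum(rng.expovariate(lam_d) <= follow_up for _ in range(n_per_arm))
        p_pool = (rp + rd) / (2 * n_per_arm)
        se = math.sqrt(max(2 * p_pool * (1 - p_pool) / n_per_arm, 1e-12))
        z = (rd - rp) / n_per_arm / se     # pooled two-proportion z statistic
        hits += z > 1.645                  # one-sided, alpha = 0.05
    return hits / reps

print(trial_power(n_per_arm=15))
```

Repeating this under Weibull or lognormal response times (and under an RPPD-style allocation) is how the sample-size sensitivity to the hazard distribution can be mapped out.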
NASA Astrophysics Data System (ADS)
Ma, Xiao; Zheng, Wei-Fan; Jiang, Bao-Shan; Zhang, Ji-Ye
2016-10-01
With the development of traffic systems, issues such as traffic jams become more and more serious. An efficient traffic flow theory is needed to guide the overall control, organization and management of traffic systems. On the basis of the cellular automata model and the traffic flow model with look-ahead potential, a new cellular automata traffic flow model with negative exponential weighted look-ahead potential is presented in this paper. By introducing a negative exponential weighting coefficient into the look-ahead potential, so that vehicles closer to the driver carry a greater weight, the model better matches the driver's random decision-making process, which is based on the traffic environment the driver is facing. The fundamental diagrams for different weighting parameters are obtained by numerical simulation, which shows that the negative exponential weighting coefficient has an obvious effect on the high-density traffic flux. Complex high-density non-linear traffic behavior is also reproduced by the simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11572264, 11172247, 11402214, and 61373009).
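A minimal caricature of such a rule (the update scheme and all parameters below are hypothetical, not the paper's model): cars on a ring advance with a probability set by a look-ahead potential in which the occupancy at distance d is weighted by exp(-gamma*d), so nearer vehicles dominate the decision:

```python
# Toy ring-road cellular automaton with a negative-exponentially weighted
# look-ahead potential: crowding ahead lowers the hop probability, with
# nearby cars weighted more heavily than distant ones.
import math
import random

def step(road, beta=1.0, gamma=0.5, horizon=5, rng=random):
    n = len(road)
    new = road[:]
    for i in range(n):
        if road[i] and not road[(i + 1) % n]:
            # occupancies ahead, weighted by exp(-gamma * distance)
            pot = sum(road[(i + d) % n] * math.exp(-gamma * d)
                      for d in range(1, horizon + 1))
            if rng.random() < math.exp(-beta * pot):
                new[i], new[(i + 1) % n] = 0, 1    # hop forward one cell
    return new

road = [1, 0, 0, 0] * 5          # 20-cell ring at 25% density
for _ in range(100):
    road = step(road)
print(sum(road))                 # car number is conserved: 5
```

Sweeping the density and the weighting parameter gamma and recording the flux would yield fundamental diagrams of the kind discussed in the abstract.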
A study of physician collaborations through social network and exponential random graph
2013-01-01
Background Physician collaboration, which evolves among physicians during the course of providing healthcare services to hospitalised patients, has been seen as crucial to effective patient outcomes in healthcare organisations and hospitals. This study aims to explore physician collaborations using measures of social network analysis (SNA) and the exponential random graph (ERG) model. Methods Based on the underlying assumption that collaborations evolve among physicians when they visit a common hospitalised patient, this study first proposes an approach to map the collaboration network among physicians from the details of their visits to patients. This paper terms this network the physician collaboration network (PCN). Second, SNA measures of degree centralisation, betweenness centralisation and density are used to examine the impact of SNA measures on hospitalisation cost and readmission rate. As a control variable, the impact of patient age on the relation between network measures (i.e. degree centralisation, betweenness centralisation and density) and hospital outcome variables (i.e. hospitalisation cost and readmission rate) is also explored. Finally, ERG models are developed to identify micro-level structural properties of (i) high-cost versus low-cost PCNs; and (ii) high-readmission rate versus low-readmission rate PCNs. An electronic health insurance claim dataset of a very large Australian health insurance organisation is utilised to construct and explore PCNs in this study. Results It is revealed that the density of a PCN is positively correlated with hospitalisation cost and readmission rate. In contrast, betweenness centralisation is found negatively correlated with hospitalisation cost and readmission rate. Degree centralisation shows a negative correlation with readmission rate, but does not show any correlation with hospitalisation cost. Patient age does not have any impact on the relation of SNA measures with hospitalisation cost and hospital readmission rate.
The 2-star parameter of the ERG model has a significant impact on hospitalisation cost. Furthermore, the alternating-k-star and alternating-k-two-path parameters of the ERG model are found to have an impact on readmission rate. Conclusions Collaboration structures among physicians affect hospitalisation cost and hospital readmission rate. The implications of these findings for developing guidelines to improve the performance of collaborative environments among healthcare professionals within healthcare organisations are discussed. PMID:23803165
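Of the SNA measures named above, density is the simplest to compute: the number of observed ties divided by the number of possible ties. A minimal sketch (the physician identifiers and edge list are illustrative, not from the study's dataset):

```python
def network_density(n_nodes, edges):
    """Density of an undirected network: observed ties divided by the
    number of possible ties, n * (n - 1) / 2."""
    return 2 * len(edges) / (n_nodes * (n_nodes - 1))

# Toy physician collaboration network: an edge means two physicians
# visited a common hospitalised patient (identifiers are illustrative).
pcn_edges = {("dr_a", "dr_b"), ("dr_b", "dr_c"), ("dr_a", "dr_c"), ("dr_c", "dr_d")}
print(network_density(4, pcn_edges))  # 4 observed ties out of 6 possible
```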
Matrix exponential-based closures for the turbulent subgrid-scale stress tensor.
Li, Yi; Chevillard, Laurent; Eyink, Gregory; Meneveau, Charles
2009-01-01
Two approaches for closing the turbulence subgrid-scale stress tensor in terms of matrix exponentials are introduced and compared. The first approach is based on a formal solution of the stress transport equation in which the production terms can be integrated exactly in terms of matrix exponentials. This formal solution of the subgrid-scale stress transport equation is shown to be useful to explore special cases, such as the response to constant velocity gradient, but neglecting pressure-strain correlations and diffusion effects. The second approach is based on an Eulerian-Lagrangian change of variables, combined with the assumption of isotropy for the conditionally averaged Lagrangian velocity gradient tensor and with the recent fluid deformation approximation. It is shown that both approaches lead to the same basic closure in which the stress tensor is expressed as the matrix exponential of the resolved velocity gradient tensor multiplied by its transpose. Short-time expansions of the matrix exponentials are shown to provide an eddy-viscosity term and particular quadratic terms, and thus allow a reinterpretation of traditional eddy-viscosity and nonlinear stress closures. The basic feasibility of the matrix-exponential closure is illustrated by implementing it successfully in large eddy simulation of forced isotropic turbulence. The matrix-exponential closure employs the drastic approximation of entirely omitting the pressure-strain correlation and other nonlinear scrambling terms. But unlike eddy-viscosity closures, the matrix exponential approach provides a simple and local closure that can be derived directly from the stress transport equation with the production term, and using physically motivated assumptions about Lagrangian decorrelation and upstream isotropy.
Wang, Bo; Anthony, Stephen M; Bae, Sung Chul; Granick, Steve
2009-09-08
We describe experiments using single-particle tracking in which mean-square displacement is simply proportional to time (Fickian), yet the distribution of displacement probability is not Gaussian as should be expected of a classical random walk but, instead, is decidedly exponential for large displacements, the decay length of the exponential being proportional to the square root of time. The first example is when colloidal beads diffuse along linear phospholipid bilayer tubes whose radius is the same as that of the beads. The second is when beads diffuse through entangled F-actin networks, bead radius being less than one-fifth of the actin network mesh size. We explore the relevance to dynamic heterogeneity in trajectory space, which has been extensively discussed regarding glassy systems. Data for the second system might suggest activated diffusion between pores in the entangled F-actin networks, in the same spirit as activated diffusion and exponential tails observed in glassy systems. But the first system shows exceptionally rapid diffusion, nearly as rapid as for identical colloids in free suspension, yet still displaying an exponential probability distribution as in the second system. Thus, although the exponential tail is reminiscent of glassy systems, in fact, these dynamics are exceptionally rapid. We also compare with particle trajectories that are at first subdiffusive but Fickian at the longest measurement times, finding that displacement probability distributions fall onto the same master curve in both regimes. The need is emphasized for experiments, theory, and computer simulation to allow definitive interpretation of this simple and clean exponential probability distribution.
Steimer, Andreas; Schindler, Kaspar
2015-01-01
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, surprisingly few quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains, on a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and the input current. Approximation quality depends on the frequency spectrum of the current and improves as the voltage baseline is raised towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimates of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov chain Monte Carlo or message-passing methods.
Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation.
NASA Astrophysics Data System (ADS)
Merler, Stefano
2016-09-01
Characterizing the early growth profile of an epidemic outbreak is key for predicting the likely trajectory of the number of cases and for designing adequate control measures. Epidemic profiles characterized by exponential growth have been widely observed in the past, and a foundational theoretical framework for the analysis of infectious disease dynamics was provided by the pioneering work of Kermack and McKendrick [1]. In particular, exponential growth stems from the assumption that pathogens spread in homogeneously mixing populations; that is, individuals of the population mix uniformly and randomly with each other. However, this assumption was readily recognized as highly questionable [2], and sub-exponential profiles of epidemic growth have been observed in a number of epidemic outbreaks, including HIV/AIDS, foot-and-mouth disease, measles and, more recently, Ebola [3,4].
NASA Astrophysics Data System (ADS)
Van Mieghem, P.; van de Bovenkamp, R.
2013-03-01
Most studies on susceptible-infected-susceptible (SIS) epidemics in networks implicitly assume Markovian behavior: the time to infect a direct neighbor is exponentially distributed. Much effort so far has been devoted to characterizing and precisely computing the epidemic threshold in SIS Markovian epidemics on networks. Here, we report the rather dramatic effect of a nonexponential infection time (while still assuming an exponential curing time) on the epidemic threshold by considering Weibullean infection times with the same mean, but different power exponent α. For three basic classes of graphs, the Erdős-Rényi random graph, scale-free graphs, and lattices, the average steady-state fraction of infected nodes is simulated, from which the epidemic threshold is deduced. For all graph classes, the epidemic threshold increases significantly with the power exponent α. Hence, real epidemics that violate the exponential or Markovian assumption can behave very differently than anticipated based on Markov theory.
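The setup described, Weibull infection times with a fixed mean but varying power exponent α, can be generated by inverse-transform sampling. The scale adjustment below is the standard one (scale = mean / Γ(1 + 1/α)); the paper's actual simulation code is not shown here:

```python
import math
import random

def weibull_times(alpha, mean, n, rng):
    """Sample n Weibull infection times with shape alpha, with the scale
    chosen so that the sample mean matches `mean`:
    scale = mean / Gamma(1 + 1/alpha)."""
    scale = mean / math.gamma(1.0 + 1.0 / alpha)
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / alpha)
            for _ in range(n)]

rng = random.Random(42)
for alpha in (0.5, 1.0, 2.0):  # alpha = 1 recovers the exponential (Markovian) case
    times = weibull_times(alpha, 1.0, 200_000, rng)
    print(alpha, round(sum(times) / len(times), 2))  # means all close to 1.0
```

Only the shape of the distribution changes across α; the mean infection time is held fixed, which is exactly the comparison the abstract describes.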
Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian
2012-09-01
This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristics of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, so that the master systems synchronize with the slave systems. The desired sampled-data controller can be obtained by solving a set of linear matrix inequalities (LMIs), which depend upon the maximum sampling interval and the decay rate. The obtained conditions are not only less conservative but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.
A Random Walk Picture of Basketball
NASA Astrophysics Data System (ADS)
Gabel, Alan; Redner, Sidney
2012-02-01
We analyze NBA basketball play-by-play data and find that scoring is well described by a weakly biased, anti-persistent, continuous-time random walk. The time between successive scoring events follows an exponential distribution, with little memory between events. We account for a wide variety of statistical properties of scoring, such as the distribution of the score difference between opponents and the fraction of game time that one team is in the lead.
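As a rough illustration of the model described, the score difference can be simulated as a continuous-time random walk with exponential inter-event times and anti-persistent steps. All parameter values below (scoring rate, game length, repeat probability) are illustrative, not the paper's fitted values:

```python
import random

def simulate_game(rate=0.033, length=2880.0, p_repeat=0.35, rng=None):
    """Toy continuous-time random walk for a score difference:
    exponentially distributed waiting times between scoring events (per second)
    and anti-persistent steps (the team that just scored repeats with
    probability p_repeat < 0.5). Parameters are illustrative only."""
    rng = rng or random.Random()
    t, diff, last = 0.0, 0, rng.choice([-1, 1])
    while True:
        t += rng.expovariate(rate)
        if t > length:
            return diff
        step = last if rng.random() < p_repeat else -last
        diff += step
        last = step

rng = random.Random(1)
finals = [simulate_game(rng=rng) for _ in range(2000)]
print(round(sum(finals) / len(finals), 2))  # near zero for an unbiased walk
```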
Fiber-Type Random Laser Based on a Cylindrical Waveguide with a Disordered Cladding Layer.
Zhang, Wei Li; Zheng, Meng Ya; Ma, Rui; Gong, Chao Yang; Yang, Zhao Ji; Peng, Gang Ding; Rao, Yun Jiang
2016-05-25
This letter reports a fiber-type random laser (RL) made from a capillary coated with a disordered layer at its internal surface and filled with a gain (laser dye) solution in the core region. This fiber-type optical structure, with the disordered layer scattering light randomly into the gain region and the cylindrical waveguide providing confinement of light, assists the formation of random lasing modes and enables a flexible and efficient way of making random lasers. We found that the RL is sensitive to the laser dye concentration in the core region and that there is a close exponential relationship between the lasing intensity and the particle concentration in the gain solution. The proposed structure could be a versatile platform for realizing random lasing and random-lasing-based sensing.
Modarres, Reza; Ouarda, Taha B M J; Vanasse, Alain; Orzanco, Maria Gabriela; Gosselin, Pierre
2014-07-01
Changes in extreme meteorological variables and the demographic shift towards an older population have made it important to investigate the association of climate variables and hip fracture by advanced methods, in order to determine the climate variables that most affect hip fracture incidence. The nonlinear autoregressive moving average with exogenous variables-generalized autoregressive conditional heteroscedasticity (ARMAX-GARCH) and multivariate GARCH (MGARCH) time series approaches were applied to investigate the nonlinear association between the hip fracture rate in female and male patients aged 40-74 and 75+ years and climate variables in the period 1993-2004 in Montreal, Canada. The models describe 50-56% of the daily variation in hip fracture rate and identify snow depth, air temperature, day length and air pressure as the variables influencing the time-varying mean and variance of the hip fracture rate. The conditional covariance between climate variables and hip fracture rate increases exponentially, showing that the effect of climate variables on hip fracture rate is most acute when rates are high and climate conditions are at their worst. In Montreal, climate variables, particularly snow depth and air temperature, appear to be important predictors of hip fracture incidence. The association of climate variables and hip fracture does not seem to change linearly with time, but increases exponentially under harsh climate conditions. The results of this study can be used to provide an adaptive climate-related public health program and to guide the allocation of services for avoiding hip fracture risk.
Ludescher, Josef; Bunde, Armin
2014-12-01
We consider representative financial records (stocks and indices) on time scales between one minute and one day, as well as historical monthly data sets, and show that the distribution P_Q(r) of the interoccurrence times r between losses below a negative threshold -Q, for fixed mean interoccurrence times R_Q in multiples of the corresponding time resolutions, can be described on all time scales by the same q-exponentials, P_Q(r) ∝ 1/[1 + (q-1)βr]^(1/(q-1)). We propose that the asset- and time-scale-independent analytic form of P_Q(r) can be regarded as an additional stylized fact of the financial markets and represents a nontrivial test for market models. We analyze the distribution P_Q(r) as well as the autocorrelation C_Q(s) of the interoccurrence times for three market models: (i) multiplicative random cascades, (ii) multifractal random walks, and (iii) the generalized autoregressive conditional heteroskedasticity [GARCH(1,1)] model. We find that only one of the considered models, the multifractal random walk model, approximately reproduces the q-exponential form of P_Q(r) and the power-law decay of C_Q(s).
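The quoted form of P_Q(r) is easy to check numerically; the sketch below verifies that it reduces to the ordinary exponential in the q → 1 limit (normalization constants omitted):

```python
import math

def q_exponential(r, q, beta):
    """The abstract's interoccurrence-time form (up to normalization):
    P_Q(r) proportional to 1 / [1 + (q - 1) * beta * r] ** (1 / (q - 1))."""
    return (1.0 + (q - 1.0) * beta * r) ** (-1.0 / (q - 1.0))

# Sanity check: as q -> 1 the q-exponential reduces to exp(-beta * r).
for r in (0.5, 1.0, 3.0):
    assert abs(q_exponential(r, 1.0001, 0.8) - math.exp(-0.8 * r)) < 1e-3
print("q -> 1 limit matches exp(-beta * r)")
```

For q > 1 the tail decays as a power law rather than exponentially, which is why this family can interpolate between exponential and heavy-tailed interoccurrence statistics.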
Time-dependent breakdown of fiber networks: Uncertainty of lifetime
NASA Astrophysics Data System (ADS)
Mattsson, Amanda; Uesaka, Tetsu
2017-05-01
Materials often fail when subjected to stresses over a prolonged period. The time to failure, also called the lifetime, is known to exhibit large variability for many materials, particularly brittle and quasibrittle materials. For example, the coefficient of variation can reach 100% or even more. The lifetime distribution is highly skewed toward zero, implying a large number of premature failures. This behavior contrasts with that of normal strength, which shows a variation of only 4%-10% and a nearly bell-shaped distribution. The fundamental cause of this large and unique variability of lifetime is not well understood because of the complex interplay between stochastic processes taking place on the molecular level and the hierarchical and disordered structure of the material. We have constructed fiber network models, both regular and random, as a paradigm for general material structures. With such networks, we have performed Monte Carlo simulations of creep failure to establish explicit relationships among fiber characteristics, network structure, system size, and lifetime distribution. We found that fiber characteristics have large, sometimes dominating, influences on the lifetime variability of a network. Among the factors investigated, geometrical disorder of the network was found to be essential to explain the large variability and highly skewed shape of the lifetime distribution. With increasing network size, the distribution asymptotically approaches a double-exponential form. The implication of this result is that so-called "infant mortality," which is often predicted by the Weibull approximation of the lifetime distribution, may not exist for a large system.
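The contrast drawn between lifetime variability (coefficient of variation near 100%) and strength variability (4%-10%) can be made concrete with two toy distributions; the exponential lifetimes and 5%-spread normal strengths below are illustrative stand-ins, not the paper's simulated networks:

```python
import math
import random

def cv(samples):
    """Coefficient of variation: standard deviation divided by the mean."""
    n = len(samples)
    m = sum(samples) / n
    var = sum((x - m) ** 2 for x in samples) / n
    return math.sqrt(var) / m

rng = random.Random(7)
# Exponentially distributed lifetimes have CV = 100% by construction,
# matching the magnitude of lifetime variability quoted in the abstract.
lifetimes = [rng.expovariate(1.0) for _ in range(100_000)]
# Normally distributed strengths with a 5% spread, as a contrast.
strengths = [rng.gauss(100.0, 5.0) for _ in range(100_000)]
print(round(cv(lifetimes), 2), round(cv(strengths), 2))  # roughly 1.0 and 0.05
```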
Howard, Robert W
2014-09-01
The power law of practice holds that a power function best relates skill performance to amount of practice. However, the law's validity and generality are moot. Some researchers argue that it is an artifact of averaging individual exponential curves, while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality to the development, over many years, of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit grouped data, but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared: a power function best fit the group curve for the more talented players, while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit grouped data, but a quadratic function best fit most individual curves. Individual variability is great, and neither a power law nor an exponential law is the best description of individual chess skill development. Copyright © 2014 Elsevier B.V. All rights reserved.
Lee, Peter N; Fry, John S; Thornton, Alison J
2014-02-01
We attempted to quantify the decline in stroke risk following smoking cessation using the negative exponential model, with methodology previously employed for IHD. We identified 22 blocks of RRs (from 13 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls/at-risk formed the data for model-fitting. We tried to estimate the half-life (H, the time since quitting at which the excess risk falls to half that of a continuing smoker) for each block. The method failed to converge or produced very variable estimates of H in nine blocks with a current-smoker RR <1.40. Rejecting these, and combining blocks by amount smoked in one study where problems arose in model-fitting, the final analyses used 11 blocks. Goodness-of-fit was adequate for each block, the combined estimate of H being 4.78 (95%CI 2.17-10.50) years. However, considerable heterogeneity existed, unexplained by any factor studied, with the random-effects estimate 3.08 (1.32-7.16) years. Sensitivity analyses allowing for reverse causation or differing assumed times for the final quitting period gave similar results. The estimates of H are similar for stroke and IHD, and the individual estimates similarly heterogeneous. Fitting the model is harder for stroke, owing to its weaker association with smoking. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
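A minimal sketch of a negative exponential model with half-life H, under the common parametrization in which the excess relative risk decays as exp(-ln(2)·t/H); the paper's exact parametrization may differ:

```python
import math

def excess_rr(t, rr_current, half_life):
    """Relative risk t years after quitting when the *excess* risk declines
    negative-exponentially: RR(t) = 1 + (rr_current - 1) * exp(-ln(2) * t / H).
    Hypothetical form consistent with the half-life H described in the abstract."""
    return 1.0 + (rr_current - 1.0) * math.exp(-math.log(2.0) * t / half_life)

H = 4.78    # combined estimate of H from the abstract (years)
rr0 = 2.0   # illustrative current-smoker relative risk, not from the paper
print(round(excess_rr(H, rr0, H), 3))  # excess risk is halved at t = H
```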
NASA Astrophysics Data System (ADS)
Sargsyan, M. Z.; Poghosyan, H. M.
2018-04-01
A dynamical problem for a rectangular strip with variable coefficients of elasticity is solved by an asymptotic method. It is assumed that the strip is orthotropic, the elasticity coefficients are exponential functions of y, and mixed boundary conditions are posed. The solution of the inner problem is obtained using Bessel functions.
Sadhukhan, Debasis; Roy, Sudipto Singha; Rakshit, Debraj; Prabhu, R; Sen De, Aditi; Sen, Ujjwal
2016-01-01
Classical correlation functions of ground states typically decay exponentially and polynomially, respectively, for gapped and gapless short-range quantum spin systems. In such systems, entanglement decays exponentially even at the quantum critical points. However, quantum discord, an information-theoretic quantum correlation measure, survives long lattice distances. We investigate the effects of quenched disorder on quantum correlation lengths of quenched averaged entanglement and quantum discord, in the anisotropic XY and XYZ spin glass and random field chains. We find that there is virtually neither reduction nor enhancement in entanglement length while quantum discord length increases significantly with the introduction of the quenched disorder.
Le modèle stochastique SIS pour une épidémie dans un environnement aléatoire.
Bacaër, Nicolas
2016-10-01
The stochastic SIS epidemic model in a random environment. In a random environment that is a two-state continuous-time Markov chain, the mean time to extinction of the stochastic SIS epidemic model grows, in the supercritical case, exponentially with the population size if both states are favorable, and like a power law if one state is favorable while the other is unfavorable.
Applications of an exponential finite difference technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Handschuh, R.F.; Keith, T.G. Jr.
1988-07-01
An exponential finite difference scheme first presented by Bhattacharya for one dimensional unsteady heat conduction problems in Cartesian coordinates was extended. The finite difference algorithm developed was used to solve the unsteady diffusion equation in one dimensional cylindrical coordinates and was applied to two and three dimensional conduction problems in Cartesian coordinates. Heat conduction involving variable thermal conductivity was also investigated. The method was used to solve nonlinear partial differential equations in one and two dimensional Cartesian coordinates. Predicted results are compared to exact solutions where available or to results obtained by other numerical methods.
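A sketch of an exponential finite-difference update of the Bhattacharya type for 1-D unsteady conduction. The exact form of the scheme is an assumption here, but it illustrates the idea of replacing the usual additive explicit update with a multiplicative exponential one:

```python
import math

def exp_fd_step(u, alpha, dt, dx):
    """One time step of an exponential finite-difference update for the 1-D
    heat equation u_t = alpha * u_xx (a sketch of a Bhattacharya-type scheme;
    the exact form used in the report is an assumption here). Interior nodes
    are multiplied by exp of the scaled discrete Laplacian over u_i;
    boundary nodes are held fixed."""
    r = alpha * dt / dx ** 2
    new = u[:]
    for i in range(1, len(u) - 1):
        if u[i] != 0.0:  # the multiplicative form needs a nonzero node value
            new[i] = u[i] * math.exp(r * (u[i - 1] - 2.0 * u[i] + u[i + 1]) / u[i])
    return new

# Unsteady conduction in a rod with ends held at 0 and a sine initial profile.
n, alpha, dx, dt = 21, 1.0, 1.0 / 20, 0.0005
u = [math.sin(math.pi * i * dx) for i in range(n)]
for _ in range(200):                       # advance to t = 0.1
    u = exp_fd_step(u, alpha, dt, dx)
mid = u[n // 2]
exact = math.exp(-math.pi ** 2 * alpha * 0.1)  # analytic decay of the sine mode
print(round(mid, 3), round(exact, 3))
```

With a smooth positive profile the scheme tracks the analytic decay of the fundamental sine mode closely at this step size.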
NASA Astrophysics Data System (ADS)
Molina, Armando; Govers, Gerard; Poesen, Jean; Van Hemelryck, Hendrik; De Bièvre, Bert; Vanacker, Veerle
2008-06-01
A large spatial variability in sediment yield was observed from small streams in the Ecuadorian Andes. The objective of this study was to analyze the environmental factors controlling these variations in sediment yield in the Paute basin, Ecuador. Sediment yield data were calculated based on sediment volumes accumulated behind check dams for 37 small catchments. Mean annual specific sediment yield (SSY) shows a large spatial variability and ranges between 26 and 15,100 Mg km^-2 year^-1. Mean vegetation cover (C, fraction) in the catchment, i.e. the plant cover at or near the surface, exerts a first-order control on sediment yield. The fractional vegetation cover alone explains 57% of the observed variance in ln(SSY). The negative exponential relation (SSY = a × e^(-bC)) found between vegetation cover and sediment yield at the catchment scale (10^3-10^9 m^2) is very similar to the equations derived from splash, interrill and rill erosion experiments at the plot scale (1-10^3 m^2). This affirms the general character of an exponential decrease of sediment yield with increasing vegetation cover over a wide range of spatial scales, provided the distribution of cover can be considered essentially random. Lithology also significantly affects sediment yield, and explains an additional 23% of the observed variance in ln(SSY). Based on these two catchment parameters, a multiple regression model was built. This empirical regression model explains more than 75% of the total variance in mean annual sediment yield. These results highlight the large potential of revegetation programs for controlling sediment yield. They show that a slight increase in the overall fractional vegetation cover of degraded land is likely to have a large effect on sediment production and delivery. Moreover, they point to the importance of detailed surface vegetation data for predicting and modeling sediment production rates.
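A relation of the form SSY = a × e^(-bC) can be fitted by ordinary least squares on ln(SSY). The synthetic data below are illustrative only; the study's actual fitting procedure is not specified in the abstract:

```python
import math
import random

def fit_neg_exponential(cover, ssy):
    """Fit SSY = a * exp(-b * C) by simple linear regression of ln(SSY) on C
    (an illustrative log-linear fit, not necessarily the study's method)."""
    n = len(cover)
    x_mean = sum(cover) / n
    y = [math.log(v) for v in ssy]
    y_mean = sum(y) / n
    sxx = sum((x - x_mean) ** 2 for x in cover)
    sxy = sum((x - x_mean) * (v - y_mean) for x, v in zip(cover, y))
    slope = sxy / sxx
    return math.exp(y_mean - slope * x_mean), -slope  # (a, b)

rng = random.Random(3)
a_true, b_true = 15000.0, 5.0            # illustrative parameter values
C = [rng.random() for _ in range(200)]   # fractional vegetation cover in [0, 1]
SSY = [a_true * math.exp(-b_true * c) * math.exp(rng.gauss(0.0, 0.1)) for c in C]
a_hat, b_hat = fit_neg_exponential(C, SSY)
print(round(a_hat), round(b_hat, 2))     # estimates close to a_true, b_true
```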
Simulation and study of small numbers of random events
NASA Technical Reports Server (NTRS)
Shelton, R. D.
1986-01-01
Random events were simulated by computer and subjected to various statistical methods to extract important parameters. Various forms of curve fitting were explored, such as least squares, least distance from a line, and maximum likelihood. Problems considered were dead time, exponential decay, and spectrum extraction from cosmic ray data, using binned data and data from individual events. Computer programs, mostly of an iterative nature, were developed to perform these simulations and extractions and are partially listed as appendices. The mathematical basis for the computer programs is given.
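One of the simplest extractions of this kind: for exponentially distributed decay times, the maximum-likelihood estimate of the decay constant is the reciprocal of the sample mean. A sketch (not the report's original programs), showing how small samples give noisy estimates:

```python
import random

# For exponential decay with rate lam, the MLE from observed decay times
# t_1..t_n is lam_hat = n / sum(t_i), i.e. 1 / (sample mean).
rng = random.Random(11)
lam_true = 2.5
for n in (10, 100, 10_000):  # small numbers of events -> noisy estimates
    times = [rng.expovariate(lam_true) for _ in range(n)]
    lam_hat = n / sum(times)
    print(n, round(lam_hat, 2))
```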
A decades-long fast-rise-exponential-decay flare in low-luminosity AGN NGC 7213
NASA Astrophysics Data System (ADS)
Yan, Zhen; Xie, Fu-Guo
2018-03-01
We analysed the four-decades-long X-ray light curve of the low-luminosity active galactic nucleus (LLAGN) NGC 7213 and discovered a fast-rise-exponential-decay (FRED) pattern: the X-ray luminosity increased by a factor of ≈4 within 200 d and then decreased exponentially with an e-folding time of ≈8116 d (≈22.2 yr). For a theoretical understanding of the observations, we examined three variability models proposed in the literature: the thermal-viscous disc instability model, the radiation pressure instability model, and the TDE model. We find that a delayed tidal disruption of a main-sequence star is most favourable; both the thermal-viscous disc instability model and the radiation pressure instability model fail to explain some key properties observed, so we argue that they are unlikely.
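A FRED light-curve shape with the abstract's numbers (factor-of-≈4 rise over 200 d, e-folding time ≈8116 d) can be written down directly; the linear-rise parametrization below is a generic choice, not the authors' exact fit:

```python
import math

def fred(t, t_peak, rise, tau, l_quiet, amp):
    """Fast-rise-exponential-decay light curve (a generic parametrization,
    not the paper's fit): linear rise to the peak over `rise` days,
    then exponential decay with e-folding time tau."""
    if t < t_peak - rise:
        return l_quiet
    if t < t_peak:
        return l_quiet + amp * (t - (t_peak - rise)) / rise
    return l_quiet + amp * math.exp(-(t - t_peak) / tau)

l0, tau = 1.0, 8116.0                                # quiescent level, decay time
peak = fred(0.0, 0.0, 200.0, tau, l0, 3.0 * l0)      # factor-of-4 peak
after_tau = fred(tau, 0.0, 200.0, tau, l0, 3.0 * l0) # one e-folding time later
print(round(peak, 2), round(after_tau, 2))           # prints: 4.0 2.1
```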
Exponential Boundary Observers for Pressurized Water Pipe
NASA Astrophysics Data System (ADS)
Hermine Som, Idellette Judith; Cocquempot, Vincent; Aitouche, Abdel
2015-11-01
This paper deals with state estimation for a pressurized water pipe modeled by nonlinear coupled distributed hyperbolic equations for non-conservative laws, with three known boundary measurements. Our objective is to estimate the fourth boundary variable, which will be useful for leakage detection. Two approaches are studied. First, the distributed hyperbolic equations are discretized through a finite-difference scheme. By using the Lipschitz property of the nonlinear term and a Lyapunov function, the exponential stability of the estimation error is proven by solving linear matrix inequalities (LMIs). Second, the distributed hyperbolic system is preserved for state estimation. After state transformations, a Luenberger-like PDE boundary observer based on backstepping mathematical tools is proposed. An exponential Lyapunov function is used to prove the stability of the resulting estimation error. The performance of the two observers is shown on a simulated water pipe prototype example.
Statistically significant relational data mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann
This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.
NASA Astrophysics Data System (ADS)
Ganguly, S.; Lubetzky, E.; Martinelli, F.
2015-05-01
The East process is a 1D kinetically constrained interacting particle system, introduced in the physics literature in the early 1990s to model liquid-glass transitions. Spectral gap estimates of Aldous and Diaconis in 2002 imply that its mixing time on L sites has order L. We complement that result and show cutoff with an O(√L) window. The main ingredient is an analysis of the front of the process (its rightmost zero in the setup where zeros facilitate updates to their right). One expects the front to advance as a biased random walk, whose normal fluctuations would imply cutoff with an O(√L) window. The law of the process behind the front plays a crucial role: Blondel showed that it converges to an invariant measure ν, on which very little is known. Here we obtain quantitative bounds on the speed of convergence to ν, finding that it is exponentially fast. We then derive that the increments of the front behave as a stationary mixing sequence of random variables, and a Stein-method-based argument of Bolthausen (1982) implies a CLT for the location of the front, yielding the cutoff result. Finally, we supplement these results with a study of analogous kinetically constrained models on trees, again establishing cutoff, yet this time with an O(1) window.
Rigby, Robert A; Stasinopoulos, D Mikis
2004-10-15
The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y(nu) having a shifted and scaled (truncated) standard power exponential distribution with parameter tau. The distribution has four parameters and is denoted BCPE(mu, sigma, nu, tau). The parameters mu, sigma, nu and tau may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood, with respect to mu, sigma, nu and tau, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation provides a generalization of the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed significant inherent overdispersion (p-value < 0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We found no major differences between methods. However, flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
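The basic symptom of overdispersion, namely Pearson variability well in excess of the Poisson mean, can be illustrated with a small simulation. This is a simplified Pearson-dispersion check, not the regression-based score test used in the paper; the Poisson sampler is Knuth's classic algorithm, and the gamma-mixed counts are a negative binomial stand-in for an overdispersed rate.

```python
import math, random

def rpois(lam, rng):
    """Knuth's Poisson sampler (fine for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def pearson_dispersion(counts, mu):
    """Pearson chi-square / d.f. under a Poisson model with known mean mu;
    values well above 1 signal overdispersion."""
    return sum((y - mu) ** 2 / mu for y in counts) / (len(counts) - 1)

rng = random.Random(0)
n, mu, shape = 4000, 8.0, 2.0
equi = [rpois(mu, rng) for _ in range(n)]                       # Poisson
over = [rpois(rng.gammavariate(shape, mu / shape), rng)          # Poisson-
        for _ in range(n)]                                       # gamma mix
print(pearson_dispersion(equi, mu), pearson_dispersion(over, mu))
```

For the pure Poisson sample the dispersion statistic hovers near 1; for the gamma-mixed counts (variance mu + mu²/shape) it is several times larger, which is the situation the quasi-likelihood and robust-variance corrections address.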
Partially coherent surface plasmon modes
NASA Astrophysics Data System (ADS)
Niconoff, G. M.; Vara, P. M.; Munoz-Lopez, J.; Juárez-Morales, J. C.; Carbajal-Dominguez, A.
2011-04-01
Elementary long-range plasmon modes are described assuming an exponential dependence of the refractive index in the neighbourhood of the dielectric-metal thin-film interface. The study is performed using coupled-mode theory. The interference between two long-range plasmon modes generated in this way allows the synthesis of surface sinusoidal plasmon modes, which can be considered completely coherent generalized plasmon modes. These sinusoidal plasmon modes are used for the synthesis of new partially coherent surface plasmon modes, obtained by means of an incoherent superposition of sinusoidal plasmon modes in which the period of each one is treated as a random variable. The surface modes generated have an easily tuneable profile controlled by means of the probability density function associated with the period. We show that partially coherent plasmon modes have the remarkable property of allowing control of the propagation length, a notable feature with respect to the completely coherent surface plasmon mode. Numerical simulations for sinusoidal, Bessel, Gaussian and dark hollow plasmon modes are presented.
Response kinetics of tethered bacteria to stepwise changes in nutrient concentration.
Chernova, Anna A; Armitage, Judith P; Packer, Helen L; Maini, Philip K
2003-09-01
We examined the changes in swimming behaviour of the bacterium Rhodobacter sphaeroides in response to stepwise changes in a nutrient (propionate), following the pre-stimulus motion, the initial response and the adaptation to the sustained concentration of the chemical. This was carried out by tethering motile cells by their flagella to glass slides and following the rotational behaviour of their cell bodies in response to the nutrient change. Computerised motion analysis was used to analyse the behaviour. Distributions of run and stop times were obtained from rotation data for tethered cells. Exponential and Weibull fits for these distributions, and variability in individual responses are discussed. In terms of parameters derived from the run and stop time distributions, we compare the responses to stepwise changes in the nutrient concentration and the long-term behaviour of 84 cells under 12 propionate concentration levels from 1 nM to 25 mM. We discuss traditional assumptions for the random walk approximation to bacterial swimming and compare them with the observed R. sphaeroides motile behaviour.
Variability of reflectance measurements with sensor altitude and canopy type
NASA Technical Reports Server (NTRS)
Daughtry, C. S. T.; Vanderbilt, V. C.; Pollara, V. J.
1981-01-01
Data were acquired on canopies of mature corn planted in 76 cm rows, mature soybeans planted in 96 cm rows with 71 percent soil cover, and mature soybeans planted in 76 cm rows with 100 percent soil cover. A LANDSAT-band radiometer with a 15 degree field of view was used at ten altitudes ranging from 0.2 m to 10 m above the canopy. At each altitude, measurements were taken at 15 cm intervals along a 2.0 m transect perpendicular to the crop row direction. Reflectance data were plotted as a function of altitude and horizontal position to verify that the variance of measurements at low altitudes was attributable to row effects, which disappear at higher altitudes where the sensor integrates across several rows. The coefficient of variation of reflectance decreased exponentially as the sensor was elevated. Systematic sampling (at odd multiples of 0.5 times the row spacing) required fewer measurements than simple random sampling over row crop canopies.
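Why systematic sampling at odd multiples of half the row spacing works so well can be seen with a toy model in which the row effect is a single sinusoid (the field mean 0.25 and amplitude 0.08 are illustrative values, not the paper's data): two samples half a period apart see the row effect in antiphase, so it cancels exactly.

```python
import math

ROW = 0.76   # row spacing in metres, as for the corn canopy

def reflectance(x):
    """Toy canopy reflectance: field mean plus a sinusoidal row effect."""
    return 0.25 + 0.08 * math.sin(2.0 * math.pi * x / ROW)

# Two samples spaced an odd multiple of 0.5 row widths apart sample the
# sinusoidal row effect in antiphase, so it cancels exactly.
x0 = 0.10
pair_mean = (reflectance(x0) + reflectance(x0 + 1.5 * ROW)) / 2.0
print(pair_mean)   # equals the field mean 0.25
```

A simple random pair of positions, by contrast, leaves a residual row-effect variance, which is why more random samples are needed for the same precision.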
Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.
2013-01-01
Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
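The classical equivalence invoked above, Poisson regression with a log(time) offset versus exponential survival, can be checked numerically in a few lines. The follow-up times and event indicators below are illustrative toy data; the two log-likelihoods share the same maximiser and differ only by a constant that does not involve the rate.

```python
import math

# toy data: follow-up times t_i and event indicators d_i (assumed values)
times  = [2.0, 3.5, 1.2, 4.0, 0.7, 5.3]
events = [1,   0,   1,   1,   0,   1]
D, T = sum(events), sum(times)   # total events and total exposure

def ll_exp(lam):
    """Exponential survival log-likelihood with right censoring."""
    return sum(d * math.log(lam) - lam * t for d, t in zip(events, times))

def ll_pois(lam):
    """Poisson log-likelihood for d_i with mean lam * t_i (log-time offset)."""
    return sum(d * math.log(lam * t) - lam * t - math.lgamma(d + 1)
               for d, t in zip(events, times))

grid = [0.01 * k for k in range(1, 200)]
best_exp  = max(grid, key=ll_exp)
best_pois = max(grid, key=ll_pois)
# the two log-likelihoods differ by a constant independent of lam:
diffs = {round(ll_pois(l) - ll_exp(l), 9) for l in (0.1, 0.2, 0.3)}
print(best_exp, best_pois, D / T)
```

Both criteria peak at the grid point nearest the closed-form MLE D/T, which is the algebraic content of the re-derivation mentioned in the abstract.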
Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion
NASA Astrophysics Data System (ADS)
Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin
2018-02-01
Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.
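The core superstatistical mechanism, a Gaussian whose parameters are themselves random, can be demonstrated with a minimal sketch. Here the precision (inverse variance) is gamma distributed with unit mean, which marginally gives a Student-t law with 2*shape degrees of freedom; the shape value 5 and sample size are illustrative choices, not parameters from the paper.

```python
import random

def superstat_sample(n, shape=5.0, seed=42):
    """Gaussian with a gamma-distributed precision (inverse variance):
    marginally Student-t with 2*shape d.o.f., heavier-tailed than Gaussian."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        prec = rng.gammavariate(shape, 1.0 / shape)  # unit-mean precision
        out.append(rng.gauss(0.0, prec ** -0.5))
    return out

def kurtosis(xs):
    """Plain (non-excess) sample kurtosis; 3 for a Gaussian."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2

xs = superstat_sample(150_000)
print(kurtosis(xs))   # well above the Gaussian value of 3
```

Each individual draw is conditionally Gaussian, yet the marginal distribution is visibly non-Gaussian, mirroring the paper's random parametrisation of the stochastic force.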
Fourier Transforms of Pulses Containing Exponential Leading and Trailing Profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warshaw, S I
2001-07-15
In this monograph we discuss a class of pulse shapes that have exponential rise and fall profiles, and evaluate their Fourier transforms. Such pulses can be used as models for time-varying processes that produce an initial exponential rise and end with the exponential decay of a specified physical quantity. Unipolar examples of such processes include the voltage record of an increasingly rapid charge followed by a damped discharge of a capacitor bank, and the amplitude of an electromagnetic pulse produced by a nuclear explosion. Bipolar examples include acoustic N waves propagating for long distances in the atmosphere that have resulted from explosions in the air, and sonic booms generated by supersonic aircraft. These bipolar pulses have leading and trailing edges that appear to be exponential in character. To the author's knowledge the Fourier transforms of such pulses are not generally well-known or tabulated in Fourier transform compendia, and it is the purpose of this monograph to derive and present these transforms. These Fourier transforms are related to a definite integral of a ratio of exponential functions, whose evaluation we carry out in considerable detail. From this result we derive the Fourier transforms of other related functions. In all figures showing plots of calculated curves, the numbers used for the function parameter values and dependent variables are arbitrary and non-dimensional, and are not identified with any particular physical phenomenon or model.
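For the simplest member of this class, a unipolar pulse that rises as e^{at} for t < 0 and decays as e^{-bt} for t >= 0, the transform F(w) = 1/(a - iw) + 1/(b + iw) follows from two elementary integrals, and the closed form can be verified against numerical quadrature. The rates a and b below are arbitrary illustrative values (this sketch is not the monograph's more general derivation).

```python
import cmath, math

a, b = 2.0, 0.5   # illustrative rise and decay rates

def pulse(t):
    """Exponential rise for t < 0, exponential decay for t >= 0."""
    return math.exp(a * t) if t < 0.0 else math.exp(-b * t)

def ft_numeric(w, T=30.0, n=100_000):
    """Trapezoidal approximation of F(w) = integral f(t) exp(-i w t) dt."""
    dt = 2.0 * T / n
    s = 0.0 + 0.0j
    for k in range(n + 1):
        t = -T + k * dt
        wgt = 0.5 if k in (0, n) else 1.0
        s += wgt * pulse(t) * cmath.exp(-1j * w * t)
    return s * dt

def ft_analytic(w):
    """Closed form: 1/(a - i w) + 1/(b + i w)."""
    return 1.0 / (a - 1j * w) + 1.0 / (b + 1j * w)

w = 1.3
err = abs(ft_numeric(w) - ft_analytic(w))
print(err)   # small quadrature/truncation error
```

At w = 0 the analytic form reduces to the pulse area 1/a + 1/b, a quick sanity check on the sign conventions.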
Study on probability distributions for evolution in modified extremal optimization
NASA Astrophysics Data System (ADS)
Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian
2010-05-01
The power law is widely believed to be the proper probability distribution for evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to NP-hard problems such as graph partitioning, graph coloring and spin glasses. In this study, we find that the exponential and hybrid distributions (e.g., power laws with exponential cutoff) popular in network science can replace the original power law in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods, such as simulated annealing, τ-EO and SOA, in experiments on random Euclidean travelling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions are viable alternatives.
Properties of branching exponential flights in bounded domains
NASA Astrophysics Data System (ADS)
Zoia, A.; Dumonteil, E.; Mazzolo, A.
2012-11-01
In a series of recent works, important results have been reported concerning the statistical properties of exponential flights evolving in bounded domains, a widely adopted model for finite-speed transport phenomena (Blanco S. and Fournier R., Europhys. Lett., 61 (2003) 168; Mazzolo A., Europhys. Lett., 68 (2004) 350; Bénichou O. et al., Europhys. Lett., 70 (2005) 42). Motivated by physical and biological systems where random spatial displacements are coupled with Galton-Watson birth-death mechanisms, such as neutron multiplication, diffusion of reproducing bacteria or spread of epidemics, in this letter we extend those results in two directions, via a Feynman-Kac formalism. First, we characterize the occupation statistics of exponential flights in the presence of absorption and branching, and give explicit moment formulas for the total length travelled by the walker and the number of performed collisions in a given domain. Then, we show that the survival and escape probability can be derived as well by resorting to a similar approach.
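The universal mean-path result of Blanco and Fournier cited above states that the mean trajectory length of particles entering a domain equals c_d V/S regardless of the mean free path; in 1D (V = L, S = 2, c_1 = 2) this is simply L. A Monte Carlo sketch of non-branching, non-absorbing exponential flights in a rod (parameters illustrative) makes the invariance visible.

```python
import random

def mean_trajectory_length(L=1.0, mfp=0.3, walkers=40_000, seed=7):
    """Isotropic exponential flights entering a 1D rod [0, L]: by the
    Blanco-Fournier invariance property the mean in-rod path length is
    c_1 * V / S = 2 * L / 2 = L, whatever the mean free path."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walkers):
        x, d = (0.0, 1) if rng.random() < 0.5 else (L, -1)  # enter inward
        while True:
            step = rng.expovariate(1.0 / mfp)
            nx = x + d * step
            if nx < 0.0:            # escaped left: count only in-rod part
                total += x
                break
            if nx > L:              # escaped right
                total += L - x
                break
            total += step
            x, d = nx, (1 if rng.random() < 0.5 else -1)  # isotropic re-emission
    return total / walkers

mean_len = mean_trajectory_length()
print(mean_len)   # close to L = 1 regardless of mfp
```

The Feynman-Kac moment formulas in the letter generalise exactly this kind of travelled-length statistic to walkers that also branch and get absorbed.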
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed the harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise. Highlights: harmonic statistics are described and reviewed in detail; connections to various statistical laws are established; connections to perturbation, renormalization and dynamics are established.
1/f oscillations in a model of moth populations oriented by diffusive pheromones
NASA Astrophysics Data System (ADS)
Barbosa, L. A.; Martins, M. L.; Lima, E. R.
2005-01-01
An individual-based model for the population dynamics of Spodoptera frugiperda in a homogeneous environment is proposed. The model involves moths feeding on plants, mating through an anemotaxis search (i.e., oriented by odor dispersed in a current of air), and dying due to resource competition or at a maximum age. As observed in the laboratory, the females release pheromones at exponentially distributed time intervals, and it is assumed that the ranges of the male flights follow a power-law distribution. Computer simulations of the model reveal the central role of anemotaxis search in the persistence of the moth population. Such stationary populations are exponentially distributed in age, exhibit random temporal fluctuations with a 1/f spectrum, and self-organize in disordered spatial patterns with long-range correlations. In addition, the model results demonstrate that pest control through pheromone mass trapping is effective only if the amounts of pheromone released by the traps decay much more slowly than the exponential distribution for calling females.
What Randomized Benchmarking Actually Measures
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...
2017-09-28
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
Number Partitioning via Quantum Adiabatic Computation
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Toussaint, Udo
2002-01-01
We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
Accounting for inherent variability of growth in microbial risk assessment.
Marks, H M; Coleman, M E
2005-04-15
Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks that occur when there are small numbers of cells, stochastic models of growth are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial small number of cells assumed to be transforming from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where cells are assumed to transform through a lag phase before entering the exponential phase of growth; and parallel, where lag and exponential phases are assumed to develop in parallel. The model is based on first determining the distribution of the time when growth commences, and then modelling the conditional distribution of the number of cells. For the latter distribution, a Weibull distribution is found to provide a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.
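The inherent variability of growth from a few founder cells can be illustrated with a pure-birth (Yule) process, in which each cell divides after an exponential lifetime. This is a generic stand-in for the exponential phase, not the paper's serial/parallel lag-phase model; initial count, rate and horizon are illustrative.

```python
import math, random

def yule_population(n0, rate, t_end, rng):
    """Pure-birth (Yule) process via Gillespie simulation: with n cells,
    the next division occurs after an Exp(rate * n) waiting time."""
    n, t = n0, 0.0
    while True:
        t += rng.expovariate(rate * n)
        if t > t_end:
            return n
        n += 1

rng = random.Random(3)
runs = [yule_population(5, 1.0, 2.0, rng) for _ in range(15_000)]
avg = sum(runs) / len(runs)
print(avg)   # stochastic mean tracks n0 * exp(rate * t) = 5 * e^2
```

Although the mean follows deterministic exponential growth, replicate runs scatter widely, which is exactly the replicate-to-replicate distribution a stochastic risk-assessment model must capture.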
Environmental Noise Could Promote Stochastic Local Stability of Behavioral Diversity Evolution
NASA Astrophysics Data System (ADS)
Zheng, Xiu-Deng; Li, Cong; Lessard, Sabin; Tao, Yi
2018-05-01
In this Letter, we investigate stochastic stability in a two-phenotype evolutionary game model for an infinite, well-mixed population undergoing discrete, nonoverlapping generations. We assume that the fitness of a phenotype is an exponential function of its expected payoff following random pairwise interactions whose outcomes randomly fluctuate with time. We show that the stochastic local stability of a constant interior equilibrium can be promoted by the random environmental noise even if the system may display a complicated nonlinear dynamics. This result provides a new perspective for a better understanding of how environmental fluctuations may contribute to the evolution of behavioral diversity.
Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Yutaka, Ono; Furukawa, Toshiaki A.
2017-01-01
Background Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except for the lower end of the distribution. Furthermore, we confirmed that the exponential pattern is present for the individual item responses on the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of such findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study. Methods Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored using a 5-point scale: “none of the time,” “a little of the time,” “some of the time,” “most of the time,” and “all of the time.” The patterns of the total score distribution and item responses were analyzed using graphical analysis and an exponential regression model. Results The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from “a little of the time” to “all of the time” on log-normal scales, while the “none of the time” response was not related to this exponential pattern. Discussion The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales. PMID:28289560
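The kind of exponential regression described above amounts to fitting a straight line to log counts of each total score. A minimal sketch on synthetic scores (a discretized exponential with an assumed rate of 0.25; not MIDUS data) shows how the rate parameter is recovered, with the lower end of the distribution excluded as in the abstract.

```python
import math, random

rng = random.Random(0)
# synthetic "total scores": discretized exponential, rate 0.25, capped at 24
scores = [min(int(rng.expovariate(0.25)), 24) for _ in range(50_000)]

counts = {}
for s in scores:
    counts[s] = counts.get(s, 0) + 1

# least-squares line through (score, log count), skipping the lower end
pts = [(s, math.log(counts[s])) for s in range(1, 21)]
n = len(pts)
sx = sum(s for s, _ in pts); sy = sum(y for _, y in pts)
sxx = sum(s * s for s, _ in pts); sxy = sum(s * y for s, y in pts)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print(-slope)   # recovers the rate parameter, about 0.25
```

On a log scale the counts fall on a near-straight line, which is the graphical signature of the exponential pattern reported for the K6 subsamples.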
NASA Technical Reports Server (NTRS)
Balokovic, M.; Paneque, D.; Madejski, G.; Chiang, J.; Furniss, A.; Ajello, M.; Alexander, D. M.; Barret, D.; Blandford, R. D.; Boggs, S. E.;
2016-01-01
We present coordinated multiwavelength observations of the bright, nearby BL Lacertae object Markarian 421 (Mrk 421) taken in 2013 January-March, involving GASP-WEBT, Swift, NuSTAR, Fermi-LAT, MAGIC, VERITAS, and other collaborations and instruments, providing data from radio to very-high-energy (VHE) gamma-ray bands. NuSTAR yielded previously unattainable sensitivity in the 3-79 keV range, revealing that the spectrum softens when the source is dimmer until the X-ray spectral shape saturates into a steep power law with photon index Gamma of approximately 3, with no evidence for an exponential cutoff or additional hard components up to 80 keV. For the first time, we observed both the synchrotron and the inverse-Compton peaks of the spectral energy distribution (SED) simultaneously shifted to frequencies below the typical quiescent state by an order of magnitude. The fractional variability as a function of photon energy shows a double-bump structure that relates to the two bumps of the broadband SED. In each bump, the variability increases with energy, which, in the framework of the synchrotron self-Compton model, implies that the electrons with higher energies are more variable. The measured multiband variability, the significant X-ray-to-VHE correlation down to some of the lowest fluxes ever observed in both bands, the lack of correlation between optical/UV and X-ray flux, the low degree of polarization and its significant (random) variations, the short estimated electron cooling time, and the significantly longer variability timescale observed in the NuSTAR light curves point toward in situ electron acceleration and suggest that there are multiple compact regions contributing to the broadband emission of Mrk 421 during low-activity states.
Generating variable and random schedules of reinforcement using Microsoft Excel macros.
Bancroft, Stacie L; Bourret, Jason C
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
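The same schedule values can be generated outside a spreadsheet; a minimal Python analogue of the article's Excel macros is sketched below. Random-ratio requirements are geometric (constant probability of reinforcement per response), and exponential intervals are used here as the continuous analogue of a constant reinforcement probability per unit time; all schedule parameters are illustrative.

```python
import random

def random_ratio(p, n, rng):
    """Random-ratio values: constant probability p of reinforcement per
    response, i.e. geometric response requirements with mean 1/p."""
    out = []
    for _ in range(n):
        k = 1
        while rng.random() >= p:
            k += 1
        out.append(k)
    return out

def variable_interval(mean_s, n, rng):
    """Random-interval analogue: exponential intervals with the given
    mean, i.e. a constant reinforcement probability per unit time."""
    return [rng.expovariate(1.0 / mean_s) for _ in range(n)]

rng = random.Random(1)
rr = random_ratio(0.2, 10_000, rng)        # RR 5 schedule values
vi = variable_interval(30.0, 10_000, rng)  # VI 30-s schedule values
rr_mean = sum(rr) / len(rr)
vi_mean = sum(vi) / len(vi)
print(rr_mean, vi_mean)
```

The empirical means converge to the nominal schedule values (5 responses and 30 s), which is the check one would also perform on macro-generated lists.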
Steimer, Andreas; Schindler, Kaspar
2015-01-01
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which is missing such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation. PMID:26203657
Explicit equilibria in a kinetic model of gambling
NASA Astrophysics Data System (ADS)
Bassetti, F.; Toscani, G.
2010-06-01
We introduce and discuss a nonlinear kinetic equation of Boltzmann type which describes the evolution of wealth in a pure gambling process, where the entire sum of the wealths of two agents is put up for gambling and randomly shared between the agents. For this equation the analytical form of the steady states is found for various realizations of the random fraction of the sum shared between the agents. Among others, the exponential distribution appears as the steady state in the case of a uniformly distributed random fraction, while a Gamma distribution appears for a random fraction which is Beta distributed. The case in which the gambling game is only conservative-in-the-mean is shown to lead to an explicit heavy-tailed distribution.
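The exponential steady state for a uniformly distributed random fraction can be observed directly in an agent-based simulation of the pure gambling rule (population size, number of rounds and the unit initial wealth are illustrative choices). For an exponential law the variance equals the squared mean, which gives a simple check.

```python
import random

def gamble(n_agents=10_000, rounds=200_000, seed=5):
    """Pure gambling: repeatedly pick two agents, pool their wealth, and
    split the pool by a uniform random fraction."""
    rng = random.Random(seed)
    w = [1.0] * n_agents
    for _ in range(rounds):
        i = rng.randrange(n_agents)
        j = rng.randrange(n_agents)
        if i == j:
            continue
        pool = w[i] + w[j]
        r = rng.random()
        w[i], w[j] = r * pool, (1.0 - r) * pool
    return w

w = gamble()
w_mean = sum(w) / len(w)
w_var = sum((x - w_mean) ** 2 for x in w) / len(w)
print(w_mean, w_var)   # exponential steady state: variance ~ mean^2 = 1
```

Total wealth is conserved by each exchange, so the mean stays at 1 while the wealth distribution relaxes to the exponential law predicted by the kinetic equation.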
Hamed, Kaveh Akbari; Gregg, Robert D
2016-07-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.
Do Vampires Exist? Using Spreadsheets To Investigate a Common Folktale.
ERIC Educational Resources Information Center
Drier, Hollylynne Stohl
1999-01-01
Describes the use of spreadsheets in a third grade class to teach basic mathematical concepts by investigating the existence of vampires. Incorporates addition and multiplication skills, patterning, variables, formulas, exponential growth, and proof by contradiction. (LRW)
Xu, Yifan; Sun, Jiayang; Carter, Rebecca R; Bogie, Kath M
2014-05-01
Stereophotogrammetric digital imaging enables rapid and accurate detailed 3D wound monitoring. This rich data source was used to develop a statistically validated model to provide personalized predictive healing information for chronic wounds. 147 valid wound images were obtained from a sample of 13 category III/IV pressure ulcers from 10 individuals with spinal cord injury. Statistical comparison of several models indicated that the best fit for the clinical data was a personalized mixed-effects exponential model (pMEE), with initial wound size and time as predictors and observed wound size as the response variable. Random effects capture personalized differences. Other models are only valid when wound size constantly decreases; this is often not achieved for clinical wounds, and our model accommodates this reality. Two criteria to determine effective healing time outcomes are proposed: the r-fold wound size reduction time, t(r-fold), is defined as the time when wound size reduces to 1/r of the initial size; t(δ) is defined as the time when the rate of wound healing/size change reduces to a predetermined threshold δ < 0. Healing rate differs from patient to patient. Model development and validation indicate that accurate monitoring of wound geometry can adaptively predict healing progression and that larger wounds heal more rapidly. Accuracy of the prediction curve in the current model improves with each additional evaluation. Routine assessment of wounds using detailed stereophotogrammetric imaging can provide personalized predictions of wound healing time. Application of a valid model will help the clinical team to determine wound management care pathways. Published by Elsevier Ltd.
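For a single wound, the fixed part of such an exponential model reduces to fitting size = A * exp(-k t) on the log scale, from which the proposed t(r-fold) criterion follows as ln(r)/k. The sketch below uses synthetic noiseless areas with assumed values (initial size 10, rate 0.05 per day); it is a single-wound analogue, not the paper's mixed-effects estimator.

```python
import math

def fit_exponential(times, sizes):
    """Least-squares fit of size = A * exp(-k * t) on the log scale."""
    n = len(times)
    ys = [math.log(s) for s in sizes]
    st = sum(times); sy = sum(ys)
    stt = sum(t * t for t in times)
    sty = sum(t * y for t, y in zip(times, ys))
    k = -(n * sty - st * sy) / (n * stt - st * st)   # decay rate
    A = math.exp((sy + k * st) / n)                  # initial size
    return A, k

def t_r_fold(k, r):
    """Time for the wound to shrink to 1/r of its initial size."""
    return math.log(r) / k

times = [0, 7, 14, 21, 28]                            # days (illustrative)
sizes = [10.0 * math.exp(-0.05 * t) for t in times]   # synthetic areas
A, k = fit_exponential(times, sizes)
print(A, k, t_r_fold(k, 2))   # half-size time = ln(2) / k
```

In the mixed-effects setting, A and k would additionally carry per-patient random effects, which is what makes the prediction personalized.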
NASA Astrophysics Data System (ADS)
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
Novikov, S V
2018-01-14
Diffusive transport of a particle in a spatially correlated random energy landscape having an exponential density of states has been considered. We exactly calculate the diffusivity in the nondispersive quasi-equilibrium transport regime for the 1D transport model and find that for slowly decaying correlation functions the diffusivity becomes singular at some particular temperature higher than the temperature of the transition to the true non-equilibrium dispersive transport regime. This means that the diffusion becomes anomalous and does not follow the usual ∝ t^{1/2} law. In such a situation, the fully developed non-equilibrium regime emerges in two stages: first, at some temperature there is the transition from normal to anomalous diffusion, and then at a lower temperature the average velocity for the infinite medium goes to zero, thus indicating the development of the true dispersive regime. Validity of the Einstein relation is discussed for the situation where the diffusivity does exist. We also provide some arguments in favor of conservation of the major features of the new transition scenario in higher dimensions.
Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros
Bancroft, Stacie L; Bourret, Jason C
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values. PMID:18595286
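The article describes Excel/VBA macros; the same two schedule types can be sketched in Python (an analogue, not the authors' code). Variable-interval values are sampled around a target mean, while random-interval values come from a constant per-tick reinforcement probability, which makes the intervals geometrically distributed:

```python
import random

def variable_interval_values(mean_interval, n, spread=0.5, rng=random):
    """Variable-interval: n interval values sampled uniformly within
    mean_interval * (1 +/- spread), so the schedule averages mean_interval."""
    lo, hi = mean_interval * (1 - spread), mean_interval * (1 + spread)
    return [rng.uniform(lo, hi) for _ in range(n)]

def random_interval_values(mean_interval, n, tick=1.0, rng=random):
    """Random-interval: reinforcement becomes available with a constant
    probability tick/mean_interval at every tick of the clock."""
    p = tick / mean_interval
    values = []
    for _ in range(n):
        t = tick
        while rng.random() > p:
            t += tick
        values.append(t)
    return values
```

Ratio variants are analogous, with responses counted in place of clock ticks.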
Efficient Quantum Pseudorandomness.
Brandão, Fernando G S L; Harrow, Aram W; Horodecki, Michał
2016-04-29
Randomness is both a useful way to model natural systems and a useful tool for engineered systems, e.g., in computation, communication, and control. Fully random transformations require exponential time for either classical or quantum systems, but in many cases pseudorandom operations can emulate certain properties of truly random ones. Indeed, in the classical realm there is by now a well-developed theory regarding such pseudorandom operations. However, the construction of such objects turns out to be much harder in the quantum case. Here, we show that random quantum unitary time evolutions ("circuits") are a powerful source of quantum pseudorandomness. This gives for the first time a polynomial-time construction of quantum unitary designs, which can replace fully random operations in most applications, and shows that generic quantum dynamics cannot be distinguished from truly random processes. We discuss applications of our result to quantum information science, cryptography, and understanding the self-equilibration of closed quantum dynamics.
Propagation of exponential shock wave in an axisymmetric rotating non-ideal dusty gas
NASA Astrophysics Data System (ADS)
Nath, G.
2016-09-01
One-dimensional unsteady isothermal and adiabatic flow behind a strong exponential shock wave propagating in a rotational axisymmetric mixture of non-ideal gas and small solid particles, which has variable azimuthal and axial fluid velocities, is analyzed. The shock wave is driven out by a piston moving with time according to an exponential law. The azimuthal and axial components of the fluid velocity in the ambient medium are assumed to vary according to exponential laws. In the present work, the small solid particles are considered as a pseudo-fluid, with the assumption that the equilibrium flow conditions are maintained in the flow field and that the viscous stress and heat conduction of the mixture are negligible. Solutions are obtained in both cases, when the flow between the shock and the piston is isothermal or adiabatic, by taking into account the components of the vorticity vector and the compressibility. It is found that the assumption of zero temperature gradient brings a profound change in the density, the axial component of the vorticity vector and the compressibility distributions as compared to the adiabatic case. The behavior of the flow variables and the influence on the shock wave propagation of the parameter of non-idealness of the gas b̄ in the mixture, of the mass concentration of solid particles in the mixture Kp, and of the ratio of the density of solid particles to the initial density of the gas G1 are worked out in detail. It is interesting to note that the shock strength increases with an increase in G1, whereas it decreases with an increase in b̄. Also, a comparison between the solutions in the cases of isothermal and adiabatic flows is made.
Nath, G; Sahu, P K
2016-01-01
A self-similar model for one-dimensional unsteady isothermal and adiabatic flows behind a strong exponential shock wave driven out by a cylindrical piston moving with time according to an exponential law in an ideal gas, in the presence of an azimuthal magnetic field and variable density, is discussed in a rotating atmosphere. The ambient medium is assumed to possess radial, axial and azimuthal components of fluid velocity. The initial density, the fluid velocities and the magnetic field of the ambient medium are assumed to vary with time according to an exponential law. The gas is taken to be non-viscous and to have infinite electrical conductivity. Solutions are obtained in both cases, when the flow between the shock and the piston is isothermal or adiabatic, by taking into account the components of the vorticity vector. The effects of the variation of the initial density index, the adiabatic exponent of the gas and the Alfven-Mach number on the flow field behind the shock wave are investigated. It is found that the presence of the magnetic field has a decaying effect on the shock wave. Also, it is observed that the effect of an increase in the magnetic field strength is more pronounced in the case of adiabatic flow than in the case of isothermal flow. The assumption of zero temperature gradient brings a profound change in the density and in the non-dimensional azimuthal and axial components of the vorticity vector distributions in comparison to those in the case of adiabatic flow. A comparison is made between isothermal and adiabatic flows. It is found that an increase in the initial density variation index, the adiabatic exponent and the strength of the magnetic field decreases the shock strength.
NASA Astrophysics Data System (ADS)
Bajargaan, Ruchi; Patel, Arvind
2018-04-01
One-dimensional unsteady adiabatic flow behind an exponential shock wave propagating in a self-gravitating, rotating, axisymmetric dusty gas with heat conduction and radiation heat flux, which has exponentially varying azimuthal and axial fluid velocities, is investigated. The shock wave is driven out by a piston moving with time according to an exponential law. The dusty gas is taken to be a mixture of a non-ideal gas and small solid particles. The density of the ambient medium is assumed to be constant. The equilibrium flow conditions are maintained, and the energy, continuously supplied by the piston, varies exponentially. The heat conduction is expressed in terms of Fourier's law, and the radiation is assumed to be of diffusion type for an optically thick grey gas model. The thermal conductivity and the absorption coefficient are assumed to vary with temperature and density according to a power law. The effects of the variation of the heat transfer parameters, the gravitation parameter and the dusty gas parameters on the shock strength, the distance between the piston and the shock front, and the flow variables are studied in detail. It is interesting to note that the similarity solution exists under constant initial angular velocity, and that the shock strength is independent of the self-gravitation, heat conduction and radiation heat flux.
Schmidt, Anders S; Lauridsen, Kasper G; Adelborg, Kasper; Torp, Peter; Bach, Leif F; Jepsen, Simon M; Hornung, Nete; Deakin, Charles D; Rickers, Hans; Løfgren, Bo
2017-03-08
Several different defibrillators are currently used for cardioversion and defibrillation of cardiac arrhythmias. The efficacy of a novel pulsed biphasic (PB) waveform has not been compared to other biphasic waveforms. Accordingly, this study aims to compare the efficacy and safety of PB shocks with biphasic truncated exponential (BTE) shocks in patients undergoing cardioversion of atrial fibrillation or flutter. This prospective, randomized study included patients admitted for elective direct current cardioversion. Patients were randomized to receive cardioversion using either PB or BTE shocks. We used escalating shocks until sinus rhythm was obtained or to a maximum of 4 shocks. Patients randomized to PB shocks received 90, 120, 150, and 200 J and patients randomized to BTE shocks received 100, 150, 200, and 250 J, as recommended by the manufacturers. In total, 69 patients (51%) received PB shocks and 65 patients (49%) BTE shocks. Successful cardioversion, defined as sinus rhythm 4 hours after cardioversion, was achieved in 43 patients (62%) using PB shocks and in 56 patients (86%) using BTE shocks; ratio 1.4 (95% CI 1.1-1.7), P=0.002. There was no difference in safety (i.e., myocardial injury judged by changes in high-sensitivity troponin I levels); ratio 1.1 (95% CI 1.0-1.3), P=0.15. The study was terminated prematurely because of an adverse event. Cardioversion using a BTE waveform was more effective when compared with a PB waveform. There was no difference in safety between the 2 waveforms, as judged by changes in troponin I levels. URL: http://www.clinicaltrials.gov. Unique identifier: NCT02317029. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
Decaying two-dimensional turbulence in a circular container.
Schneider, Kai; Farge, Marie
2005-12-09
We present direct numerical simulations of two-dimensional decaying turbulence at initial Reynolds number 5 × 10^4 in a circular container with no-slip boundary conditions. Starting from random initial conditions, the flow rapidly exhibits self-organization into coherent vortices. We study their formation and the role of the viscous boundary layer in the production and decay of integral quantities. The no-slip wall produces vortices which are injected into the bulk flow and tend to compensate the enstrophy dissipation. The self-organization of the flow is reflected by the transition of the initially Gaussian vorticity probability density function (PDF) towards a distribution with exponential tails. Because of the presence of coherent vortices, the pressure PDF becomes strongly skewed, with exponential tails for negative values.
NASA Astrophysics Data System (ADS)
Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen
2017-05-01
Actual field pumping tests often involve variable pumping rates, which cannot be handled by the classical constant-rate or constant-head test models and often require a convolution process to interpret the test data. In this study, we propose a semi-analytical model considering an exponentially decreasing pumping rate that starts at a certain (higher) rate and eventually stabilizes at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decayed rate is that the drawdowns will decrease over a certain period of time during the intermediate pumping stage, which has never been seen in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate function is bounded by the two asymptotic curves of the constant-rate tests with rates equal to the starting and stabilizing rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on these characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters by using the genetic algorithm.
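The abstract does not print the rate function itself; a form consistent with "started at a higher rate and eventually stabilized at a lower rate" is Q(t) = Q_stab + (Q_start − Q_stab)·exp(−λt), sketched below with hypothetical parameter names:

```python
import math

def pumping_rate(t, q_start, q_stab, decay):
    """Exponentially decaying pumping rate: Q(0) = q_start, Q(t) -> q_stab
    as t -> infinity, with decay constant 'decay' (lambda)."""
    return q_stab + (q_start - q_stab) * math.exp(-decay * t)
```

This form is consistent with the bounding behaviour described above: early-time drawdowns track a constant-rate test at q_start, and late-time drawdowns track one at q_stab.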
Plasma fluctuations as Markovian noise.
Li, B; Hazeltine, R D; Gentle, K W
2007-12-01
Noise theory is used to study the correlations of stationary Markovian fluctuations that are homogeneous and isotropic in space. The relaxation of the fluctuations is modeled by the diffusion equation. The spatial correlations of random fluctuations are modeled by the exponential decay. Based on these models, the temporal correlations of random fluctuations, such as the correlation function and the power spectrum, are calculated. We find that the diffusion process can give rise to the decay of the correlation function and a broad frequency spectrum of random fluctuations. We also find that the transport coefficients may be estimated by the correlation length and the correlation time. The theoretical results are compared with the observed plasma density fluctuations from the tokamak and helimak experiments.
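The estimate mentioned above, transport coefficients from the correlation length and correlation time, is dimensional: D ~ L_c²/τ_c, with the spatial correlation modeled as exponential decay. A minimal sketch (function names are illustrative, not from the paper):

```python
import math

def spatial_correlation(r, corr_length):
    """Exponential-decay model for the spatial correlation of fluctuations."""
    return math.exp(-r / corr_length)

def diffusivity_estimate(corr_length, corr_time):
    """Dimensional estimate of the transport (diffusion) coefficient,
    D ~ Lc**2 / tau_c."""
    return corr_length ** 2 / corr_time
```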
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Haider, Farwa; Muhammad, Taseer; Alsaedi, Ahmed
2018-03-01
Here Darcy-Forchheimer flow of viscous nanofluid with Brownian motion and thermophoresis is addressed. An incompressible viscous liquid saturates the porous space through Darcy-Forchheimer relation. Flow is generated by an exponentially stretching curved surface. System of partial differential equations is converted into ordinary differential system. Nonlinear systems are solved numerically by NDSolve technique. Graphs are plotted for the outcomes of various pertinent variables. Skin friction coefficient and local Nusselt and Sherwood numbers have been physically interpreted. Our results indicate that the local Nusselt and Sherwood numbers are reduced for larger values of local porosity parameter and Forchheimer number.
Flows in a tube structure: Equation on the graph
NASA Astrophysics Data System (ADS)
Panasenko, Grigory; Pileckas, Konstantin
2014-08-01
The steady-state Navier-Stokes equations in thin structures lead to some elliptic second order equation for the macroscopic pressure on a graph. At the nodes of the graph the pressure satisfies Kirchhoff-type junction conditions. In the non-steady case the problem for the macroscopic pressure on the graph becomes nonlocal in time. In the paper we study the existence and uniqueness of a solution to such a one-dimensional model on the graph for a pipe-wise network. We also prove the exponential decay of the solution with respect to the time variable in the case when the data decay exponentially with respect to time.
Syed Ali, M; Vadivel, R; Saravanakumar, R
2018-06-01
This study examines the problem of robust reliable control for Takagi-Sugeno (T-S) fuzzy Markovian jumping delayed neural networks with probabilistic actuator faults and leakage terms, under an event-triggered communication scheme. First, the randomly occurring actuator faults and their failure rates are governed by two sets of unrelated random variables satisfying certain probabilistic failure conditions for every actuator, and a new type of distribution-based event-triggered fault model is proposed that accounts for the effect of transmission delay. Second, a T-S fuzzy model is adopted for the neural networks, and the randomness of actuator failures is modeled in a Markov jump model framework. Third, to guarantee that the considered closed-loop system is exponentially mean-square stable with a prescribed reliable control performance, a Markov jump event-triggered scheme is designed, which is the main purpose of this study. Fourth, by constructing an appropriate Lyapunov-Krasovskii functional and employing the Newton-Leibniz formulation and integral inequalities, several delay-dependent criteria for the solvability of the addressed problem are derived. The obtained stability criteria are stated in terms of linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Finally, numerical examples are given to illustrate the effectiveness and reduced conservatism of the proposed results over existing ones; one example is supported by a real-life application of the benchmark problem. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Privacy-Preserving RFID Authentication Using Public Exponent Three RSA Algorithm
NASA Astrophysics Data System (ADS)
Kim, Yoonjeong; Ohm, Seongyong; Yi, Kang
In this letter, we propose a privacy-preserving authentication protocol based on the RSA cryptosystem in an RFID environment. To both overcome the resource restriction and strengthen security, our protocol uses only modular exponentiation with exponent three at the RFID tag side, applied to a message padded with random bits whose length is greater than one-sixth of the whole message length.
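The letter's protocol details are not in this summary; the tag-side operation it describes, appending random padding longer than one-sixth of the message and cubing modulo n, can be sketched with toy parameters. Everything below (primes, bit sizes, function names) is illustrative only and far too small for real security:

```python
import math
import random

# Toy RSA modulus (illustrative only; real moduli are >= 1024 bits).
P, Q = 1019, 1031
N = P * Q
LAM = math.lcm(P - 1, Q - 1)
D = pow(3, -1, LAM)            # reader's private exponent for e = 3

def tag_respond(message, modulus_bits=20, rng=random):
    """Tag side: pad with random bits (> 1/6 of the total length), then a
    single modular exponentiation with public exponent three."""
    pad_bits = modulus_bits // 6 + 1
    padded = (message << pad_bits) | rng.getrandbits(pad_bits)
    return pow(padded, 3, N), pad_bits

def reader_recover(cipher, pad_bits):
    """Reader side: full RSA decryption, then strip the random padding."""
    return pow(cipher, D, N) >> pad_bits
```

The randomized padding is what makes repeated responses from the same tag unlinkable; the fixed exponent three keeps the tag's computation to a single small modular exponentiation.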
Rolls, David A.; Wang, Peng; McBryde, Emma; Pattison, Philippa; Robins, Garry
2015-01-01
We compare two broad types of empirically grounded random network models in terms of their abilities to capture both network features and simulated Susceptible-Infected-Recovered (SIR) epidemic dynamics. The types of network models are exponential random graph models (ERGMs) and extensions of the configuration model. We use three kinds of empirical contact networks, chosen to provide both variety and realistic patterns of human contact: a highly clustered network, a bipartite network and a snowball sampled network of a “hidden population”. In the case of the snowball sampled network we present a novel method for fitting an edge-triangle model. In our results, ERGMs consistently capture clustering as well or better than configuration-type models, but the latter models better capture the node degree distribution. Despite the additional computational requirements to fit ERGMs to empirical networks, the use of ERGMs provides only a slight improvement in the ability of the models to recreate epidemic features of the empirical network in simulated SIR epidemics. Generally, SIR epidemic results from using configuration-type models fall between those from a random network model (i.e., an Erdős-Rényi model) and an ERGM. The addition of subgraphs of size four to edge-triangle type models does improve agreement with the empirical network for smaller densities in clustered networks. Additional subgraphs do not make a noticeable difference in our example, although we would expect the ability to model cliques to be helpful for contact networks exhibiting household structure. PMID:26555701
Exponentiated power Lindley distribution.
Ashour, Samir K; Eltehiwy, Mahmoud A
2015-11-01
A new generalization of the Lindley distribution was recently proposed by Ghitany et al. [1], called the power Lindley distribution. Another generalization of the Lindley distribution was introduced by Nadarajah et al. [2], named the generalized Lindley distribution. This paper proposes a further generalization of the Lindley distribution which subsumes both. We refer to this new generalization as the exponentiated power Lindley distribution. The new distribution is important since it contains as special sub-models some widely known distributions in addition to the above two models, such as the Lindley distribution among many others. It also provides more flexibility to analyze complex real data sets. We study some statistical properties of the new distribution. We discuss maximum likelihood estimation of the distribution parameters. Least squares estimation is used to evaluate the parameters. Three algorithms are proposed for generating random data from the proposed distribution. An application of the model to a real data set is analyzed using the new distribution, which shows that the exponentiated power Lindley distribution can be used quite effectively in analyzing real lifetime data.
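The paper's three generation algorithms are not reproduced here; one standard approach (an assumption, not necessarily one of the paper's three) is inverse-transform sampling. The EPL CDF is F(x) = [1 − (1 + βx^α/(β+1))·exp(−βx^α)]^θ, so draw U uniform, set V = U^(1/θ), and invert the power Lindley CDF numerically:

```python
import math
import random

def power_lindley_cdf(x, alpha, beta):
    """CDF of the power Lindley distribution (Ghitany et al.)."""
    z = x ** alpha
    return 1.0 - (1.0 + beta * z / (beta + 1.0)) * math.exp(-beta * z)

def epl_cdf(x, alpha, beta, theta):
    """Exponentiated power Lindley CDF: the power Lindley CDF raised to theta."""
    return power_lindley_cdf(x, alpha, beta) ** theta

def epl_sample(alpha, beta, theta, rng=random):
    """Inverse-transform draw from the exponentiated power Lindley distribution."""
    v = rng.random() ** (1.0 / theta)             # undo the exponentiation step
    hi = 1.0
    while power_lindley_cdf(hi, alpha, beta) < v:  # bracket the quantile
        hi *= 2.0
    lo = 0.0
    for _ in range(80):                            # bisection on the CDF
        mid = 0.5 * (lo + hi)
        if power_lindley_cdf(mid, alpha, beta) < v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection is used for robustness; a closed-form inverse would require the Lambert W function.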
Secure uniform random-number extraction via incoherent strategies
NASA Astrophysics Data System (ADS)
Hayashi, Masahito; Zhu, Huangjun
2018-01-01
To guarantee the security of uniform random numbers generated by a quantum random-number generator, we study secure extraction of uniform random numbers when the environment of a given quantum state is controlled by the third party, the eavesdropper. Here we restrict our operations to incoherent strategies that are composed of the measurement on the computational basis and incoherent operations (or incoherence-preserving operations). We show that the maximum secure extraction rate is equal to the relative entropy of coherence. By contrast, the coherence of formation gives the extraction rate when a certain constraint is imposed on the eavesdropper's operations. The condition under which the two extraction rates coincide is then determined. Furthermore, we find that the exponential decreasing rate of the leaked information is characterized by Rényi relative entropies of coherence. These results clarify the power of incoherent strategies in random-number generation, and can be applied to guarantee the quality of random numbers generated by a quantum random-number generator.
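For a single qubit this result is easy to make concrete: for a pure state a|0⟩ + b|1⟩, the relative entropy of coherence reduces to the binary entropy of |a|², which is then the secure extraction rate in bits per copy. This is a worked special case under that assumption, not the paper's general argument:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

def secure_rate_pure_qubit(a_sq):
    """Relative entropy of coherence C_r = S(diag(rho)) - S(rho) for the pure
    qubit a|0> + b|1>: S(rho) = 0 for a pure state, so C_r is the binary
    entropy of |a|^2 = a_sq."""
    return shannon_entropy([a_sq, 1.0 - a_sq])
```

The |+⟩ state (a_sq = 0.5) yields one secure random bit per copy; an incoherent basis state yields none.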
A Bayesian, generalized frailty model for comet assays.
Ghebretinsae, Aklilu Habteab; Faes, Christel; Molenberghs, Geert; De Boeck, Marlies; Geys, Helena
2013-05-01
This paper proposes a flexible modeling approach for so-called comet assay data regularly encountered in preclinical research. While such data consist of non-Gaussian outcomes in a multilevel hierarchical structure, traditional analyses typically completely or partly ignore this hierarchical nature by summarizing measurements within a cluster. Non-Gaussian outcomes are often modeled using exponential family models. This is true not only for binary and count data, but also, for example, for time-to-event outcomes. Two important reasons for extending this family are (1) the possible occurrence of overdispersion, meaning that the variability in the data may not be adequately described by the models, which often exhibit a prescribed mean-variance link, and (2) the accommodation of a hierarchical structure in the data, owing to clustering in the data. The first issue is dealt with through so-called overdispersion models. Clustering is often accommodated through the inclusion of random subject-specific effects. Though not always, one conventionally assumes such random effects to be normally distributed. In the case of time-to-event data, one encounters, for example, the gamma frailty model (Duchateau and Janssen, 2007). While both of these issues may occur simultaneously, models combining both are uncommon. Molenberghs et al. (2010) proposed a broad class of generalized linear models accommodating overdispersion and clustering through two separate sets of random effects. Here, we use this method to model data from a comet assay with a three-level hierarchical structure. Although a conjugate gamma random effect is used for the overdispersion random effect, both gamma and normal random effects are considered for the hierarchical random effect. Apart from model formulation, we place emphasis on Bayesian estimation.
Our proposed method has an upper hand over the traditional analysis in that it (1) uses the appropriate distribution stipulated in the literature; (2) deals with the complete hierarchical nature; and (3) uses all information instead of summary measures. The fit of the model to the comet assay is compared against the background of more conventional model fits. Results indicate the toxicity of 1,2-dimethylhydrazine dihydrochloride at different dose levels (low, medium, and high).
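The combined model class described above can be sketched by simulation: a mean-one gamma multiplier carries the overdispersion while a normal cluster effect carries the hierarchy, here inside a Poisson count model. The outcome type and all parameter values are hypothetical, chosen only to illustrate the two separate sets of random effects:

```python
import math
import random

def poisson_draw(lam, rng):
    """Poisson sample via Knuth's method; adequate for the small means here."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_combined(n_clusters, n_per_cluster, beta0, sigma_b, a, seed=0):
    """Counts Y ~ Poisson(theta * exp(beta0 + b_i)): the gamma 'theta'
    (mean 1, shape a) handles overdispersion, the normal 'b_i' handles
    clustering -- the two separate random-effect sets of the combined model."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_clusters):
        b_i = rng.gauss(0.0, sigma_b)
        for _ in range(n_per_cluster):
            theta = rng.gammavariate(a, 1.0 / a)
            data.append(poisson_draw(theta * math.exp(beta0 + b_i), rng))
    return data
```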
A scaling law for random walks on networks
Perkins, Theodore J.; Foxall, Eric; Glass, Leon; Edwards, Roderick
2014-01-01
The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics. PMID:25311870
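A related, simpler quantity, the distribution of walk length to absorption, can be computed directly from the substochastic transition matrix over the transient states; on a finite network this tail is geometric (exponential-type). This is an illustration of tail computation only, not the paper's path-distribution theory, and the example matrix is made up:

```python
def absorption_tail(p_transient, start, n_max):
    """P(walk length > n) for n = 1..n_max, for a walk that leaves the
    transient set with the probability mass missing from each row."""
    m = len(p_transient)
    probs = [0.0] * m
    probs[start] = 1.0
    tail = []
    for _ in range(n_max):
        probs = [sum(probs[i] * p_transient[i][j] for i in range(m))
                 for j in range(m)]
        tail.append(sum(probs))   # mass still un-absorbed after this step
    return tail

# Two transient states; 25% chance of absorption from either at each step.
P = [[0.50, 0.25],
     [0.25, 0.50]]
tail = absorption_tail(P, 0, 30)
```

The tail ratio tail[n+1]/tail[n] converges to the dominant eigenvalue of the transient block (0.75 here), which sets the exponential decay rate.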
Inflation with a graceful exit in a random landscape
NASA Astrophysics Data System (ADS)
Pedro, F. G.; Westphal, A.
2017-03-01
We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix, as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small field ranges and an upper bound N ≪ 10 on the number N of light fields participating during inflation, from the non-observation of negative spatial curvature.
Computer Modelling and Simulation of Solar PV Array Characteristics
NASA Astrophysics Data System (ADS)
Gautam, Nalin Kumar
2003-02-01
The main objective of my PhD research work was to study the behaviour of inter-connected solar photovoltaic (PV) arrays. The approach involved the construction of mathematical models to investigate different types of research problems related to the energy yield, fault tolerance, efficiency and optimal sizing of inter-connected solar PV array systems. My research work can be divided into four different types of research problems: 1. Modeling of inter-connected solar PV array systems to investigate their electrical behavior, 2. Modeling of different inter-connected solar PV array networks to predict their expected operational lifetimes, 3. Modeling solar radiation estimation and its variability, and 4. Modeling of a coupled system to estimate the size of PV array and battery-bank in the stand-alone inter-connected solar PV system where the solar PV system depends on a system providing solar radiant energy. The successful application of mathematics to the above-mentioned problems entailed three phases: 1. The formulation of the problem in a mathematical form using numerical, optimization, probabilistic and statistical methods / techniques, 2. The translation of the mathematical models using C++ to simulate them on a computer, and 3. The interpretation of the results to see how closely they correlated with the real data. The array is the most cost-intensive component of the solar PV system. Since the electrical performance as well as the life properties of an array are highly sensitive to field conditions, different characteristics of the arrays, such as energy yield, operational lifetime, collector orientation, and optimal sizing, were investigated in order to improve their efficiency, fault-tolerance and reliability. Three solar cell interconnection configurations in the array - series-parallel, total-cross-tied, and bridge-linked - were considered.
The electrical characteristics of these configurations were investigated to find out one that is comparatively less susceptible to the mismatches due to manufacturer's tolerances in cell characteristics, shadowing, soiling and aging of solar cells. The current-voltage curves and the values of energy yield characterized by maximum-power points and fill factors for these arrays were also obtained. Two different mathematical models, one for smaller size arrays and the other for the larger size arrays, were developed. The first model takes account of the partial differential equations with boundary value conditions, whereas the second one involves the simple linear programming concept. Based on the initial information on the values of short-circuit current and open-circuit voltage of thirty-six single-crystalline silicon solar cells provided by a manufacturer, the values of these parameters for up to 14,400 solar cells were generated randomly. Thus, the investigations were done for three different cases of array sizes, i.e., (6 x 6), (36 x 8) and (720 x 20), for each configuration. The operational lifetimes of different interconnected solar PV arrays and the improvement in their life properties through different interconnection and modularized configurations were investigated using a reliability-index model. Under normal conditions, the efficiency of a solar cell degrades in an exponential manner, and its operational life above a lowest admissible efficiency may be considered as the upper bound of its lifetime. Under field conditions, the solar cell may fail any time due to environmental stresses, or it may function up to the end of its expected lifetime. In view of this, the lifetime of a solar cell in an array was represented by an exponentially distributed random variable. At any instant of time t, this random variable was considered to have two states: (i) the cell functioned till time t, or (ii) the cell failed within time t. 
It was considered that the functioning of the solar cell included its operation at an efficiency decaying with time under normal conditions. It was assumed that the lifetime of a solar cell had the lack-of-memory (no-aging) property, which means that no matter how long (say, t) the cell had been operational, the probability that it would last an additional time Δt was independent of t. The operational life of the solar cell above a lowest admissible efficiency was considered as the upper bound of its expected lifetime. The value of this upper bound was evaluated using information provided by the manufacturers of the single-crystalline silicon solar cells. On the basis of these lifetimes, the expected operational lifetimes of the array systems were then obtained. Since investigating the effects of collector orientation on the performance of an array requires continuous values of global solar radiation on a surface, a method to estimate the global solar radiation on a surface (horizontal or tilted) was also proposed. The cloudiness index was defined as the fraction of extraterrestrial radiation that reached the earth's surface when the sky above the location of interest was obscured by cloud cover. The cloud cover at the location of interest during any time interval of a day was assumed to follow a fuzzy random phenomenon. The cloudiness index was therefore considered a fuzzy random variable that accounted for the cloud cover at the location of interest during any time interval of a day. This variable was assumed to depend on four other fuzzy random variables that, respectively, accounted for the cloud cover corresponding to the 1) type of cloud group, 2) climatic region, 3) season with most of the precipitation, and 4) type of precipitation at the location of interest during any time interval. All possible types of cloud covers were categorized into five types of cloud groups.
Each cloud group was considered a fuzzy subset. In this model, the cloud cover at the location of interest during a time interval was taken to be the clouds that obscure the sky above the location. The cloud covers, with all possible types of clouds having transmissivities corresponding to values in the membership range of a fuzzy subset (i.e., a type of cloud group), were considered the membership elements of that fuzzy subset. The transmissivities of different types of cloud covers in a cloud group corresponded to the values in the membership range of that cloud group. Predicate logic (i.e., if---then---, else--- conditions) was used to set the relationships between all the fuzzy random variables. The values of the above-mentioned fuzzy random variables were evaluated to provide the value of the cloudiness index for each time interval at the location of interest. For each case of the fuzzy random variable, a heuristic approach was used to subjectively identify the range ([a, b], where a and b were real numbers within [0, 1] such that a
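The exponential lifetime model above rests on the lack-of-memory property. A minimal numerical check of that property (the rate and the times t, s below are illustrative values, not data from the thesis):

```python
import math
import random

def memoryless_check(rate=0.2, t=3.0, s=2.0, n=200_000, seed=1):
    """Estimate P(T > t+s | T > t) and P(T > s) for an exponential lifetime T.

    For an exponentially distributed lifetime these two probabilities
    coincide (both equal exp(-rate * s)), which is the 'no matter how long
    the cell has been operational' property used to justify the model.
    """
    rng = random.Random(seed)
    samples = [rng.expovariate(rate) for _ in range(n)]
    survived_t = [x for x in samples if x > t]
    p_cond = sum(1 for x in survived_t if x > t + s) / len(survived_t)
    p_s = sum(1 for x in samples if x > s) / n
    return p_cond, p_s
```

Both estimates should agree with each other and with exp(-rate * s) up to Monte Carlo noise.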
NASA Astrophysics Data System (ADS)
Du, Zhong; Tian, Bo; Wu, Xiao-Yu; Yuan, Yu-Qiang
2018-05-01
Studied in this paper is a (2+1)-dimensional coupled nonlinear Schrödinger system with variable coefficients, which describes the propagation of an optical beam inside a two-dimensional graded-index waveguide amplifier with polarization effects. Using the similarity transformation, we derive the type-I and type-II rogue-wave solutions. We graphically present the two types of rogue waves and discuss the influence of the diffraction parameter on them. When the diffraction parameters are exponentially-growing-periodic, exponential, linear and quadratic, we obtain the periodic rogue wave and composite rogue waves, respectively. Supported by the National Natural Science Foundation of China under Grant Nos. 11772017, 11272023, and 11471050, by the Fund of State Key Laboratory of Information Photonics and Optical Communications (Beijing University of Posts and Telecommunications), China (IPOC: 2017ZZ05) and by the Fundamental Research Funds for the Central Universities of China under Grant No. 2011BUPTYB02.
Optimal savings and the value of population.
Arrow, Kenneth J; Bensoussan, Alain; Feng, Qi; Sethi, Suresh P
2007-11-20
We study a model of economic growth in which an exogenously changing population enters in the objective function under total utilitarianism and into the state dynamics as the labor input to the production function. We consider an arbitrary population growth until it reaches a critical level (resp. saturation level) at which point it starts growing exponentially (resp. it stops growing altogether). This requires population as well as capital as state variables. By letting the population variable serve as the surrogate of time, we are still able to depict the optimal path and its convergence to the long-run equilibrium on a two-dimensional phase diagram. The phase diagram consists of a transient curve that reaches the classical curve associated with a positive exponential growth at the time the population reaches the critical level. In the case of an asymptotic population saturation, we expect the transient curve to approach the equilibrium as the population approaches its saturation level. Finally, we characterize the approaches to the classical curve and to the equilibrium.
NASA Technical Reports Server (NTRS)
Tokay, Ali; Petersen, Arthur; Gatlin, Patrick N.; Wingo, Matt; Wolff, David B.; Carey, Lawrence D.
2011-01-01
Dual tipping-bucket gauges were operated at 16 sites in support of ground-based precipitation measurements during the Mid-latitude Continental Convective Clouds Experiment (MC3E). The experiment was conducted in north-central Oklahoma from April 22 through June 6, 2011. The gauge sites were distributed around the Atmospheric Radiation Measurement (ARM) Climate Research facility, where the minimum and maximum separation distances ranged from 1 to 12 km. This study investigates the rainfall variability by employing the stretched exponential function. It will focus on the quantitative assessment of the partial beam of the experiment area in both convective and stratiform rain. The parameters of the exponential function will also be determined for various events. This study is unique for two reasons: first, the existing gauge setup, and second, the highly convective nature of the events, with rain rates well above 100 mm h-1 for 20 minutes. We will compare the findings with previous studies.
NASA Astrophysics Data System (ADS)
Hsiao, Feng-Hsiag
2016-10-01
In this study, a novel approach via improved genetic algorithm (IGA)-based fuzzy observer is proposed to realise exponential optimal H∞ synchronisation and secure communication in multiple time-delay chaotic (MTDC) systems. First, an original message is inserted into the MTDC system. Then, a neural-network (NN) model is employed to approximate the MTDC system. Next, a linear differential inclusion (LDI) state-space representation is established for the dynamics of the NN model. Based on this LDI state-space representation, this study proposes a delay-dependent exponential stability criterion derived in terms of Lyapunov's direct method, thus ensuring that the trajectories of the slave system approach those of the master system. Subsequently, the stability condition of this criterion is reformulated into a linear matrix inequality (LMI). Due to GA's random global optimisation search capabilities, the lower and upper bounds of the search space can be set so that the GA will seek better fuzzy observer feedback gains, accelerating feedback gain-based synchronisation via the LMI-based approach. IGA, which exhibits better performance than traditional GA, is used to synthesise a fuzzy observer to not only realise the exponential synchronisation, but also achieve optimal H∞ performance by minimizing the disturbance attenuation level and recovering the transmitted message. Finally, a numerical example with simulations is given in order to demonstrate the effectiveness of our approach.
Research on the exponential growth effect on network topology: Theoretical and empirical analysis
NASA Astrophysics Data System (ADS)
Li, Shouwei; You, Zongjun
An integrated circuit (IC) industry network has been built in the Yangtze River Delta with the constant expansion of the IC industry. The IC industry network grows exponentially with the establishment of new companies and of their contacts with old firms. Based on preferential attachment and exponential growth, the paper presents analytical results in which the vertex degrees of the scale-free network follow the power-law distribution p(k) ∼ k^(−γ) with γ = 2β + 1, where the parameter β satisfies 0.5 ≤ β ≤ 1. At the same time, we find that the preferential attachment takes place in a dynamic local world whose size is in direct proportion to the size of the whole network. The paper also gives analytical results for non-preferential attachment with exponential growth on random networks. Computer simulations of the model illustrate these analytical results. Through investigations of the enterprises, the paper first presents the distribution of the IC industry and the composition of its industrial chain and service chain. Then the correlative network of the industrial chain and service chain, and its analysis, are presented, together with a correlative analysis of the whole IC industry. Based on the theory of complex networks, an analysis and comparison of the industrial chain network and the service chain network in the Yangtze River Delta are provided.
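The power-law degree tail produced by preferential attachment can be seen in a plain Barabási-Albert sketch. This is not the paper's exponential-growth, dynamic-local-world variant; it is the generic mechanism the analysis builds on, with illustrative sizes:

```python
import random

def grow_network(n_final=3000, m=2, seed=7):
    """Toy preferential-attachment growth: each new node attaches m edges
    to existing nodes with probability proportional to degree. The list
    'targets' repeats each node once per incident edge, so a uniform pick
    from it is a degree-proportional pick. Returns the degree of each node.
    """
    rng = random.Random(seed)
    targets = [0, 1, 1, 0]          # seed pair, repeated by degree
    degree = {0: 2, 1: 2}
    for new in range(2, n_final):
        chosen = set()
        while len(chosen) < m:      # m distinct degree-proportional targets
            chosen.add(rng.choice(targets))
        for t in chosen:
            degree[t] += 1
            targets.extend([t, new])
        degree[new] = m
    return degree
```

The resulting degrees are heavy-tailed: a few hubs accumulate large degree while roughly half of the nodes keep the minimum degree m.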
Stability of uncertain impulsive complex-variable chaotic systems with time-varying delays.
Zheng, Song
2015-09-01
In this paper, the robust exponential stabilization of uncertain impulsive complex-variable chaotic delayed systems is considered with parameters perturbation and delayed impulses. It is assumed that the considered complex-variable chaotic systems have bounded parametric uncertainties together with the state variables on the impulses related to the time-varying delays. Based on the theories of adaptive control and impulsive control, some less conservative and easily verified stability criteria are established for a class of complex-variable chaotic delayed systems with delayed impulses. Some numerical simulations are given to validate the effectiveness of the proposed criteria of impulsive stabilization for uncertain complex-variable chaotic delayed systems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Probabilistic structural analysis of a truss typical for space station
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.
1990-01-01
A three-bay cantilever space truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties, and respective sensitivities, associated with the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described by one of several available distributions, such as the Weibull, exponential, normal, or log-normal. The cumulative distribution functions (CDFs) for the response functions considered, and the sensitivities associated with the primitive variables for a given response, are investigated. These sensitivities help in determining the dominant primitive variables for that response.
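The core of such a probabilistic evaluation can be sketched by Monte Carlo: sample the primitive variables from their assumed distributions and accumulate the empirical CDF of a response. The response formula and every distribution parameter below are made-up placeholders, not the paper's truss model:

```python
import bisect
import random

def response_cdf(n=50_000, seed=42):
    """Monte Carlo sketch of a NESSUS-style evaluation: draw primitive
    variables (load, modulus, area - all hypothetical here) and build the
    empirical CDF of a placeholder response, load / (modulus * area).
    """
    rng = random.Random(seed)
    responses = sorted(
        rng.lognormvariate(0.0, 0.1)            # assumed log-normal load
        / (rng.normalvariate(1.0, 0.05)         # assumed normal modulus
           * rng.weibullvariate(1.0, 8.0))      # assumed Weibull area
        for _ in range(n)
    )

    def cdf(x):
        """Empirical P(response <= x) from the sorted samples."""
        return bisect.bisect_right(responses, x) / n

    return cdf
```

Sensitivities of the kind the paper reports would then be estimated by perturbing one primitive variable's distribution at a time and re-running the sampling.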
Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation
NASA Astrophysics Data System (ADS)
Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien
2018-04-01
We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.
Universal patterns of inequality
NASA Astrophysics Data System (ADS)
Banerjee, Anand; Yakovenko, Victor M.
2010-07-01
Probability distributions of money, income and energy consumption per capita are studied for ensembles of economic agents. The principle of entropy maximization for partitioning of a limited resource gives exponential distributions for the investigated variables. A non-equilibrium difference of money temperatures between different systems generates net fluxes of money and population. To describe income distribution, a stochastic process with additive and multiplicative components is introduced. The resultant distribution interpolates between exponential at the low end and power law at the high end, in agreement with the empirical data for the USA. We show that the increase in income inequality in the USA originates primarily from the increase in the income fraction going to the upper tail, which now exceeds 20% of the total income. Analyzing the data from the World Resources Institute, we find that the distribution of energy consumption per capita around the world can be approximately described by the exponential function. Comparing the data for 1990, 2000 and 2005, we discuss the effect of globalization on the inequality of energy consumption.
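The exponential money distribution from entropy maximization can be reproduced with a minimal conserved-exchange simulation. The uniform re-split rule below is one standard choice in this literature, not necessarily the process used in the paper:

```python
import random

def money_exchange(agents=2000, steps=200_000, m0=100.0, seed=3):
    """Random pairwise money exchange with a conserved total: at each step
    a random pair pools its money and re-splits it uniformly. The stationary
    distribution of individual money is exponential (Boltzmann-Gibbs), as
    entropy maximization under a fixed total predicts.
    """
    rng = random.Random(seed)
    money = [m0] * agents
    for _ in range(steps):
        i, j = rng.randrange(agents), rng.randrange(agents)
        if i == j:
            continue
        total = money[i] + money[j]
        money[i] = rng.uniform(0.0, total)   # random re-split, never negative
        money[j] = total - money[i]
    return money
```

For an exponential distribution with mean m0, the fraction of agents below the mean should approach 1 - 1/e ≈ 0.632.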
Theoretical analysis of exponential transversal method of lines for the diffusion equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salazar, A.; Raydan, M.; Campo, A.
1996-12-31
Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. The method is inspired by the Method of Lines (MOL), with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method presents a very reduced truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.
Piecewise exponential survival times and analysis of case-cohort data.
Li, Yan; Gail, Mitchell H; Preston, Dale L; Graubard, Barry I; Lubin, Jay H
2012-06-15
Case-cohort designs select a random sample of a cohort to be used as control with cases arising from the follow-up of the cohort. Analyses of case-cohort studies with time-varying exposures that use Cox partial likelihood methods can be computer intensive. We propose a piecewise-exponential approach where Poisson regression model parameters are estimated from a pseudolikelihood and the corresponding variances are derived by applying Taylor linearization methods that are used in survey research. The proposed approach is evaluated using Monte Carlo simulations. An illustration is provided using data from the Alpha-Tocopherol, Beta-Carotene Cancer Prevention Study of male smokers in Finland, where a case-cohort study of serum glucose level and pancreatic cancer was analyzed. Copyright © 2012 John Wiley & Sons, Ltd.
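The piecewise-exponential (Poisson) approach starts from a standard data expansion: each subject's follow-up is cut into intervals, with one record per interval carrying person-time and an event flag. A minimal sketch of that layout (the cutpoints are user-chosen; the regression itself is not shown):

```python
def to_piecewise_exponential(time, event, cutpoints):
    """Expand one subject's follow-up into piecewise-exponential records:
    one row per time interval the subject passes through, with the exposure
    (person-time) in that interval and whether the event occurred there.
    """
    rows = []
    start = 0.0
    bounds = list(cutpoints) + [float("inf")]
    for k, end in enumerate(bounds):
        if time <= start:            # follow-up ended before this interval
            break
        exposure = min(time, end) - start
        died_here = 1 if (event and time <= end) else 0
        rows.append({"interval": k, "person_time": exposure, "event": died_here})
        start = end
    return rows
```

Fitting a Poisson regression to the stacked rows, with log person-time as an offset, then yields the piecewise-constant hazard estimates the abstract's pseudolikelihood generalizes to the case-cohort setting.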
Quantifying patterns of research interest evolution
NASA Astrophysics Data System (ADS)
Jia, Tao; Wang, Dashun; Szymanski, Boleslaw
Changing and shifting research interest is an integral part of a scientific career. Despite extensive investigations of various factors that influence a scientist's choice of research topics, quantitative assessments of mechanisms that give rise to macroscopic patterns characterizing research interest evolution of individual scientists remain limited. Here we perform a large-scale analysis of extensive publication records, finding that research interest change follows a reproducible pattern characterized by an exponential distribution. We identify three fundamental features responsible for the observed exponential distribution, which arise from a subtle interplay between exploitation and exploration in research interest evolution. We develop a random walk based model, which adequately reproduces our empirical observations. Our study presents one of the first quantitative analyses of macroscopic patterns governing research interest change, documenting a high degree of regularity underlying scientific research and individual careers.
Optimal estimation for the satellite attitude using star tracker measurements
NASA Technical Reports Server (NTRS)
Lo, J. T.-H.
1986-01-01
An optimal estimation scheme is presented, which determines the satellite attitude using the gyro readings and the star tracker measurements of a commonly used satellite attitude measuring unit. The scheme is mainly based on the exponential Fourier densities that have the desirable closure property under conditioning. By updating a finite and fixed number of parameters, the conditional probability density, which is an exponential Fourier density, is recursively determined. Simulation results indicate that the scheme is more accurate and robust than extended Kalman filtering. It is believed that this approach is applicable to many other attitude measuring units. As no linearization and approximation are necessary in the approach, it is ideal for systems involving high levels of randomness and/or low levels of observability and systems for which accuracy is of overriding importance.
Small violations of Bell inequalities for multipartite pure random states
NASA Astrophysics Data System (ADS)
Drumond, Raphael C.; Duarte, Cristhiano; Oliveira, Roberto I.
2018-05-01
For any finite number of parts, measurements, and outcomes in a Bell scenario, we estimate the probability of random N-qudit pure states to substantially violate any Bell inequality with uniformly bounded coefficients. We prove that under some conditions on the local dimension, the probability to find any significant amount of violation goes to zero exponentially fast as the number of parts goes to infinity. In addition, we also prove that if the number of parts is at least 3, this probability also goes to zero as the local Hilbert space dimension goes to infinity.
Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.
2008-05-01
We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
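The central-part Laplace fit described above has a closed-form maximum-likelihood solution: location is the sample median and scale is the mean absolute deviation about it. A minimal sketch, with the growth-rate definition alongside:

```python
import math

def log_growth_rates(series):
    """Annual logarithmic growth rates R_t = ln(S_{t+1} / S_t)."""
    return [math.log(b / a) for a, b in zip(series, series[1:])]

def laplace_fit(values):
    """Maximum-likelihood fit of a double-exponential (Laplace) density:
    location = sample median, scale b = mean |x - median|.
    """
    xs = sorted(values)
    n = len(xs)
    mid = xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])
    b = sum(abs(x - mid) for x in xs) / n
    return mid, b
```

On real data one would fit only the central part this way and treat the algebraic tails separately, as the abstract does.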
Scalability, Complexity and Reliability in Quantum Information Processing
2007-03-01
hidden subgroup framework to abelian groups which are not finitely generated. An extension of the basic algorithm breaks the Buchmann-Williams... finding short lattice vectors. In [2], we showed that the generalization of the standard method --- random coset state preparation followed by Fourier... sampling --- required exponential time for sufficiently non-abelian groups, including the symmetric group, at least when the Fourier transforms are
Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times
NASA Astrophysics Data System (ADS)
Fa, Kwok Sau
2012-09-01
In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power law and stretched exponential functions for the waiting time probability distribution function; this function can fit well the waiting time distribution for BUND futures traded at LIFFE in 1997.
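The waiting-time density named above is a sum of products of a power law and a stretched exponential. A direct sketch of that functional form (the coefficient tuples below are illustrative, not the paper's fitted LIFFE values):

```python
import math

def waiting_time_pdf(t, terms):
    """Evaluate psi(t) = sum_i c_i * t**(-alpha_i) * exp(-(t / tau_i)**beta_i),
    a sum of power-law-times-stretched-exponential terms. Each element of
    'terms' is a tuple (c, alpha, tau, beta).
    """
    return sum(c * t ** (-alpha) * math.exp(-((t / tau) ** beta))
               for c, alpha, tau, beta in terms)
```

With alpha = 0 and beta = 1 a term reduces to a plain exponential; alpha > 0 adds the short-time power-law behavior and beta < 1 stretches the long-time decay.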
The effect of zealots on the rate of consensus achievement in complex networks
NASA Astrophysics Data System (ADS)
Kashisaz, Hadi; Hosseini, S. Samira; Darooneh, Amir H.
2014-05-01
In this study, we investigate the role of zealots in the voting process on both scale-free (SF) and Watts-Strogatz (WS) networks. We observe that inflexible individuals are very effective in consensus achievement and also in the rate of the ordering process in complex networks. Zealots make the magnetization of the system vary exponentially with time. We find that on SF networks, increasing the zealot population Z exponentially increases the rate of consensus achievement. The time needed for the system to reach a desired magnetization shows a power-law dependence on Z, as does the decay time of the order parameter. We also investigate the role of the zealots' degree in the rate of the ordering process and, finally, analyze the effect of network randomness on the efficiency of zealots. Moving from a regular to a random network, the re-wiring probability P increases. We show that with increasing P, the efficiency of zealots in reducing the consensus-achievement time increases. The rate of consensus is compared with the rate of ordering for different re-wiring probabilities of WS networks.
Random matrix theory filters and currency portfolio optimisation
NASA Astrophysics Data System (ADS)
Daly, J.; Crane, M.; Ruskin, H. J.
2010-04-01
Random matrix theory (RMT) filters have recently been shown to improve the optimisation of financial portfolios. This paper studies the effect of three RMT filters on realised portfolio risk, using bootstrap analysis and out-of-sample testing. We considered the case of a foreign exchange and commodity portfolio, weighted towards foreign exchange, and consisting of 39 assets. This was intended to test the limits of RMT filtering, which is more obviously applicable to portfolios with larger numbers of assets. We considered both equally and exponentially weighted covariance matrices, and observed that, despite the small number of assets involved, RMT filters reduced risk in a way that was consistent with a much larger S&P 500 portfolio. The indicated exponential weightings showed good consistency with the value suggested by RiskMetrics, in contrast to previous results involving stocks. This decay factor, along with the low number of past moves preferred in the filtered, equally weighted case, displayed a trend towards models which were reactive to recent market changes. On testing portfolios with fewer assets, RMT filtering provided less or no overall risk reduction. In particular, no long-term out-of-sample risk reduction was observed for a portfolio consisting of 15 major currencies and commodities.
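The generic RMT filter underlying such studies compares the eigenvalues of the sample correlation matrix against the Marchenko-Pastur upper edge and flattens the noise band. A basic sketch (the paper compares several filter variants and weighting schemes not reproduced here):

```python
import numpy as np

def rmt_filter(returns):
    """Basic RMT noise filter for a T x N matrix of asset returns.
    Eigenvalues of the correlation matrix below the Marchenko-Pastur
    upper edge lambda_+ = (1 + sqrt(N/T))**2 are treated as noise and
    replaced by their average, which preserves the trace.
    """
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    lam, vec = np.linalg.eigh(corr)               # ascending eigenvalues
    lam_plus = (1.0 + np.sqrt(N / T)) ** 2
    noise = lam < lam_plus
    lam = lam.copy()
    if noise.any():
        lam[noise] = lam[noise].mean()            # flatten the noise band
    filtered = (vec * lam) @ vec.T                # rebuild V diag(lam) V^T
    np.fill_diagonal(filtered, 1.0)               # restore unit diagonal
    return filtered
```

The filtered matrix then replaces the raw correlation matrix in the portfolio optimisation step.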
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, Timothy; Rudinger, Kenneth; Young, Kevin
Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
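The decay-rate extraction the abstract refers to fits survival probabilities to S(m) = A p^m + B and reports r derived from p. A toy sketch with a known asymptote (this is a simplified estimator on noiseless synthetic data, not the full analysis in the paper):

```python
import math

def rb_survival(m, p, A=0.5, B=0.5):
    """Ideal RB decay curve: S(m) = A * p**m + B."""
    return A * p ** m + B

def estimate_p(lengths, survivals, B=0.5):
    """Estimate the RB decay parameter p by a log-linear least-squares fit
    of S(m) - B versus m, assuming the asymptote B is known. For a single
    qubit (d = 2) the reported error rate is r = (1 - p) * (d - 1) / d.
    """
    ys = [math.log(s - B) for s in survivals]
    n = len(lengths)
    mx = sum(lengths) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(lengths, ys))
             / sum((x - mx) ** 2 for x in lengths))
    return math.exp(slope)          # slope of log(S - B) vs m is log(p)
```

On noiseless synthetic data the fit recovers p exactly; with finite sampling one would fit both A and B as well.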
Laura P. Leites; Andrew P. Robinson; Nicholas L. Crookston
2009-01-01
Diameter growth (DG) equations in many existing forest growth and yield models use tree crown ratio (CR) as a predictor variable. Where CR is not measured, it is estimated from other measured variables. We evaluated CR estimation accuracy for the models in two Forest Vegetation Simulator variants: the exponential and the logistic CR models used in the North...
Large-amplitude jumps and non-Gaussian dynamics in highly concentrated hard sphere fluids.
Saltzman, Erica J; Schweizer, Kenneth S
2008-05-01
Our microscopic stochastic nonlinear Langevin equation theory of activated dynamics has been employed to study the real-space van Hove function of dense hard sphere fluids and suspensions. At very short times, the van Hove function is a narrow Gaussian. At sufficiently high volume fractions, such that the entropic barrier to relaxation is greater than the thermal energy, its functional form evolves with time to include a rapidly decaying component at small displacements and a long-range exponential tail. The "jump" or decay length scale associated with the tail increases with time (or particle root-mean-square displacement) at fixed volume fraction, and with volume fraction at the mean alpha relaxation time. The jump length at the alpha relaxation time is predicted to be proportional to a measure of the decoupling of self-diffusion and structural relaxation. At long times corresponding to mean displacements of order a particle diameter, the volume fraction dependence of the decay length disappears. A good superposition of the exponential tail feature based on the jump length as a scaling variable is predicted at high volume fractions. Overall, the theoretical results are in good accord with recent simulations and experiments. The basic aspects of the theory are also compared with a classic jump model and a dynamically facilitated continuous time random-walk model. Decoupling of the time scales of different parts of the relaxation process predicted by the theory is qualitatively similar to facilitated dynamics models based on the concept of persistence and exchange times if the elementary event is assumed to be associated with transport on a length scale significantly smaller than the particle size.
Elbasha, Elamin H
2005-05-01
The availability of patient-level data from clinical trials has spurred considerable interest in developing methods for quantifying and presenting uncertainty in cost-effectiveness analysis (CEA). Although the majority of this work has focused on developing methods for using sample data to estimate a confidence interval for an incremental cost-effectiveness ratio (ICER), a small strand of the literature has emphasized the importance of incorporating risk preferences and the trade-off between the mean and the variance of returns to investment in health and medicine (mean-variance analysis). This paper shows how the exponential utility-moment-generating function approach is a natural extension to this branch of the literature for modelling choices from healthcare interventions with uncertain costs and effects. The paper assumes an exponential utility function, which implies constant absolute risk aversion, and is based on the fact that the expected value of this function results in a convenient expression that depends only on the moment-generating function of the random variables. The mean-variance approach is shown to be a special case of this more general framework. The paper characterizes the solution to the resource allocation problem using standard optimization techniques and derives the summary measure researchers need to estimate for each programme when the assumption of risk neutrality does not hold, and compares it to the standard incremental cost-effectiveness ratio. The importance of choosing the correct distribution of costs and effects and the issues related to estimation of the parameters of the distribution are also discussed. An empirical example to illustrate the methods and concepts is provided. Copyright 2004 John Wiley & Sons, Ltd.
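The moment-generating-function shortcut can be sketched for the textbook case of normally distributed net benefit: with utility u(x) = −exp(−r·x), the normal MGF gives E[u(NB)] = −exp(−rμ + r²σ²/2), so the certainty equivalent is CE = μ − (r/2)σ², which is exactly the mean-variance rule as a special case. A minimal check with illustrative numbers (not from the paper):

```python
# Sketch: exponential utility via the MGF of a normal net benefit.
import numpy as np

def certainty_equivalent(mu, sigma, r):
    # Closed form from the normal MGF: CE = mu - (r/2) * sigma^2.
    return mu - 0.5 * r * sigma**2

mu, sigma, r = 100.0, 30.0, 0.01   # illustrative net-benefit mean, sd, risk aversion
ce_analytic = certainty_equivalent(mu, sigma, r)

# Monte Carlo cross-check: invert the utility of the sampled expected utility.
rng = np.random.default_rng(1)
nb = rng.normal(mu, sigma, 200_000)
eu = -np.exp(-r * nb).mean()
ce_mc = -np.log(-eu) / r
```

For non-normal cost/effect distributions the same recipe applies with the appropriate MGF, which is the point of the general framework.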
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiang; Lu, Yang; Lee, Jongho
2016-01-04
Tuning low resistance state is crucial for resistance random access memory (RRAM) that aims to achieve optimal read margin and design flexibility. By back-to-back stacking two nanometallic bipolar RRAMs with different thickness into a complementary structure, we have found that its low resistance can be reliably tuned over several orders of magnitude. Such high tunability originates from the exponential thickness dependence of the high resistance state of nanometallic RRAM, in which electron wave localization in a random network gives rise to the unique scaling behavior. The complementary nanometallic RRAM provides electroforming-free, multi-resistance-state, sub-100 ns switching capability with advantageous characteristics for memory arrays.
Systems Characterization of Combustor Instabilities With Controls Design Emphasis
NASA Technical Reports Server (NTRS)
Kopasakis, George
2004-01-01
This effort performed test data analysis in order to characterize the general behavior of combustor instabilities with emphasis on controls design. The analysis is performed on data obtained from two configurations of a laboratory combustor rig and from a developmental aero-engine combustor. The study has characterized several dynamic behaviors associated with combustor instabilities. These are: frequency and phase randomness, amplitude modulations, net random phase walks, random noise, exponential growth and intra-harmonic couplings. Finally, the very cause of combustor instabilities was explored and it could be attributed to a more general source-load type impedance interaction that includes the thermo-acoustic coupling. Performing these characterizations on different combustors allows for more accurate identification of the cause of these phenomena and their effect on instability.
Development of a methodology to evaluate material accountability in pyroprocess
NASA Astrophysics Data System (ADS)
Woo, Seungmin
This study investigates the effect of the non-uniform nuclide composition in spent fuel on material accountancy in the pyroprocess. High-fidelity depletion simulations are performed using the Monte Carlo code SERPENT in order to determine nuclide composition as a function of axial and radial position within fuel rods and assemblies, and of burnup. For improved accuracy, the simulations use short burnup steps (25 days or less), Xe-equilibrium treatment (to avoid oscillations over burnup steps), an axial moderator temperature distribution, and 30 axial meshes. Analytical solutions of the simplified depletion equations are derived to understand the axial non-uniformity of nuclide composition in spent fuel. The cosine shape of the axial neutron flux distribution dominates the axial non-uniformity of the nuclide composition. Cross sections and time also generate axial non-uniformity, since the exponential term in the analytical solution is a product of the neutron flux, the cross section, and time. The axial concentration distribution for a nuclide with a small cross section becomes steeper than that for a nuclide with a large cross section, because the axial flux is weighted by the cross section in the exponential term of the analytical solution. Similarly, the non-uniformity flattens with increasing burnup, because the time term in the exponential increases. Based on the developed numerical recipes, and by decoupling the axial distributions from predetermined representative radial distributions matched by axial height, the axial and radial composition distributions for representative spent nuclear fuel assemblies (the Type-0, -1, and -2 assemblies after 1, 2, and 3 depletion cycles) are obtained. These data are then modified to represent materials processing in the head-end of the pyroprocess, namely chopping, voloxidation, and granulation.
The expectation and standard deviation of the Pu-to-244Cm ratio obtained by single-granule sampling are calculated using the central limit theorem and the Geary-Hinkley transformation. Then, uncertainty propagation through the key-pyroprocess is conducted to analyze the Material Unaccounted For (MUF) in the system, a random variable defined as the receipt minus the shipment of a process. The random variable LOPu is defined for evaluating the non-detection probability at each Key Measurement Point (KMP) as the original Pu mass minus the Pu mass after a missing scenario. The number of assemblies for which the LOPu reaches 8 kg is considered in this calculation. The probability of detection for the 8 kg LOPu is evaluated with respect to the size of granule and powder using event tree analysis and hypothesis testing. There are possible cases in which the probability of detection for the 8 kg LOPu falls below 95%. In order to enhance the detection rate, a new Material Balance Area (MBA) model is defined for the key-pyroprocess. The probabilities of detection for all spent fuel types based on the new MBA model are greater than 99%. Furthermore, the probability of detection increases significantly when the granule sample sizes used to evaluate the Pu-to-244Cm ratio before the key-pyroprocess are increased. Based on these observations, even though Pu material accountability in the pyroprocess is affected by the non-uniformity of nuclide composition when the Pu-to-244Cm-ratio method is applied, this can be overcome by decreasing the uncertainty of the measured ratio through larger sample sizes and by modifying the MBAs and KMPs. (Abstract shortened by ProQuest.)
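The Geary-Hinkley transformation mentioned above maps the ratio of two independent normal variables to an approximately standard normal quantity, valid when the denominator is very unlikely to be negative. A minimal sketch, with stand-in means and standard deviations chosen purely for illustration (not the paper's values):

```python
# Sketch: Geary-Hinkley approximation for the distribution of a ratio X/Y
# of independent normals, cross-checked by Monte Carlo.
import numpy as np
from math import erf, sqrt

def ratio_cdf(w, mu_x, sig_x, mu_y, sig_y):
    # P(X/Y <= w) ~ Phi(t), t from the Geary-Hinkley transformation
    # (independent X, Y; Y bounded well away from zero).
    t = (mu_y * w - mu_x) / sqrt(sig_y**2 * w**2 + sig_x**2)
    return 0.5 * (1 + erf(t / sqrt(2)))

mu_x, sig_x = 10.0, 0.5    # numerator, e.g. a Pu-like mass (illustrative)
mu_y, sig_y = 2.0, 0.05    # denominator, e.g. a 244Cm-like mass (illustrative)

rng = np.random.default_rng(2)
ratio = rng.normal(mu_x, sig_x, 100_000) / rng.normal(mu_y, sig_y, 100_000)
w = 5.2
approx = ratio_cdf(w, mu_x, sig_x, mu_y, sig_y)
empirical = (ratio <= w).mean()
```

The same transformation yields the approximate standard deviation of the sampled ratio used in the uncertainty propagation.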
Exploring conservative islands using correlated and uncorrelated noise
NASA Astrophysics Data System (ADS)
da Silva, Rafael M.; Manchein, Cesar; Beims, Marcus W.
2018-02-01
In this work, noise is used to analyze the penetration of regular islands in conservative dynamical systems. For this purpose we use the standard map choosing nonlinearity parameters for which a mixed phase space is present. The random variable which simulates noise assumes three distributions, namely equally distributed, normal or Gaussian, and power law (obtained from the same standard map but for other parameters). To investigate the penetration process and explore distinct dynamical behaviors which may occur, we use recurrence time statistics (RTS), Lyapunov exponents and the occupation rate of the phase space. Our main findings are as follows: (i) the standard deviations of the distributions are the most relevant quantity to induce the penetration; (ii) the penetration of islands induce power-law decays in the RTS as a consequence of enhanced trapping; (iii) for the power-law correlated noise an algebraic decay of the RTS is observed, even though sticky motion is absent; and (iv) although strong noise intensities induce an ergodic-like behavior with exponential decays of RTS, the largest Lyapunov exponent is reminiscent of the regular islands.
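A minimal numerical sketch of the setup described above: the standard map (here on the unit torus) iterated with additive Gaussian noise on the momentum, together with a coarse phase-space occupation rate. The nonlinearity parameter, noise level, and grid resolution are illustrative choices, not the paper's.

```python
# Sketch: standard map with additive noise and a coarse occupation rate.
import numpy as np

def noisy_standard_map(p, x, K, n_steps, noise_std, rng):
    # Unit-torus form: p' = p + (K / 2*pi) * sin(2*pi*x) + noise, x' = x + p'.
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        p = (p + K * np.sin(2 * np.pi * x) / (2 * np.pi)
               + rng.normal(0.0, noise_std)) % 1.0
        x = (x + p) % 1.0
        traj[i] = p, x
    return traj

rng = np.random.default_rng(3)
traj = noisy_standard_map(0.3, 0.1, K=2.6, n_steps=5000, noise_std=1e-3, rng=rng)

# Occupation rate: fraction of cells of a 50x50 phase-space grid ever visited.
cells = {(int(p * 50), int(x * 50)) for p, x in traj}
occupancy = len(cells) / 50**2
```

Comparing occupancy (and recurrence times into a reference cell) across noise distributions and standard deviations reproduces the kind of diagnostics used in the study.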
Saleh, H M; Annuar, M S M; Simarani, K
2017-11-01
Degradation of xanthan polymer in aqueous solution by ultrasonic irradiation was investigated. The effects of selected variables, i.e. sonication intensity, irradiation time, concentration of xanthan gum and molar concentration of NaCl in solution, were studied. A combined approach of full factorial design and conventional one-factor-at-a-time was applied to obtain optimum degradation at a sonication power intensity of 11.5 W cm⁻², irradiation time of 120 min and 0.1 g L⁻¹ xanthan in a salt-free solution. Molecular weight reduction of xanthan gum under sonication was described by an exponential decay function, with a higher rate constant for polymer degradation in the salt-free solution. The limiting molecular weight, at which fragments no longer undergo scission, was determined from the function. The incorporation of NaCl in the xanthan solution resulted in a lower limiting molecular weight. The ultrasound-mediated degradation of the aqueous xanthan polymer chain agreed with a random scission model. The side chain of the xanthan polymer is proposed to be the primary site of scission action. Copyright © 2017 Elsevier B.V. All rights reserved.
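The exponential decay toward a limiting molecular weight can be written M(t) = M_lim + (M0 − M_lim)·exp(−kt) and fit by nonlinear least squares. A sketch on synthetic data (all molecular weights and the rate constant are illustrative, not the paper's measurements):

```python
# Sketch: fitting sonication-induced molecular weight decay with a floor M_lim.
import numpy as np
from scipy.optimize import curve_fit

def mw_decay(t, M_lim, M0, k):
    # Exponential decay from M0 toward the limiting molecular weight M_lim.
    return M_lim + (M0 - M_lim) * np.exp(-k * t)

t = np.linspace(0, 120, 13)                  # minutes of irradiation
rng = np.random.default_rng(4)
M = mw_decay(t, 3e5, 2e6, 0.04) * (1 + rng.normal(0, 0.01, t.size))

(M_lim, M0, k), _ = curve_fit(mw_decay, t, M, p0=(1e5, 1e6, 0.01))
```

The fitted M_lim is the quantity the abstract says was "determined from the function"; comparing fits with and without NaCl would show the lower limiting molecular weight in salt.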
Extreme event statistics in a drifting Markov chain
NASA Astrophysics Data System (ADS)
Kindermann, Farina; Hohmann, Michael; Lausch, Tobias; Mayer, Daniel; Schmidt, Felix; Widera, Artur
2017-07-01
We analyze extreme event statistics of experimentally realized Markov chains with various drifts. Our Markov chains are individual trajectories of a single atom diffusing in a one-dimensional periodic potential. Based on more than 500 individual atomic traces we verify the applicability of the Sparre Andersen theorem to our system despite the presence of a drift. We present detailed analysis of four different rare-event statistics for our system: the distributions of extreme values, of record values, of extreme value occurrence in the chain, and of the number of records in the chain. We observe that, for our data, the shape of the extreme event distributions is dominated by the underlying exponential distance distribution extracted from the atomic traces. Furthermore, we find that even small drifts influence the statistics of extreme events and record values, which is supported by numerical simulations, and we identify cases in which the drift can be determined without information about the underlying random variable distributions. Our results facilitate the use of extreme event statistics as a signal for small drifts in correlated trajectories.
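The Sparre Andersen theorem invoked above states that, for a random walk with symmetric continuous step distribution, the probability of staying strictly positive for n steps is the universal value C(2n, n)/4^n, independent of the step distribution. A quick Monte Carlo check (step distributions chosen for illustration):

```python
# Sketch: Monte Carlo verification of the Sparre Andersen survival probability.
import numpy as np
from math import comb

def survival_prob(steps):
    # Fraction of walks whose partial sums stay strictly positive throughout.
    walks = np.cumsum(steps, axis=1)
    return np.all(walks > 0, axis=1).mean()

rng = np.random.default_rng(5)
n, trials = 4, 400_000
emp_gauss = survival_prob(rng.normal(size=(trials, n)))      # Gaussian steps
emp_laplace = survival_prob(rng.laplace(size=(trials, n)))   # two-sided exponential steps
exact = comb(2 * n, n) / 4**n                                # 70/256 for n = 4
```

A drift breaks this universality, which is why verifying the theorem despite the drift, as the paper does, is a nontrivial observation.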
Spatial design and strength of spatial signal: Effects on covariance estimation
Irvine, Kathryn M.; Gitelman, Alix I.; Hoeting, Jennifer A.
2007-01-01
In a spatial regression context, scientists are often interested in a physical interpretation of components of the parametric covariance function. For example, spatial covariance parameter estimates in ecological settings have been interpreted to describe spatial heterogeneity or “patchiness” in a landscape that cannot be explained by measured covariates. In this article, we investigate the influence of the strength of spatial dependence on maximum likelihood (ML) and restricted maximum likelihood (REML) estimates of covariance parameters in an exponential-with-nugget model, and we also examine these influences under different sampling designs—specifically, lattice designs and more realistic random and cluster designs—at differing intensities of sampling (n=144 and 361). We find that neither ML nor REML estimates perform well when the range parameter and/or the nugget-to-sill ratio is large—ML tends to underestimate the autocorrelation function and REML produces highly variable estimates of the autocorrelation function. The best estimates of both the covariance parameters and the autocorrelation function come under the cluster sampling design and large sample sizes. As a motivating example, we consider a spatial model for stream sulfate concentration.
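The exponential-with-nugget model discussed above can be written down directly: the covariance at lag h is partial sill times exp(−h/range), plus a nugget that acts only at h = 0. A minimal sketch with illustrative parameter values, showing the nugget-to-sill ratio the authors vary:

```python
# Sketch: exponential-with-nugget covariance and the implied autocorrelation.
import numpy as np

def exp_nugget_cov(h, nugget, partial_sill, range_):
    # C(0) = nugget + partial sill (the sill); C(h) = partial_sill * exp(-h/range) for h > 0.
    c = partial_sill * np.exp(-h / range_)
    return np.where(h == 0, nugget + partial_sill, c)

h = np.linspace(0, 10, 101)
cov = exp_nugget_cov(h, nugget=0.2, partial_sill=0.8, range_=2.0)
sill = 0.2 + 0.8
autocorr = cov / sill
nugget_to_sill = 0.2 / sill     # the ratio whose size degrades ML/REML estimates
```

Large values of `range_` or `nugget_to_sill` correspond to the regimes where the article finds ML and REML estimates to be unreliable.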
Distributed synchronization control of complex networks with communication constraints.
Xu, Zhenhua; Zhang, Dan; Song, Hongbo
2016-11-01
This paper is concerned with the distributed synchronization control of complex networks with communication constraints. In this work, the controllers communicate with each other through a wireless network, acting as a controller network. Due to constrained transmission power, packet size reduction and transmission rate reduction schemes are proposed to help reduce the communication load of the controller network. The packet dropout problem is also considered in the controller design, since it is often encountered in networked control systems. We show that the closed-loop system can be modeled as a switched system with uncertainties and random variables. By resorting to the switched system approach and stochastic system analysis methods, a new sufficient condition is first proposed such that exponential synchronization is guaranteed in the mean-square sense. The controller gains are determined by using the well-known cone complementarity linearization (CCL) algorithm. Finally, a simulation study is performed, which demonstrates the effectiveness of the proposed design algorithm. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Gong, Shuqing; Yang, Shaofu; Guo, Zhenyuan; Huang, Tingwen
2018-06-01
The paper is concerned with the synchronization problem of inertial memristive neural networks with time-varying delay. First, by choosing a proper variable substitution, inertial memristive neural networks described by second-order differential equations can be transformed into first-order differential equations. Then, a novel controller with a linear diffusive term and a discontinuous sign term is designed. Using this controller, sufficient conditions ensuring the global exponential synchronization of the drive and response neural networks are derived based on Lyapunov stability theory and some inequality techniques. Finally, several numerical simulations are provided to substantiate the effectiveness of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.
2018-01-01
Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered. PMID:29780184
NASA Astrophysics Data System (ADS)
Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.
2018-03-01
Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.
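The gamma fit of the Lagrangian velocity distribution mentioned above is a one-liner with SciPy; the shape and scale parameters are the quantities that track the shift from a parallel toward a serial pore arrangement. A sketch on synthetic stand-in velocity data (the true parameters here are arbitrary):

```python
# Sketch: fitting a gamma distribution to Lagrangian velocity magnitudes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
v = rng.gamma(shape=1.5, scale=2.0, size=50_000)   # stand-in velocity samples

# Fix the location at zero so the fit is a pure two-parameter gamma.
shape, loc, scale = stats.gamma.fit(v, floc=0)
mean_v = shape * scale                             # gamma mean = k * theta
```

Tracking `shape` and `scale` over the 0, 24, 36, and 48 h time points is the kind of parameterization the correlated CTRW model in the paper builds on.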
Morning-to-evening differences in oxygen uptake kinetics in short-duration cycling exercise.
Brisswalter, Jeanick; Bieuzen, François; Giacomoni, Magali; Tricot, Véronique; Falgairette, Guy
2007-01-01
This study analyzed diurnal variations in oxygen (O(2)) uptake kinetics and efficiency during a moderate cycle ergometer exercise. Fourteen physically active, diurnally active male subjects (age 23+/-5 yrs) not specifically trained in cycling first completed a test to determine their ventilatory threshold (T(vent)) and maximal oxygen consumption (VO(2max)); one week later, they completed four bouts of testing in the morning and evening in random order, each separated by at least 24 h. For each period of the day (07:00-08:30 h and 19:00-20:30 h), subjects performed two bouts. Each bout was composed of a 5 min cycling exercise at 45 W, followed after 5 min of rest by a 10 min cycling exercise at 80% of the power output associated with T(vent). Gas exchanges were analyzed breath-by-breath and fitted using a mono-exponential function. During moderate exercise, the time constant and amplitude of the VO(2) kinetics were significantly higher in the morning compared to the evening. Net efficiency increased from morning to evening (17.3+/-4 vs. 20.5+/-2%; p<0.05), and the variability of cycling cadence was greater in the morning than in the evening (+34%; p<0.05). These findings suggest that VO(2) responses are affected by the time of day and could be related to variability in the muscle activity pattern.
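A common form of the mono-exponential fit used for such breath-by-breath data is VO2(t) = baseline + A·(1 − exp(−(t − TD)/τ)) after a time delay TD, where τ is the time constant and A the amplitude reported above. A sketch on synthetic data (the parameter values and sampling grid are illustrative assumptions, not the study's measurements):

```python
# Sketch: mono-exponential fit of simulated VO2 onset kinetics.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, baseline, A, TD, tau):
    # Rises from baseline by amplitude A with time constant tau after delay TD.
    return baseline + A * (1 - np.exp(-(t - TD) / tau)) * (t >= TD)

t = np.arange(0, 360, 5.0)        # seconds into the exercise bout
rng = np.random.default_rng(7)
vo2 = mono_exp(t, 0.8, 1.6, 15.0, 35.0) + rng.normal(0, 0.03, t.size)

(baseline, A, TD, tau), _ = curve_fit(mono_exp, t, vo2, p0=(1.0, 1.0, 10.0, 30.0))
```

Comparing the fitted τ and A between morning and evening bouts is the comparison the study reports as significant.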
NASA Astrophysics Data System (ADS)
Oberlack, Martin; Nold, Andreas; Sanjon, Cedric Wilfried; Wang, Yongqi; Hau, Jan
2016-11-01
Classical hydrodynamic stability theory for laminar shear flows, whether considering long-term stability or transient growth, is based on the normal-mode ansatz, or, in other words, on an exponential function in space (stream-wise direction) and time. Recently, it became clear that the normal-mode ansatz and the resulting Orr-Sommerfeld equation are based on essentially three fundamental symmetries of the linearized Euler and Navier-Stokes equations: translation in space and time and scaling of the dependent variable. Further, the Kelvin mode of linear shear flows seemed to be an exception in this context, as it admits a fourth symmetry resulting in the classical Kelvin mode, which is rather different from a normal mode. However, very recently it was discovered that most of the classical canonical shear flows, such as linear shear, Couette, plane and round Poiseuille, Taylor-Couette, Lamb-Oseen vortex or asymptotic suction boundary layer, admit more symmetries. This, in turn, led to new problem-specific non-modal ansatz functions. In contrast to the exponential growth rate in time of the modal ansatz, the new non-modal ansatz functions usually lead to an algebraic growth or decay rate, while for the asymptotic suction boundary layer a double-exponential growth or decay is observed.
Socio-Economic Instability and the Scaling of Energy Use with Population Size
DeLong, John P.; Burger, Oskar
2015-01-01
The size of the human population is relevant to the development of a sustainable world, yet the forces setting growth or declines in the human population are poorly understood. Generally, population growth rates depend on whether new individuals compete for the same energy (leading to Malthusian or density-dependent growth) or help to generate new energy (leading to exponential and super-exponential growth). It has been hypothesized that exponential and super-exponential growth in humans has resulted from carrying capacity, which is in part determined by energy availability, keeping pace with or exceeding the rate of population growth. We evaluated the relationship between energy use and population size for countries with long records of both and the world as a whole to assess whether energy yields are consistent with the idea of an increasing carrying capacity. We find that on average energy use has indeed kept pace with population size over long time periods. We also show, however, that the energy-population scaling exponent plummets during, and its temporal variability increases preceding, periods of social, political, technological, and environmental change. We suggest that efforts to increase the reliability of future energy yields may be essential for stabilizing both population growth and the global socio-economic system. PMID:26091499
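The energy-population scaling exponent analyzed above is the slope β in E ≈ c·N^β, estimated by ordinary least squares on log-transformed series. A minimal sketch on a synthetic series (the exponent, noise level, and population range are illustrative, not the paper's data):

```python
# Sketch: estimating a power-law scaling exponent by log-log regression.
import numpy as np

rng = np.random.default_rng(8)
N = np.logspace(6, 8, 40)                                # population series
E = 2.0 * N**1.1 * np.exp(rng.normal(0, 0.05, N.size))   # energy use, true beta = 1.1

# OLS on logs: log E = beta * log N + log c.
beta, log_c = np.polyfit(np.log(N), np.log(E), 1)
```

In the paper this regression is run in moving windows over time, and it is the instability of β across windows that precedes periods of socio-political change.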
USDA-ARS?s Scientific Manuscript database
Soybean aphid (Aphis glycines Matsumura) is a pest of soybean in the northern Midwest whose migratory patterns have been difficult to quantify. Improved knowledge of soybean aphid overwintering sites could facilitate the development of control efforts with exponential impacts on aphid densities on a...
ERIC Educational Resources Information Center
Kong, Nan
2007-01-01
In multivariate statistics, the linear relationship among random variables has been fully explored in the past. This paper looks into the dependence of one group of random variables on another group of random variables using (conditional) entropy. A new measure, called the K-dependence coefficient or dependence coefficient, is defined using…
NASA Astrophysics Data System (ADS)
Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut
2018-03-01
Money currency availability in Bank Indonesia can be examined through the inflow and outflow of currency. The objective of this research is to forecast the inflow and outflow of currency in each Representative Office (RO) of BI in East Java by using a hybrid of exponential smoothing, based on the state space approach, and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first concerns the hybrid model applied to simulation data containing trend, seasonal, and calendar variation patterns. The second concerns the application of the hybrid model to forecasting the inflow and outflow of currency in each RO of BI in East Java. The first set of results indicates that the exponential smoothing model cannot capture the calendar variation pattern, yielding RMSE values 10 times the standard deviation of the error. The second set of results indicates that the hybrid model can capture the trend, seasonal, and calendar variation patterns, yielding RMSE values approaching the standard deviation of the error. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of currency in Surabaya, Malang, and Jember, and the outflow of currency in Surabaya and Kediri. Conversely, the time series regression model performs better for three variables: the outflow of currency in Malang and Jember, and the inflow of currency in Kediri.
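The hybrid idea can be illustrated in miniature: exponential smoothing tracks the level/trend, and a regression on calendar dummies captures the effect the smoother systematically misses (here a single "holiday month" spike standing in for Eid-type calendar variation; all numbers are illustrative, and this toy uses simple exponential smoothing rather than the paper's state space formulation):

```python
# Sketch: simple exponential smoothing plus a calendar-variation correction.
import numpy as np

def ses(y, alpha):
    # Simple exponential smoothing; returns the level after each observation.
    level, levels = y[0], [y[0]]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
        levels.append(level)
    return np.array(levels)

rng = np.random.default_rng(9)
months = np.arange(120)
holiday = (months % 12 == 5).astype(float)           # calendar-variation dummy
y = 100 + 0.2 * months + 30 * holiday + rng.normal(0, 2, months.size)

levels = ses(y, alpha=0.3)
forecast = np.concatenate(([y[0]], levels[:-1]))     # one-step-ahead forecasts
resid = y - forecast
# Estimate the calendar effect from the smoother's residuals.
holiday_effect = resid[holiday == 1].mean() - resid[holiday == 0].mean()
hybrid_fitted = forecast + holiday_effect * holiday
```

The residual-based calendar estimate recovers most of the 30-unit spike that plain exponential smoothing misses, which mirrors the simulation finding reported above.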
Theory of activated glassy dynamics in randomly pinned fluids.
Phan, Anh D; Schweizer, Kenneth S
2018-02-07
We generalize the force-level, microscopic, Nonlinear Langevin Equation (NLE) theory and its elastically collective generalization [elastically collective nonlinear Langevin equation (ECNLE) theory] of activated dynamics in bulk spherical particle liquids to address the influence of random particle pinning on structural relaxation. The simplest neutral confinement model is analyzed for hard spheres where there is no change of the equilibrium pair structure upon particle pinning. As the pinned fraction grows, cage scale dynamical constraints are intensified in a manner that increases with density. This results in the mobile particles becoming more transiently localized, with increases of the jump distance, cage scale barrier, and NLE theory mean hopping time; subtle changes of the dynamic shear modulus are predicted. The results are contrasted with recent simulations. Similarities in relaxation behavior are identified in the dynamic precursor regime, including a roughly exponential, or weakly supra-exponential, growth of the alpha time with pinning fraction and a reduction of dynamic fragility. However, the increase of the alpha time with pinning predicted by the local NLE theory is too small and severely so at very high volume fractions. The strong deviations are argued to be due to the longer range collective elasticity aspect of the problem which is expected to be modified by random pinning in a complex manner. A qualitative physical scenario is offered for how the three distinct aspects that quantify the elastic barrier may change with pinning. ECNLE theory calculations of the alpha time are then presented based on the simplest effective-medium-like treatment for how random pinning modifies the elastic barrier. The results appear to be consistent with most, but not all, trends seen in recent simulations. Key open problems are discussed with regard to both theory and simulation.
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; AliShaykhian, Gholam
2010-01-01
We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is all the more relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized and computing speed touches petaflops; processor speed is doubling every 18 months, bandwidth every 12 months, and hard disk space every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm. The reason is that a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization, given the steady increase of computing power, for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, focusing on the fact that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress that the quality of solution of exhaustive search, a deterministic method, is better than that of randomized search. In the 21st-century computing environment, exhaustive search cannot be set aside as untouchable, and it is not always exponential. We also describe a possible application of these algorithms to improving the efficiency of solar cells, a genuinely hot topic in the current energy crisis. These algorithms could be excellent tools in the hands of experimentalists and could save not only a large amount of time needed for experiments but also validate theory against experimental results quickly.
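The exhaustive-versus-randomized comparison can be made concrete on a small 2-D test problem with equal evaluation budgets. This is a generic sketch (the Himmelblau test function and the grid resolution are illustrative choices, not the abstract's method):

```python
# Sketch: exhaustive grid search vs. randomized search, same evaluation budget.
import numpy as np

def f(x, y):
    # Himmelblau's function; global minimum value 0, attained at (3, 2) among others.
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

# Exhaustive search over a 201 x 201 grid on [-5, 5]^2.
xs = np.linspace(-5, 5, 201)
X, Y = np.meshgrid(xs, xs)
best_grid = f(X, Y).min()

# Randomized search with the identical number of evaluations.
rng = np.random.default_rng(10)
pts = rng.uniform(-5, 5, size=(201 * 201, 2))
best_rand = f(pts[:, 0], pts[:, 1]).min()
```

Because the grid happens to contain the exact minimizer, the exhaustive search returns the optimum deterministically, while the randomized search only gets close; this deterministic quality guarantee is the point the abstract stresses. The exponential cost in the number of dimensions, of course, is the price.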
Theory of activated glassy dynamics in randomly pinned fluids
NASA Astrophysics Data System (ADS)
Phan, Anh D.; Schweizer, Kenneth S.
2018-02-01
We generalize the force-level, microscopic, Nonlinear Langevin Equation (NLE) theory and its elastically collective generalization [elastically collective nonlinear Langevin equation (ECNLE) theory] of activated dynamics in bulk spherical particle liquids to address the influence of random particle pinning on structural relaxation. The simplest neutral confinement model is analyzed for hard spheres where there is no change of the equilibrium pair structure upon particle pinning. As the pinned fraction grows, cage scale dynamical constraints are intensified in a manner that increases with density. This results in the mobile particles becoming more transiently localized, with increases of the jump distance, cage scale barrier, and NLE theory mean hopping time; subtle changes of the dynamic shear modulus are predicted. The results are contrasted with recent simulations. Similarities in relaxation behavior are identified in the dynamic precursor regime, including a roughly exponential, or weakly supra-exponential, growth of the alpha time with pinning fraction and a reduction of dynamic fragility. However, the increase of the alpha time with pinning predicted by the local NLE theory is too small and severely so at very high volume fractions. The strong deviations are argued to be due to the longer range collective elasticity aspect of the problem which is expected to be modified by random pinning in a complex manner. A qualitative physical scenario is offered for how the three distinct aspects that quantify the elastic barrier may change with pinning. ECNLE theory calculations of the alpha time are then presented based on the simplest effective-medium-like treatment for how random pinning modifies the elastic barrier. The results appear to be consistent with most, but not all, trends seen in recent simulations. Key open problems are discussed with regard to both theory and simulation.
NASA Astrophysics Data System (ADS)
Sokolov, Valentin V.; Zhirov, Oleg V.; Kharkov, Yaroslav A.
The extraordinary complexity of classical trajectories of typical nonlinear systems that manifest stochastic behavior is intimately connected with exponential sensitivity to small variations of initial conditions and/or weak external perturbations. In rigorous terms, such classical systems are characterized by positive algorithmic complexity, described by the Lyapunov exponent or, alternatively, by the Kolmogorov-Sinai entropy. This implies that, although any trajectory of a perfectly isolated (closed) system, however complex, is formally unique and differentiable for any given initial conditions, and the motion is perfectly reversible, it is impractical to treat this sort of classical system as closed. Inevitably, an arbitrarily weak influence of the environment crucially impacts the dynamics. This influence, which can be regarded as noise, rapidly effaces the memory of initial conditions and turns the motion into an irreversible random process. In striking contrast, the quantum mechanics of classically chaotic systems exhibits much weaker sensitivity and strong memory of the initial state. Qualitatively, this crucial difference could be expected in view of the much simpler structure of quantum states as compared to the extraordinary complexity of random and unpredictable classical trajectories. However, the very notion of a trajectory is absent in quantum mechanics, so the concept of exponential instability seems irrelevant in this case. Our concern is the problem of a quantitative measure of the complexity of a quantum state of motion, a very important and nontrivial issue in the theory of quantum dynamical chaos. With such a measure in hand, we quantitatively analyze the stability and reversibility of quantum dynamics in the presence of external noise. To solve this problem we point out that individual classical trajectories are of minor interest if the motion is chaotic.
The properties of all of them are alike in this case; rather, it is the behavior of their manifolds that carries really valuable information. Therefore the phase-space methods and, correspondingly, the Liouville form of classical mechanics become the most adequate. It is very important that, in contrast to classical trajectories, the classical phase-space distribution and the Liouville equation have direct quantum analogs. Hence, the analogy and difference between classical and quantum dynamics can be traced by comparing the classical (W(c)(I,θ;t)) and quantum (Wigner function W(I,θ;t)) phase-space distributions, both expressed in identical phase-space variables but ruled by different(!) linear equations. The paramount property of classical dynamical chaos is the exponentially fast structuring of the system's phase space on finer and finer scales. On the contrary, the degree of structuring of the corresponding Wigner function is restricted by the quantization of the phase space. This makes the Wigner function coarser and relatively "simple" as compared to its classical counterpart. Fourier analysis affords quite suitable ground for analyzing the complexity of a phase-space distribution, and is equally valid in the classical and quantum cases. We demonstrate that the typical number of Fourier harmonics is indeed a relevant measure of the complexity of states of motion in both the classical and quantum cases. This allows us to investigate the sensitivity to an external noisy environment in detail, introduce a quantitative measure of this sensitivity, and formulate the conditions under which the quantum motion remains reversible.
It turns out that while the mean number of harmonics of the classical phase-space distribution of a non-integrable system grows exponentially during the whole time of the motion, the exponential growth of this number in the case of the corresponding quantum Wigner function is restricted to the Ehrenfest interval 0 < t < t_E, precisely the interval within which the Wigner function still satisfies the classical Liouville equation. We show that beyond this interval the number of harmonics increases only algebraically. This fact gains crucial importance when the Ehrenfest time is so short that the exponential regime has no time to develop. Under this condition the quantum motion turns out to be quite stable and reversible.
Geometrical effects on the electron residence time in semiconductor nano-particles.
Koochi, Hakimeh; Ebrahimi, Fatemeh
2014-09-07
We have used random walk (RW) numerical simulations to investigate the influence of geometry on the statistics of the electron residence time τ_r in a trap-limited diffusion process through semiconductor nano-particles. This is an important parameter in coarse-grained modeling of charge carrier transport in nano-structured semiconductor films. The traps have been distributed randomly on the surface (r² model) or through the whole particle (r³ model) with a specified density. The trap energies have been taken from an exponential distribution, and the trap release time is treated as a stochastic variable. We have carried out RW simulations to study the effect of the coordination number, the spatial arrangement of the neighbors, and the size of the nano-particles on the statistics of τ_r. We observe that as the coordination number n increases, the average electron residence time τ̅_r rapidly decreases to an asymptotic value. For a fixed coordination number n, the electron's mean residence time does not depend on the neighbors' spatial arrangement. In other words, τ̅_r is a porosity-dependent, local parameter which generally varies remarkably from site to site, unless we are dealing with highly ordered structures. We have also examined the effect of the nano-particle size d on the statistical behavior of τ̅_r. Our simulations indicate that for a volume distribution of traps, τ̅_r scales as d². For a surface distribution of traps, τ̅_r increases almost linearly with d. This leads to the prediction of a linear dependence of the diffusion coefficient D on the particle size d in ordered structures, or in random structures above the critical concentration, which is in accordance with experimental observations.
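The diffusive size scaling discussed above can be illustrated with a deliberately simplified toy model (a 1-D lattice walker exiting a particle of size d, not the authors' r² or r³ trap models): the mean residence time roughly quadruples when d doubles.

```python
import random

def residence_time(d, rng):
    # Steps for a symmetric walker started mid-lattice to exit either end;
    # a crude stand-in for traversal of a nano-particle of size d.
    pos, steps = d // 2, 0
    while 0 <= pos < d:
        pos += rng.choice((-1, 1))
        steps += 1
    return steps

rng = random.Random(42)
mean_16 = sum(residence_time(16, rng) for _ in range(2000)) / 2000.0
mean_32 = sum(residence_time(32, rng) for _ in range(2000)) / 2000.0
ratio = mean_32 / mean_16  # diffusive scaling predicts a ratio near 4
```

This reproduces only the d² signature of volume-limited transport; the surface-trap (near-linear) regime reported above requires the full 3-D geometry.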
Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Le Doussal, Pierre
2014-01-01
Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic, problems in optimization theory, known as the "trust region subproblem" or "constrained least-squares problem". When both terms in the cost function are random, this amounts to studying the ground-state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost-function landscape. In the first regime N_tot remains of the order of N, and the cost function (energy) generically has two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity, with a finite probability of a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation, by finding both the rate function and the leading pre-exponential factor.
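A minimal N = 2 instance of this cost function can be minimized by brute force, parameterizing the unit circle by an angle (an illustrative sketch with an arbitrarily chosen matrix and field, not the replica analysis of the paper):

```python
import math

def sphere_min_2d(a11, a12, a22, b1, b2, n=100000):
    # Scan the unit circle x = (cos t, sin t) for the global minimum of
    # E(x) = x^T A x / 2 + b^T x with symmetric A = [[a11, a12], [a12, a22]].
    best_t, best_e = 0.0, float("inf")
    for i in range(n):
        t = 2.0 * math.pi * i / n
        cx, cy = math.cos(t), math.sin(t)
        e = (0.5 * (a11 * cx * cx + 2.0 * a12 * cx * cy + a22 * cy * cy)
             + b1 * cx + b2 * cy)
        if e < best_e:
            best_t, best_e = t, e
    return best_t, best_e

# A = diag(1, 3) with field b = (-1, 0): the field pulls the minimum to
# x = (1, 0), where E = 1/2 - 1 = -1/2.
t_min, e_min = sphere_min_2d(1.0, 0.0, 3.0, -1.0, 0.0)
```

Even this tiny case shows the topology trivialization mechanism: with b = 0 the landscape on the circle has degenerate minima, while a strong enough field leaves a single minimum aligned with it.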
Rare event simulation in radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kollman, Craig
1993-10-01
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is, with overwhelming probability, equal to zero. These problems often have high-dimensional state spaces and irregular geometries, so analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded, the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well-known technique for improving the efficiency of rare-event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities is chosen. It is shown that a zero-variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
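The basic importance-sampling device described above, sampling under a new measure and reweighting by the likelihood ratio, can be shown on a toy tail-probability problem (an exponential random variable, not the neutron-transport chain itself):

```python
import math
import random

def rare_tail_prob(threshold, n, rng):
    # Estimate p = P(X > threshold) for X ~ Exp(1) by sampling under a
    # shifted proposal that always lands in the rare set, reweighting
    # each sample by the likelihood ratio f(x)/g(x) to stay unbiased.
    total = 0.0
    for _ in range(n):
        x = threshold + rng.expovariate(1.0)           # proposal g
        w = math.exp(-x) / math.exp(-(x - threshold))  # likelihood ratio f/g
        total += w                                     # indicator of {x > threshold} is 1
    return total / n

rng = random.Random(1)
p_hat = rare_tail_prob(20.0, 1000, rng)
```

For this particular proposal the likelihood ratio is constant, so the estimator has zero variance and returns the exact answer e^{-20}; this is a toy version of the zero-variance estimator whose existence the dissertation proves, and it likewise requires knowing the solution in advance.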
Determinism in synthesized chaotic waveforms.
Corron, Ned J; Blakely, Jonathan N; Hayes, Scott T; Pethel, Shawn D
2008-03-01
The output of a linear filter driven by a randomly polarized square wave, when viewed backward in time, is shown to exhibit determinism at all times when embedded in a three-dimensional state space. Combined with previous results establishing exponential divergence equivalent to a positive Lyapunov exponent, this result rigorously shows that such reverse-time synthesized waveforms appear equally to have been produced by a deterministic chaotic system.
Source-Independent Quantum Random Number Generation
NASA Astrophysics Data System (ADS)
Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng
2016-01-01
Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts—a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretically provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bits. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 × 10³ bits/s.
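The readout of such a generator is typically followed by a randomness-extraction step; a standard two-universal choice is Toeplitz hashing over GF(2). The sketch below is a minimal Toeplitz hash (illustrative only; the scheme above certifies how many output bits may be kept, not this particular implementation):

```python
def toeplitz_hash(bits, seed_bits, out_len):
    # Output bit i is the GF(2) inner product of row i of the Toeplitz
    # matrix T[i][j] = seed_bits[i + n - 1 - j] with the raw input bits.
    n = len(bits)
    assert len(seed_bits) == n + out_len - 1
    out = []
    for i in range(out_len):
        acc = 0
        for j in range(n):
            acc ^= seed_bits[i + n - 1 - j] & bits[j]
        out.append(acc)
    return out

# Compress 3 raw bits to 2 nearly uniform bits using a 4-bit seed.
hashed = toeplitz_hash([1, 0, 1], [1, 1, 0, 1], out_len=2)
```

The seed (of length n + out_len - 1) fixes the whole matrix, which is why the seed cost grows far more slowly than the output length, echoing the exponentially small seed noted in the abstract.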
Econometrics of exhaustible resource supply: a theory and an application. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epple, D.; Hansen, L.P.
1981-12-01
An econometric model of US oil and natural gas discoveries is developed in this study. The econometric model is explicitly derived as the solution to the problem of maximizing the expected discounted after tax present value of revenues net of exploration, development, and production costs. The model contains equations representing producers' formation of price expectations and separate equations giving producers' optimal exploration decisions contingent on expected prices. A procedure is developed for imposing resource base constraints (e.g., ultimate recovery estimates based on geological analysis) when estimating the econometric model. The model is estimated using aggregate post-war data for the United States. Production from a given addition to proved reserves is assumed to follow a negative exponential path, and additions of proved reserves from a given discovery are assumed to follow a negative exponential path. Annual discoveries of oil and natural gas are estimated as latent variables. These latent variables are the endogenous variables in the econometric model of oil and natural gas discoveries. The model is estimated without resource base constraints. The model is also estimated imposing the mean oil and natural gas ultimate recovery estimates of the US Geological Survey. Simulations through the year 2020 are reported for various future price regimes.
Water quality trend analysis for the Karoon River in Iran.
Naddafi, K; Honari, H; Ahmadi, M
2007-11-01
The Karoon River basin, with a basin area of 67,000 km², is located in the southern part of Iran. Discharge and water quality variables have been monitored monthly at the Gatvand and Khorramshahr stations of the Karoon River, for the periods 1967-2005 and 1969-2005, respectively. In this paper the time series of monthly values of the water quality parameters and the discharge were analyzed using statistical methods to test for trends and to identify the best-fitting models. The Kolmogorov-Smirnov test was used to select the theoretical distribution which best fitted the data. Simple regression was used to examine the concentration-time relationships. The concentration-time relationships showed better correlation at Khorramshahr station than at Gatvand station. The exponential model better describes the concentration-time relationship at Khorramshahr station, whereas the logarithmic model fits better at Gatvand station. The correlation coefficients are positive for all of the variables at Khorramshahr station; at Gatvand station they are also positive, except for magnesium (Mg²⁺), bicarbonate (HCO₃⁻), and temporary hardness, which show decreasing relationships. The logarithmic and exponential models best describe the concentration-time relationships for the two stations.
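An exponential concentration-time model of the kind examined above can be fitted by ordinary least squares on the log scale (a minimal sketch with synthetic data, not the Karoon series):

```python
import math

def fit_exponential_trend(ts, cs):
    # Fit C(t) = a * exp(b t) by linear regression of ln C on t.
    n = len(ts)
    ys = [math.log(c) for c in cs]
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    sxy = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
    sxx = sum((t - t_mean) ** 2 for t in ts)
    b = sxy / sxx                       # growth (or decay) rate
    a = math.exp(y_mean - b * t_mean)   # initial concentration
    return a, b

# Synthetic monthly series following C = 2 e^{0.05 t} exactly.
ts = list(range(12))
cs = [2.0 * math.exp(0.05 * t) for t in ts]
a, b = fit_exponential_trend(ts, cs)
```

The sign of the fitted b plays the role of the positive or negative correlation reported above: b > 0 indicates an increasing trend, b < 0 a decreasing one.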
A Fourier method for the analysis of exponential decay curves.
Provencher, S W
1976-01-01
A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.
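For noise-free, equally spaced samples, a sum of two exponentials can also be recovered algebraically via Prony's method, a classical alternative that makes a useful sanity check alongside the Fourier approach (illustrative only; unlike the method above it is not robust to noise and is not the paper's algorithm):

```python
import math

def prony_two_exponentials(y, dt):
    # The samples obey the recurrence y[k+2] = p*y[k+1] + q*y[k]; solve for
    # (p, q) from the first four samples, then factor r^2 - p*r - q = 0.
    det = y[1] * y[1] - y[0] * y[2]
    p = (y[2] * y[1] - y[0] * y[3]) / det
    q = (y[1] * y[3] - y[2] * y[2]) / det
    disc = math.sqrt(p * p + 4.0 * q)
    roots = [(p + disc) / 2.0, (p - disc) / 2.0]
    # Each root equals exp(-dt/tau); convert back to time constants.
    return sorted(-dt / math.log(r) for r in roots)

dt = 0.1
samples = [math.exp(-k * dt / 1.0) + math.exp(-k * dt / 5.0) for k in range(4)]
tau_fast, tau_slow = prony_two_exponentials(samples, dt)
```

The determinant shrinks as the two time constants approach each other, which is the same loss of resolving power the simulated two-, three-, and four-component analyses above probe.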
Fractional calculus and morphogen gradient formation
NASA Astrophysics Data System (ADS)
Yuste, Santos Bravo; Abad, Enrique; Lindenberg, Katja
2012-12-01
Some microscopic models for reactive systems where the reaction kinetics is limited by subdiffusion are described by means of reaction-subdiffusion equations where fractional derivatives play a key role. In particular, we consider subdiffusive particles described by means of a Continuous Time Random Walk (CTRW) model subject to a linear (first-order) death process. The resulting fractional equation is employed to study the developmental biology key problem of morphogen gradient formation for the case in which the morphogens are subdiffusive. If the morphogen degradation rate (reactivity) is constant, we find exponentially decreasing stationary concentration profiles, which are similar to the profiles found when the morphogens diffuse normally. However, for the case in which the degradation rate decays exponentially with the distance to the morphogen source, we find that the morphogen profiles are qualitatively different from the profiles obtained when the morphogens diffuse normally.
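For the normal-diffusion baseline mentioned above (constant degradation rate), the exponentially decaying stationary profile can be reproduced with a few lines of relaxation (a sketch of the classical steady state D C'' - k C = 0 with assumed unit source, not the fractional reaction-subdiffusion solver):

```python
import math

def steady_morphogen_profile(D, k, L, n, sweeps=5000):
    # Jacobi relaxation of D*C'' - k*C = 0 with C(0) = 1 (source) and
    # C(L) = 0 (far field) on a grid of n intervals.
    h = L / n
    c = [0.0] * (n + 1)
    c[0] = 1.0
    for _ in range(sweeps):
        c = [c[0]] + [(c[i - 1] + c[i + 1]) / (2.0 + k * h * h / D)
                      for i in range(1, n)] + [c[n]]
    return c, h

c, h = steady_morphogen_profile(D=1.0, k=1.0, L=10.0, n=50)
# With constant degradation the profile should track exp(-x * sqrt(k/D)).
```

The decay length sqrt(D/k) is the quantity that changes qualitatively when, as in the paper, the particles subdiffuse or the degradation rate varies with distance from the source.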
Hecker, Suzanne; Abrahamson, N.A.; Wooddell, Kathryn
2013-01-01
To investigate the nature of earthquake‐magnitude distributions on faults, we compare the interevent variability of surface displacement at a point on a fault from a composite global data set of paleoseismic observations with the variability expected from two prevailing magnitude–frequency distributions: the truncated‐exponential model and the characteristic‐earthquake model. We use forward modeling to predict the coefficient of variation (CV) for the alternative earthquake distributions, incorporating factors that would affect observations of displacement at a site. The characteristic‐earthquake model (with a characteristic‐magnitude range of ±0.25) produces CV values consistent with the data (CV∼0.5) only if the variability for a given earthquake magnitude is small. This condition implies that rupture patterns on a fault are stable, in keeping with the concept behind the model. This constraint also bears upon fault‐rupture hazard analysis, which, for lack of point‐specific information, has used global scaling relations to infer variability in average displacement for a given‐size earthquake. Exponential distributions of earthquakes (from M 5 to the maximum magnitude) give rise to CV values that are significantly larger than the empirical constraint. A version of the model truncated at M 7, however, yields values consistent with a larger CV (∼0.6) determined for small‐displacement sites. Although this result allows for a difference in the magnitude distribution of smaller surface‐rupturing earthquakes, it may reflect, in part, less stability in the displacement profile of smaller ruptures and/or the tails of larger ruptures.
Contextuality in canonical systems of random variables
NASA Astrophysics Data System (ADS)
Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.
2017-10-01
Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.
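The maximal coupling at the heart of the construction above is explicit for binary random variables: two variables with success probabilities p and q can coincide with probability at most 1 - |p - q| (one minus their total-variation distance), and a joint table achieving it is easy to write down (a minimal sketch of that one ingredient, not a full contextuality test):

```python
def maximal_coupling(p, q):
    # Joint 2x2 table t[x][y] for binary X, Y with P(X=1)=p, P(Y=1)=q
    # that maximizes P(X = Y); the maximum equals 1 - |p - q|.
    t11 = min(p, q)
    t00 = min(1.0 - p, 1.0 - q)
    return [[t00, q - t11], [p - t11, t11]]

t = maximal_coupling(0.7, 0.4)
p_equal = t[0][0] + t[1][1]   # = 1 - |0.7 - 0.4| = 0.7
```

In the canonical representation above, such a table is imposed on every content-sharing pair; the system is contextual exactly when these tables cannot coexist with the observed joint distributions within contexts.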
Blackmail propagation on small-world networks
NASA Astrophysics Data System (ADS)
Shao, Zhi-Gang; Jian-Ping Sang; Zou, Xian-Wu; Tan, Zhi-Jie; Jin, Zhun-Zhi
2005-06-01
The dynamics of a blackmail propagation model based on small-world networks is investigated. It is found that for a given transmitting probability λ, the dynamical behavior of blackmail propagation transitions from linear growth to logistic growth as the network randomness p increases. The transition takes place at the critical network randomness pc=1/N, where N is the total number of nodes in the network. For a given network randomness p, the dynamical behavior of blackmail propagation transitions from exponential decrease to logistic growth as the transmitting probability λ increases. The transition occurs at the critical transmitting probability λc=1/
Systematic Onset of Periodic Patterns in Random Disk Packings
NASA Astrophysics Data System (ADS)
Topic, Nikola; Pöschel, Thorsten; Gallas, Jason A. C.
2018-04-01
We report evidence of a surprising systematic onset of periodic patterns in very tall piles of disks deposited randomly between rigid walls. Independently of the pile width, periodic structures are always observed in monodisperse deposits containing up to 10⁷ disks. The probability density function of the lengths of disordered transient phases that precede the onset of periodicity displays an approximately exponential tail. These disordered transients may become very large when the channel width grows without bound. For narrow channels, the probability density of finding periodic patterns of a given period displays a series of discrete peaks, which, however, are washed out completely when the channel width grows.
Disorder-induced localization of excitability in an array of coupled lasers
NASA Astrophysics Data System (ADS)
Lamperti, M.; Perego, A. M.
2017-10-01
We report on the localization of excitability induced by disorder in an array of coupled semiconductor lasers with a saturable absorber. Through numerical simulations we show that the exponential localization of excitable waves occurs if a certain critical amount of randomness is present in the coupling coefficients among the lasers. The results presented in this Rapid Communication demonstrate that disorder can induce localization in lattices of excitable nonlinear oscillators, and can be of interest in the study of photonics-based random networks, neuromorphic systems, and, by analogy, in biology, in particular, in the investigation of the collective dynamics of neuronal cell populations.
NASA Astrophysics Data System (ADS)
Yuan, Manman; Wang, Weiping; Luo, Xiong; Li, Lixiang; Kurths, Jürgen; Wang, Xiao
2018-03-01
This paper is concerned with the exponential lag function projective synchronization of memristive multidirectional associative memory neural networks (MMAMNNs). First, we propose a new model of MMAMNNs with mixed time-varying delays. In the proposed approach, the mixed delays include time-varying discrete delays and distributed time delays. Second, we design two kinds of hybrid controllers. Traditional control methods lack the capability of reflecting variable synaptic weights; here, the controllers are carefully designed to realize the different types of synchronization in the MMAMNNs. Third, sufficient criteria guaranteeing the synchronization of the system are derived based on the drive-response concept. Finally, the effectiveness of the proposed mechanism is validated with numerical experiments.
NASA Astrophysics Data System (ADS)
Zapata Norberto, B.; Morales-Casique, E.; Herrera, G. S.
2017-12-01
Severe land subsidence due to groundwater extraction may occur in multiaquifer systems where highly compressible aquitards are present. The highly compressible nature of the aquitards leads to nonlinear consolidation where the groundwater flow parameters are stress-dependent. The case is further complicated by the heterogeneity of the hydrogeologic and geotechnical properties of the aquitards. We explore the effect of realistic vertical heterogeneity of hydrogeologic and geotechnical parameters on the consolidation of highly compressible aquitards by means of 1-D Monte Carlo numerical simulations. A total of 2000 realizations are generated for each of the following parameters: hydraulic conductivity (K), compression index (Cc) and void ratio (e). The correlation structure, the mean and the variance for each parameter were obtained from a literature review of field studies in the lacustrine sediments of Mexico City. The results indicate that among the parameters considered, random K has the largest effect on the ensemble average behavior of the system. Random K leads to the largest variance (and therefore largest uncertainty) of total settlement, groundwater flux and time to reach steady-state conditions. We further propose a data assimilation scheme by means of an ensemble Kalman filter to estimate the ensemble mean distribution of K, pore-pressure and total settlement. We consider the case where pore-pressure measurements are available at given time intervals. We test our approach by generating a 1-D realization of K with exponential spatial correlation, and solving the nonlinear flow and consolidation problem. These results are taken as our "true" solution. We take pore-pressure "measurements" at different times from this "true" solution. The ensemble Kalman filter method is then employed to estimate the ensemble mean distribution of K, pore-pressure and total settlement based on the sequential assimilation of these pore-pressure measurements.
The ensemble-mean estimates from this procedure closely approximate those from the "true" solution. This procedure can be easily extended to other random variables such as compression index and void ratio.
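The assimilation machinery used above can be sketched in its simplest form, one ensemble Kalman filter analysis step for a directly observed scalar state (a toy stand-in; in the study the state couples K, pore-pressure and settlement through the nonlinear consolidation model):

```python
import random

def enkf_update(ensemble, obs, obs_noise_std, rng):
    # Analysis step for a directly observed scalar state: nudge each
    # member toward a perturbed observation, with the Kalman gain
    # computed from the ensemble variance.
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_noise_std ** 2)
    return [x + gain * (obs + rng.gauss(0.0, obs_noise_std) - x)
            for x in ensemble]

rng = random.Random(0)
prior = [rng.gauss(0.0, 1.0) for _ in range(500)]   # prior belief: ~N(0, 1)
posterior = enkf_update(prior, obs=1.0, obs_noise_std=0.5, rng=rng)
post_mean = sum(posterior) / len(posterior)
```

With a prior variance of 1 and observation variance 0.25, the gain is about 0.8, so the posterior mean moves most of the way from the prior toward the observation; repeating this step at each measurement time gives the sequential assimilation described above.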
Reducing financial avalanches by random investments
NASA Astrophysics Data System (ADS)
Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea; Helbing, Dirk
2013-12-01
Building on similarities between earthquakes and extreme financial events, we use a self-organized criticality-generating model to study herding and avalanche dynamics in financial markets. We consider a community of interacting investors, distributed in a small-world network, who bet on the bullish (increasing) or bearish (decreasing) behavior of the market, which has been specified according to the S&P 500 historical time series. Remarkably, we find that the size of herding-related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of traders, randomly distributed inside the network, who adopt a random investment strategy. Our findings suggest a promising strategy to limit the size of financial bubbles and crashes. We also find that the resulting wealth distribution of all traders corresponds to the well-known Pareto power law, while that of the random traders is exponential. In other words, for technical traders the risk of losses relative to the probability of gains is much greater than for random traders.
An invariance property of generalized Pearson random walks in bounded geometries
NASA Astrophysics Data System (ADS)
Mazzolo, Alain
2009-03-01
Invariance properties of random walks in bounded domains are a topic of growing interest, since they contribute to improving our understanding of diffusion in confined geometries. Recently it has been shown, for Pearson random walks with exponentially distributed straight paths, that under isotropic uniform incidence the average length of the trajectories through the domain is independent of the random walk's characteristics and depends only on the ratio of the domain's volume to its surface. In this paper, using arguments of integral geometry, we generalize this property to any isotropic bounded stochastic process, and we give the conditions of its validity for isotropic unbounded stochastic processes. The analytical form of the traveled distance from the boundary to the first scattering event that ensures the validity of the Cauchy formula is also derived. The generalization of the Cauchy formula is thus an analytical constraint that concerns a very wide range of stochastic processes, from the original Pearson random walk to a Rayleigh distribution of the displacements, covering many situations of physical importance.
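The Cauchy formula behind this invariance can be checked by Monte Carlo for the simplest domain, a disk, where the mean chord length under isotropic uniform incidence must equal pi * Area / Perimeter = pi R / 2 regardless of the interior dynamics (a sketch of the geometric identity only, not of the generalized processes in the paper):

```python
import math
import random

def mean_chord_disk(radius, n, rng):
    # Under isotropic uniform incidence a random line hits the disk at a
    # distance from the center uniform in [0, radius]; the chord length
    # is then 2*sqrt(radius^2 - d^2).
    total = 0.0
    for _ in range(n):
        d = rng.uniform(0.0, radius)
        total += 2.0 * math.sqrt(radius * radius - d * d)
    return total / n

rng = random.Random(7)
est = mean_chord_disk(1.0, 200000, rng)
# Cauchy's formula predicts pi * Area / Perimeter = pi * R / 2.
```

In three dimensions the corresponding mean chord is 4V/S, the volume-to-surface ratio quoted in the abstract.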
NASA Astrophysics Data System (ADS)
Malicet, Dominique
2017-12-01
In this paper, we study random walks g_n = f_{n-1} \ldots f_0 on the group Homeo(S^1) of homeomorphisms of the circle, where the homeomorphisms f_k are chosen randomly and independently with respect to the same probability measure ν. We prove that, under the sole condition that there is no probability measure invariant under ν-almost every homeomorphism, the random walk almost surely contracts small intervals. This generalizes what was previously known on the subject, since various conditions on ν had been imposed in order to obtain the contraction phenomenon. Moreover, we obtain the surprising fact that the rate of contraction is exponential, even in the absence of smoothness assumptions on the f_k's. We deduce various dynamical consequences for the random walk (g_n): finiteness of ergodic stationary measures, distribution of the trajectories, asymptotic law of the evaluations, etc. The proof of the main result is based on a modification of the Ávila-Viana invariance principle, adapted to continuous cocycles on a space fibred in circles.
ERIC Educational Resources Information Center
Rast, Philippe
2011-01-01
The present study aimed at modeling individual differences in a verbal learning task by means of a latent structured growth curve approach based on an exponential function that yielded 3 parameters: initial recall, learning rate, and asymptotic performance. Three cognitive variables--speed of information processing, verbal knowledge, working…
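The three-parameter exponential function described in this abstract can be sketched as follows. The parameter values are hypothetical, and the study's exact parameterization may differ; this is a common form in which recall rises from an initial level toward an asymptote:

```python
import math

def exp_learning_curve(t, initial, asymptote, rate):
    """Three-parameter exponential learning curve: recall starts at
    `initial` on trial 0 and approaches `asymptote` at speed `rate`."""
    return asymptote - (asymptote - initial) * math.exp(-rate * t)

# hypothetical parameter values, for illustration only
recall = [exp_learning_curve(t, initial=5.0, asymptote=15.0, rate=0.4)
          for t in range(10)]
```

In a latent growth-curve analysis, each of the three parameters can then be regressed on covariates such as processing speed or verbal knowledge.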
Optimizing habitat location for black-tailed prairie dogs in southwestern South Dakota
John Hof; Michael Bevers; Daniel W. Uresk; Gregory L. Schenbeck
2002-01-01
A spatial optimization model was formulated and used to maximize black-tailed prairie dog populations in the Badlands National Park and the Buffalo Gap National Grassland in South Dakota. The choice variables involved the strategic placement of limited additional protected habitat. Population dynamics were captured in formulations that reflected exponential population...
Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data
NASA Astrophysics Data System (ADS)
Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti
2018-03-01
In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process controls were developed for interrelated processes, including Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. One researcher noted that these charts are not suitable if the same control limits are used as in the case of independent variables. For this reason, it is necessary to apply a time series model when building the control chart. A classical control chart for independent variables is usually applied to the residuals of the process; this procedure is permitted provided that the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared to the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for an autocorrelated process derived from Montgomery and Patel. Performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
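A minimal sketch of the EWMA recursion with its exact time-varying control limits, using the standard textbook formulas (for autocorrelated data the same chart would be applied to residuals of a fitted time-series model, as discussed above):

```python
import math

def ewma_chart(data, lam=0.2, L=3.0, mu0=0.0, sigma=1.0):
    """EWMA control chart: z_i = lam*x_i + (1-lam)*z_{i-1}, starting at the
    target mu0, with exact time-varying control limits.  Returns, per point,
    (z, lower limit, upper limit, out-of-control flag)."""
    z, out = mu0, []
    for i, x in enumerate(data, start=1):
        z = lam * x + (1.0 - lam) * z
        half = L * sigma * math.sqrt(lam / (2.0 - lam)
                                     * (1.0 - (1.0 - lam) ** (2 * i)))
        out.append((z, mu0 - half, mu0 + half, abs(z - mu0) > half))
    return out

# residual-like data with a mean shift at the end (illustrative values)
points = ewma_chart([0.1, -0.2, 0.05, 4.0, 4.2])
```

With the default lam = 0.2 and L = 3, the in-control points stay inside the limits while the shifted points at the end trigger the out-of-control flag.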
Tripathi, Dharmendra; Pandey, S K; Siddiqui, Abdul; Bég, O Anwar
2014-01-01
A theoretical study is presented for transient peristaltic flow of an incompressible fluid with variable viscosity in a finite-length cylindrical tube as a simulation of transport in physiological vessels and biomimetic peristaltic pumps. The current axisymmetric analysis is qualitatively similar to two-dimensional analysis but exhibits quantitative variations. The analysis is motivated towards further elucidating the physiological migration of gastric suspensions (the food bolus) in the human digestive system. It also applies to variable-viscosity industrial fluid (waste) peristaltic pumping systems. First, an axisymmetric model is analysed in the limit of large wavelength ([Formula: see text]) and low Reynolds number ([Formula: see text]) for axial velocity, radial velocity, pressure, hydromechanical efficiency and stream function in terms of radial vibration of the wall ([Formula: see text]), amplitude of the wave ([Formula: see text]), averaged flow rate ([Formula: see text]) and variable viscosity ([Formula: see text]). Subsequently, the peristaltic flow of a fluid with an exponential viscosity model is examined, based on the analytical solutions for pressure, wall shear stress, hydromechanical efficiency and streamline patterns in the finite-length tube. The results correlate well with earlier studies using a constant-viscosity formulation. This study reveals some important flow characteristics, including the observation that the pressure as well as both the number and the size of the lower trapped bolus increase. Furthermore, the study indicates that hydromechanical efficiency is reduced with increasing magnitude of the viscosity parameter.
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi
2015-11-01
We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse `temperature' Γ . The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ =0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞ ). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α =(log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N≤ 10).
On the identification of Dragon Kings among extreme-valued outliers
NASA Astrophysics Data System (ADS)
Riva, M.; Neuman, S. P.; Guadagnini, A.
2013-07-01
Extreme values of earth, environmental, ecological, physical, biological, financial and other variables often form outliers to heavy tails of empirical frequency distributions. Quite commonly such tails are approximated by stretched exponential, log-normal or power functions. Recently there has been an interest in distinguishing between extreme-valued outliers that belong to the parent population of most data in a sample and those that do not. The first type, called Gray Swans by Nassim Nicholas Taleb (often confused in the literature with Taleb's totally unknowable Black Swans), is drawn from a known distribution of the tails which can thus be extrapolated beyond the range of sampled values. However, the magnitudes and/or space-time locations of unsampled Gray Swans cannot be foretold. The second type of extreme-valued outliers, termed Dragon Kings by Didier Sornette, may in his view be sometimes predicted based on how other data in the sample behave. This intriguing prospect has recently motivated some authors to propose statistical tests capable of identifying Dragon Kings in a given random sample. Here we apply three such tests to log air permeability data measured on the faces of a Berea sandstone block and to synthetic data generated in a manner statistically consistent with these measurements. We interpret the measurements to be, and generate synthetic data that are, samples from α-stable sub-Gaussian random fields subordinated to truncated fractional Gaussian noise (tfGn). All these data have frequency distributions characterized by power-law tails with extreme-valued outliers about the tail edges.
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L sub p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L sub p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L sub p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
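For the unconstrained continuous case with p = 2, the maximizing density is the zero-mean Gaussian, which makes the straight-line relation between maximum differential entropy and the logarithm of the norm explicit. A minimal sketch (p = 2 only; the paper treats general p):

```python
import math

def max_diff_entropy_L2(norm):
    """Maximum differential entropy among real-valued densities with a given
    L2 norm (root second moment): attained by the zero-mean Gaussian with
    sigma equal to the norm, so H = 0.5 * log(2*pi*e*sigma^2)."""
    return 0.5 * math.log(2.0 * math.pi * math.e * norm ** 2)

# the straight-line relationship: H = log(norm) + constant, slope 1
entropies = [max_diff_entropy_L2(s) for s in (1.0, 2.0, 4.0)]
```

Each doubling of the L2 norm adds exactly log 2 nats to the maximum entropy, which is the slope-one straight line described in the abstract.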
Scheerans, Christian; Derendorf, Hartmut; Kloft, Charlotte
2008-04-01
The area under the plasma concentration-time curve from time zero to infinity (AUC(0-inf)) is generally considered to be the most appropriate measure of total drug exposure for bioavailability/bioequivalence studies of orally administered drugs. However, the lack of a standardised method for identifying the mono-exponential terminal phase of the concentration-time curve causes variability for the estimated AUC(0-inf). The present investigation introduces a simple method, called the two times t(max) method (TTT method) to reliably identify the mono-exponential terminal phase in the case of oral administration. The new method was tested by Monte Carlo simulation in Excel and compared with the adjusted r squared algorithm (ARS algorithm) frequently used in pharmacokinetic software programs. Statistical diagnostics of three different scenarios, each with 10,000 hypothetical patients showed that the new method provided unbiased average AUC(0-inf) estimates for orally administered drugs with a monophasic concentration-time curve post maximum concentration. In addition, the TTT method generally provided more precise estimates for AUC(0-inf) compared with the ARS algorithm. It was concluded that the TTT method is a most reasonable tool to be used as a standardised method in pharmacokinetic analysis especially bioequivalence studies to reliably identify the mono-exponential terminal phase for orally administered drugs showing a monophasic concentration-time profile.
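A sketch of the TTT idea under stated assumptions: points with t >= 2*tmax are treated as the mono-exponential terminal phase, lambda_z is obtained by log-linear least squares, and AUC(0-inf) is the trapezoidal AUC(0-tlast) plus C_last/lambda_z. The implementation details and the simulated one-compartment curve below are illustrative, not taken from the study:

```python
import math

def auc_inf_ttt(times, conc):
    """Two-times-tmax (TTT) sketch: identify the terminal phase as all
    points with t >= 2*tmax, estimate lambda_z by log-linear least squares,
    and extrapolate AUC beyond the last sample as C_last / lambda_z."""
    tmax = times[conc.index(max(conc))]
    tail = [(t, c) for t, c in zip(times, conc) if t >= 2.0 * tmax and c > 0]
    xs = [t for t, _ in tail]
    ys = [math.log(c) for _, c in tail]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    lam_z = -slope
    pts = list(zip(times, conc))
    auc_tlast = sum((t2 - t1) * (c1 + c2) / 2.0        # trapezoidal rule
                    for (t1, c1), (t2, c2) in zip(pts, pts[1:]))
    return auc_tlast + conc[-1] / lam_z

# one-compartment oral-dose curve with illustrative parameters:
# C(t) = 10*(exp(-0.1 t) - exp(-1.0 t)); analytic AUC(0-inf) = 10*(10 - 1) = 90
times = [0.5 * i for i in range(49)]                   # 0 to 24 h
conc = [10.0 * (math.exp(-0.1 * t) - math.exp(-1.0 * t)) for t in times]
auc = auc_inf_ttt(times, conc)
```

For this monophasic post-tmax profile the TTT estimate lands close to the analytic value of 90, illustrating why a fixed, tmax-anchored rule avoids the variability of ad hoc terminal-phase selection.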
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso
2016-01-01
Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process.
IMPORTANCE We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. PMID:27940547
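The Poisson-process view described above can be illustrated numerically via Poisson thinning: if initial counts are Poisson(λ) and each cell survives inactivation independently, the survivor counts are again Poisson. In the study, survival probabilities follow a Weibull-based function of time; the constant survival probability below is an illustrative stand-in:

```python
import math
import random

def simulate_survivors(lam=2.0, p_survive=0.3, n_samples=100_000, seed=1):
    """Poisson-thinning sketch: draw Poisson(lam) initial counts with
    Knuth's sampler, then let each cell survive independently with
    probability p_survive.  By thinning, the survivor count is again
    Poisson with mean lam * p_survive."""
    rng = random.Random(seed)
    threshold = math.exp(-lam)
    survivors = []
    for _ in range(n_samples):
        k, prod = 0, rng.random()          # Knuth's Poisson sampler
        while prod > threshold:
            k += 1
            prod *= rng.random()
        survivors.append(sum(rng.random() < p_survive for _ in range(k)))
    return survivors

surv = simulate_survivors()
mean = sum(surv) / len(surv)
var = sum((s - mean) ** 2 for s in surv) / len(surv)
```

The simulated survivor counts have mean close to lam * p_survive and, as a Poisson check, variance close to the mean.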
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2017-02-15
Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. 
We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. Copyright © 2017 Koyama et al.
NASA Astrophysics Data System (ADS)
Wang, X.; Tu, C. Y.; He, J.; Wang, L.
2017-12-01
The nature of the Elsässer variables z- observed in the Alfvénic solar wind has been a longstanding debate. It is widely believed that z- represents inward propagating Alfvén waves that undergo nonlinear interaction with z+ to produce the energy cascade. However, z- variations sometimes show the character of convective structures. Here we present a new analysis of z- autocorrelation functions to obtain more definite information on their nature. We find that there is usually a break point in the z- autocorrelation function when the fluctuations show nearly pure Alfvénicity. The break point observed by the Helios 2 spacecraft near 0.3 AU is at the first time lag (81 s), where the autocorrelation coefficient drops below its zero-lag value by more than 0.4. The autocorrelation function breaks also appear in WIND observations near 1 AU. The break separates the z- autocorrelation function into two parts, a fast-decreasing part and a slowly decreasing part, which cannot be described as a whole by an exponential formula. The breaks may indicate that the z- time series are composed of high-frequency white noise and low-frequency apparent structures, corresponding to the steep initial drop and the flat, slowly decreasing part of the function, respectively. This explanation is supported by a simple test with a superposition of an artificial random data series and a smoothed random data series. Since in many cases z- autocorrelation functions do not decrease very quickly at large time lag and cannot be considered to be of the Lanczos type, no reliable value for the correlation time can be derived. Our results show that in these cases of high Alfvénicity, z- should not be considered an inward-propagating wave. The power-law spectrum of z+ is then likely produced by the Kolmogorov fluid-turbulence cascade process.
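The superposition test mentioned in the abstract can be sketched as follows: adding white noise to a smoothed random series produces an autocorrelation function with a sharp drop at the first lag followed by slow decay. The amplitudes and smoothing window below are arbitrary illustrative choices:

```python
import random

def autocorr(x, lag):
    """Sample autocorrelation coefficient at the given lag."""
    n, m = len(x), sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    return num / sum((v - m) ** 2 for v in x)

rng = random.Random(0)
n, w = 20_000, 50
raw = [rng.gauss(0.0, 1.0) for _ in range(n + w)]
structure = [sum(raw[i:i + w]) / w for i in range(n)]  # smoothed random series
noise = [rng.gauss(0.0, 0.1) for _ in range(n)]        # white noise component
signal = [s + e for s, e in zip(structure, noise)]
acf = [autocorr(signal, k) for k in range(4)]
```

The white-noise part decorrelates within one lag (the break), while the smoothed "structure" part decays slowly afterward, reproducing the two-part shape described above.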
On parametric Gevrey asymptotics for some nonlinear initial value Cauchy problems
NASA Astrophysics Data System (ADS)
Lastra, A.; Malek, S.
2015-11-01
We study a nonlinear initial value Cauchy problem depending upon a complex perturbation parameter ɛ with vanishing initial data at complex time t = 0 and whose coefficients depend analytically on (ɛ, t) near the origin in C2 and are bounded holomorphic on some horizontal strip in C w.r.t. the space variable. This problem is assumed to be non-Kowalevskian in time t, therefore analytic solutions at t = 0 cannot be expected in general. Nevertheless, we are able to construct a family of actual holomorphic solutions defined on a common bounded open sector with vertex at 0 in time and on the given strip above in space, when the complex parameter ɛ belongs to a suitably chosen set of open bounded sectors whose union form a covering of some neighborhood Ω of 0 in C*. These solutions are achieved by means of Laplace and Fourier inverse transforms of some common ɛ-depending function on C × R, analytic near the origin and with exponential growth on some unbounded sectors with appropriate bisecting directions in the first variable and exponential decay in the second, when the perturbation parameter belongs to Ω. Moreover, these solutions satisfy the remarkable property that the difference between any two of them is exponentially flat for some integer order w.r.t. ɛ. With the help of the classical Ramis-Sibuya theorem, we obtain the existence of a formal series (generally divergent) in ɛ which is the common Gevrey asymptotic expansion of the built up actual solutions considered above.
Students' Misconceptions about Random Variables
ERIC Educational Resources Information Center
Kachapova, Farida; Kachapov, Ilias
2012-01-01
This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)
Genetic attack on neural cryptography.
Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido
2006-03-01
Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
NASA Astrophysics Data System (ADS)
Sabzikar, Farzad; Meerschaert, Mark M.; Chen, Jinghua
2015-07-01
Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
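A minimal sketch of the tempered fractional difference mentioned above, using Grünwald-type weights w_k = (-1)^k C(alpha, k) e^(-lam k); for alpha = 1 and lam = 0 it reduces to the ordinary first difference:

```python
import math

def tempered_weights(alpha, lam, n):
    """Grunwald-type weights w_k = (-1)^k * C(alpha, k) * exp(-lam*k) that
    define the tempered fractional difference of order alpha with
    tempering parameter lam."""
    g, out = 1.0, []
    for k in range(n):
        if k > 0:
            g *= (k - 1 - alpha) / k      # recursion for (-1)^k C(alpha, k)
        out.append(g * math.exp(-lam * k))
    return out

def tempered_frac_diff(x, alpha, lam):
    """Apply the tempered fractional difference to a series (truncated at
    the start of the series; a sketch, not a production scheme)."""
    w = tempered_weights(alpha, lam, len(x))
    return [sum(w[k] * x[t - k] for k in range(t + 1)) for t in range(len(x))]

# sanity check: alpha = 1, lam = 0 gives the ordinary first difference
diff1 = tempered_frac_diff([1.0, 2.0, 4.0, 7.0], alpha=1.0, lam=0.0)
```

Non-integer alpha with lam > 0 yields the exponentially tempered weights that underlie the numerical schemes for tempered fractional diffusion equations described in the abstract.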
Meerschaert, Mark M; Sabzikar, Farzad; Chen, Jinghua
2015-07-15
Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
MEERSCHAERT, MARK M.; SABZIKAR, FARZAD; CHEN, JINGHUA
2014-01-01
Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series. PMID:26085690
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabzikar, Farzad, E-mail: sabzika2@stt.msu.edu; Meerschaert, Mark M., E-mail: mcubed@stt.msu.edu; Chen, Jinghua, E-mail: cjhdzdz@163.com
2015-07-15
Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
Self Organized Criticality as a new paradigm of sleep regulation
NASA Astrophysics Data System (ADS)
Ivanov, Plamen Ch.; Bartsch, Ronny P.
2012-02-01
Humans and animals often exhibit brief awakenings from sleep (arousals), which are traditionally viewed as random disruptions of sleep caused by external stimuli or pathologic perturbations. However, our recent findings show that arousals exhibit complex temporal organization and scale-invariant behavior, characterized by a power-law probability distribution for their durations, while sleep stage durations exhibit exponential behavior. The co-existence of both scale-invariant and exponential processes generated by a single regulatory mechanism has not been observed in physiological systems until now. Such co-existence resembles the dynamical features of non-equilibrium systems exhibiting self-organized criticality (SOC). Our empirical analysis and modeling approaches based on modern concepts from statistical physics indicate that arousals are an integral part of sleep regulation and may be necessary to maintain and regulate healthy sleep by releasing accumulated excitations in the regulatory neuronal networks, following a SOC-type temporal organization.
Genetic attack on neural cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka
2006-03-15
Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
Genetic attack on neural cryptography
NASA Astrophysics Data System (ADS)
Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido
2006-03-01
Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
A coherent discrete variable representation method on a sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Hua -Gen
Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two-dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.
A coherent discrete variable representation method on a sphere
Yu, Hua -Gen
2017-09-05
Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two-dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.
A Multivariate Randomization Test of Association Applied to Cognitive Test Results
NASA Technical Reports Server (NTRS)
Ahumada, Albert; Beard, Bettina
2009-01-01
Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
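The described procedure can be sketched as follows. The synthetic data, the number of permutations, and the choice to keep the first column fixed are illustrative assumptions; only the criterion (largest eigenvalue of the correlation matrix) and the randomization (re-ordering k-1 of the variables) follow the abstract.

```python
import numpy as np

def randomization_association_test(X, n_perm=999, seed=0):
    """Randomization test of association among the k columns of X.

    Criterion: largest eigenvalue of the correlation matrix.
    Null distribution: independently re-order k-1 of the variables."""
    rng = np.random.default_rng(seed)
    n, k = X.shape

    def criterion(M):
        return np.linalg.eigvalsh(np.corrcoef(M, rowvar=False)).max()

    observed = criterion(X)
    exceed = 0
    for _ in range(n_perm):
        Xp = X.copy()
        for j in range(1, k):                 # shuffle k-1 of the columns
            Xp[:, j] = rng.permutation(Xp[:, j])
        if criterion(Xp) >= observed:
            exceed += 1
    p_value = (exceed + 1) / (n_perm + 1)     # add-one permutation p-value
    return observed, p_value

# strongly associated columns should yield a large eigenvalue and small p
rng = np.random.default_rng(1)
z = rng.normal(size=200)
X = np.column_stack([z + 0.3 * rng.normal(size=200) for _ in range(4)])
obs, p = randomization_association_test(X)
```

The test is distribution-free: the null reference is built entirely from the data themselves.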
Incidence of the Bertillon and Gompertz effects on the outcome of clinical trials
NASA Astrophysics Data System (ADS)
Roehner, Bertrand M.
2014-11-01
The accounts of medical trials provide very detailed information about the patients’ health conditions. By contrast, vital data such as marital status or age distribution are usually not given. Yet, some of these factors can have a notable impact on the overall death rate, thereby changing the outcome and conclusions of the trial. This paper focuses on two of these variables. The first is marital status; its effect on life expectancy (which will be referred to as the Bertillon effect) may double death rates in all age intervals. The second variable is the age distribution of the oldest patients. Because of the exponential nature of Gompertz’s law, changes in the distribution of ages in the oldest age group can have dramatic consequences on the overall number of deaths. One should recall that the death rate at the age of 82 is 40 times higher than at the age of 37. It will be seen that randomization alone can hardly take care of these problems. Appropriate remedies are, however, easy to formulate. First, the marital status of patients as well as the age distribution of those over 65 should be documented for both study groups. Then, thanks to these data and based on the Bertillon and Gompertz laws, it will become possible to perform appropriate corrections. Such corrections will notably improve the reliability and accuracy of the conclusions, especially in trials which include a large proportion of elderly subjects.
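A minimal sketch of the Gompertz-law arithmetic behind these numbers: the mortality rate is mu(x) = A exp(B x), the constant A cancels in ratios, and the exponent B used here is simply inferred from the 40-fold ratio quoted in the abstract (an illustrative calibration, not a fitted value).

```python
import math

# Gompertz law: mu(x) = A * exp(B * x); the quoted ratio mu(82)/mu(37) = 40
# pins down B (A cancels):
B = math.log(40) / (82 - 37)              # ~0.082 per year

def mortality_ratio(age1, age2, B=B):
    """Ratio mu(age1)/mu(age2) under Gompertz's law."""
    return math.exp(B * (age1 - age2))

# even a 5-year shift within the oldest age group changes death rates by ~50%,
# which is why the age distribution of patients over 65 must be documented
shift_effect = mortality_ratio(85, 80)
```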
A kinetic approach to some quasi-linear laws of macroeconomics
NASA Astrophysics Data System (ADS)
Gligor, M.; Ignat, M.
2002-11-01
Some previous works have presented data on wealth and income distributions in developed countries and have found that the great majority of the population is described by an exponential distribution, which suggests that a kinetic approach is adequate to describe this empirical evidence. The aim of our paper is to extend this framework by developing a systematic kinetic approach to socio-economic systems and to explain how linear laws, modelling correlations between macroeconomic variables, may arise in this context. Firstly we construct the Boltzmann kinetic equation for an idealised system composed of many individuals (workers, officers, business men, etc.), each of them receiving a certain income and spending money on their needs. To each individual a certain time-varying amount of money is associated, this amount being his/her phase-space coordinate. In this way the exponential distribution of money in a closed economy is explicitly found. The extension of this result to states near equilibrium gives us the possibility of taking into account the regular increase of the total amount of money, in accordance with modern economic theories. The Kubo-Green-Onsager linear response theory leads us to a set of linear equations between some macroeconomic variables. Finally, the validity of such laws is discussed in relation to time-reversal symmetry and is tested empirically using some macroeconomic time series.
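The exponential money distribution in a closed economy can be illustrated with a toy agent-based simulation in the spirit of kinetic exchange models: pairs of agents meet and randomly repartition their combined money. This is a sketch of the phenomenon, not the paper's Boltzmann-equation derivation; agent count, exchange rule, and run length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, m0 = 5000, 200_000, 100.0
money = np.full(N, m0)                 # closed economy: total money conserved

for _ in range(T):
    i, j = rng.integers(N, size=2)     # pick a random pair of agents
    if i == j:
        continue
    pot = money[i] + money[j]
    share = rng.random() * pot         # random repartition of the pair's money
    money[i], money[j] = share, pot - share

# Boltzmann-Gibbs-like equilibrium: P(m) ~ exp(-m/<m>), so the fraction of
# agents holding more than twice the mean should be close to exp(-2) ~ 0.135
frac_above = (money > 2 * m0).mean()
```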
Current-limiting and ultrafast system for the characterization of resistive random access memories.
Diaz-Fortuny, J; Maestro, M; Martin-Martinez, J; Crespo-Yepes, A; Rodriguez, R; Nafria, M; Aymerich, X
2016-06-01
A new system for the ultrafast characterization of the resistive switching phenomenon is developed to acquire the current during the Set and Reset process on a microsecond time scale. A new electronic circuit has been developed as a part of the main setup system, which is capable of (i) applying a hardware current limit ranging from nanoamperes up to milliamperes and (ii) converting the Set and Reset exponential gate current range into an equivalent linear voltage. The complete system setup allows measuring with a microsecond resolution. Some examples demonstrate that, with the developed setup, an in-depth analysis of the resistive switching phenomenon and random telegraph noise can be made.
NASA Astrophysics Data System (ADS)
Albeverio, Sergio; Tamura, Hiroshi
2018-04-01
We consider a model describing the coupling of a vector-valued and a scalar homogeneous Markovian random field over R4, interpreted as expressing the interaction between a charged scalar quantum field coupled with a nonlinear quantized electromagnetic field. Expectations of functionals of the random fields are expressed by Brownian bridges. Using this, together with Feynman-Kac-Itô type formulae and estimates on the small time and large time behaviour of Brownian functionals, we prove asymptotic upper and lower bounds on the kernel of the transition semigroup for our model. The upper bound gives faster than exponential decay for large distances of the corresponding resolvent (propagator).
NASA Technical Reports Server (NTRS)
Peters, C. (Principal Investigator)
1980-01-01
A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but have the same parameter set. In addition, it is shown that the consistent solution is a MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and the other in which the parameters in a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.
Hodge, Ian M
2005-09-22
A distribution of activation energies is introduced into the nonlinear Adam-Gibbs ("Hodge-Scherer") phenomenology for structural relaxation. The resulting dependencies of the stretched exponential beta parameter on thermodynamic temperature and fictive temperature (nonlinear thermorheological complexity) are derived. No additional adjustable parameters are introduced, and contact is made with the predictions of the random first-order transition theory of aging of Lubchenko and Wolynes [J. Chem. Phys. 121, 2852 (2004)].
2016-06-22
this assumption in a large-scale, 2-week military training exercise. We conducted a social network analysis of email communications among the multi...exponential random graph models challenge the aforementioned assumption, as increased email output was associated with lower individual situation... email links were more commonly formed among members of the command staff with both similar functions and levels of situation awareness, than between
Empirical Tests of Acceptance Sampling Plans
NASA Technical Reports Server (NTRS)
White, K. Preston, Jr.; Johnson, Kenneth L.
2012-01-01
Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have either binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that these provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
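The attributes (binomial) side of such plans reduces to binomial tail probabilities. A minimal sketch with a hypothetical plan (n=50, c=1 — illustrative values, not one of the plans tested in the paper): the operating characteristic gives the producer's risk (Type I) at good quality and the consumer's risk (Type II) at poor quality.

```python
from math import comb

def accept_prob(n, c, p):
    """Probability that an attributes plan accepts a lot: inspect n items,
    accept if at most c are nonconforming, given true fraction p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# hypothetical plan n=50, c=1
alpha = 1 - accept_prob(50, 1, 0.01)   # Type I risk at good quality (p = 1%)
beta = accept_prob(50, 1, 0.10)        # Type II risk at poor quality (p = 10%)
```

A variables plan achieving comparable alpha and beta typically needs a smaller n, which is the cost advantage the paper's tests probe.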
NASA Astrophysics Data System (ADS)
Ebaid, Abdelhalim; Wazwaz, Abdul-Majid; Alali, Elham; Masaedeh, Basem S.
2017-03-01
Very recently, it was observed that the temperature of nanofluids is finally governed by second-order ordinary differential equations with variable coefficients of exponential orders. Such coefficients were then transformed to polynomials type by using new independent variables. In this paper, a class of second-order ordinary differential equations with variable coefficients of polynomials type has been solved analytically. The analytical solution is expressed in terms of a hypergeometric function with generalized parameters. Moreover, applications of the present results have been applied on some selected nanofluids problems in the literature. The exact solutions in the literature were derived as special cases of our generalized analytical solution.
Asymptotic laws for random knot diagrams
NASA Astrophysics Data System (ADS)
Chapman, Harrison
2017-06-01
We study random knotting by considering knot and link diagrams as decorated, (rooted) topological maps on spheres and pulling them uniformly from among sets of a given number of vertices n, as first established in recent work with Cantarella and Mastin. The knot diagram model is an exciting new model which captures both the random geometry of space curve models of knotting as well as the ease of computing invariants from diagrams. We prove that unknot diagrams are asymptotically exponentially rare, an analogue of Sumners and Whittington’s landmark result for self-avoiding polygons. Our proof uses the same key idea: we first show that knot diagrams obey a pattern theorem, which describes their fractal structure. We examine how quickly this behavior occurs in practice. As a consequence, almost all diagrams are asymmetric, simplifying sampling from this model. We conclude with experimental data on knotting in this model. This model of random knotting is similar to those studied by Diao et al, and Dunfield et al.
NASA Astrophysics Data System (ADS)
Kwon, Sungchul; Kim, Jin Min
2015-01-01
For a fixed-energy (FE) Manna sandpile model in one dimension, we investigate the effects of random initial conditions on the dynamical scaling behavior of an order parameter. In the FE Manna model, the density ρ of total particles is conserved, and an absorbing phase transition occurs at ρc as ρ varies. In this work, we show that, for a given ρ , random initial distributions of particles lead to the domain structure in which domains with particle densities higher and lower than ρc alternate with each other. In the domain structure, the dominant length scale is the average domain length, which increases via the coalescence of adjacent domains. At ρc, the domain structure slows down the decay of an order parameter and also causes anomalous finite-size effects, i.e., power-law decay followed by an exponential one before the quasisteady state. As a result, the interplay of particle conservation and random initial conditions causes the domain structure, which is the origin of the anomalous dynamical scaling behaviors for random initial conditions.
Higher-order phase transitions on financial markets
NASA Astrophysics Data System (ADS)
Kasprzak, A.; Kutner, R.; Perelló, J.; Masoliver, J.
2010-08-01
Statistical and thermodynamic properties of the anomalous multifractal structure of random interevent (or intertransaction) times were thoroughly studied by using the extended continuous-time random walk (CTRW) formalism of Montroll, Weiss, Scher, and Lax. Although this formalism is quite general (and can be applied to any interhuman communication with nontrivial priority), we consider it in the context of a financial market where heterogeneous agent activities can occur within a wide spectrum of time scales. As the main general consequence, we found (by additionally using the Saddle-Point Approximation) the scaling or power-dependent form of the partition function, Z(q'). It diverges for any negative scaling powers q' (which justifies the name anomalous) while for positive ones it shows the scaling with the general exponent τ(q'). This exponent is the nonanalytic (singular) or noninteger power of q', which is one of the pillars of higher-order phase transitions. In the definition of the partition function we used the pausing-time distribution (PTD) as the central one, which takes the form of convolution (or superstatistics used, e.g. for describing turbulence as well as the financial market). Its integral kernel is given by the stretched exponential distribution (often used in disordered systems). This kernel extends both the exponential distribution assumed in the original version of the CTRW formalism (for description of the transient photocurrent measured in amorphous glassy material) as well as the Gaussian one sometimes used in this context (e.g. for diffusion of hydrogen in amorphous metals or for aging effects in glasses). Our most important finding is the third- and higher-order phase transitions, which can be roughly interpreted as transitions between the phase where high frequency trading is most visible and the phase defined by low frequency trading.
The specific order of the phase transition directly depends upon the shape exponent α defining the stretched exponential integral kernel. On this basis a simple practical hint for investors was formulated.
Optimal search strategies of space-time coupled random walkers with finite lifetimes
NASA Astrophysics Data System (ADS)
Campos, D.; Abad, E.; Méndez, V.; Yuste, S. B.; Lindenberg, K.
2015-05-01
We present a simple paradigm for detection of an immobile target by a space-time coupled random walker with a finite lifetime. The motion of the walker is characterized by linear displacements at a fixed speed and exponentially distributed duration, interrupted by random changes in the direction of motion and resumption of motion in the new direction with the same speed. We call these walkers "mortal creepers." A mortal creeper may die at any time during its motion according to an exponential decay law characterized by a finite mean death rate ωm. While still alive, the creeper has a finite mean frequency ω of change of the direction of motion. In particular, we consider the efficiency of the target search process, characterized by the probability that the creeper will eventually detect the target. Analytic results confirmed by numerical results show that there is an ωm-dependent optimal frequency ω =ωopt that maximizes the probability of eventual target detection. We work primarily in one-dimensional (d =1 ) domains and examine the role of initial conditions and of finite domain sizes. Numerical results in d =2 domains confirm the existence of an optimal frequency of change of direction, thereby suggesting that the observed effects are robust to changes in dimensionality. In the d =1 case, explicit expressions for the probability of target detection in the long time limit are given. In the case of an infinite domain, we compute the detection probability for arbitrary times and study its early- and late-time behavior. We further consider the survival probability of the target in the presence of many independent creepers beginning their motion at the same location and at the same time. We also consider a version of the standard "target problem" in which many creepers start at random locations at the same time.
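A minimal 1D Monte Carlo sketch of a mortal creeper. The assumptions here are illustrative, not the paper's analytic setup: direction changes are full reversals at rate omega, the creeper starts at distance L heading away from the target, and all parameter values are arbitrary. The run shows the key qualitative result — an intermediate turning frequency maximizes the eventual detection probability.

```python
import numpy as np

def detection_prob(omega, omega_m=1.0, L=1.0, v=1.0, n=10_000, seed=3):
    """Probability that a 1D mortal creeper starting at x = L, heading away
    from a target at x = 0, reversing at rate omega and dying at rate
    omega_m, eventually reaches the target (Monte Carlo estimate)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n):
        x, t, direction = L, 0.0, +1              # initially moving away
        lifetime = rng.exponential(1.0 / omega_m) # exponential decay law
        while t < lifetime:
            run = min(rng.exponential(1.0 / omega), lifetime - t)
            if direction < 0 and x - v * run <= 0.0:
                hits += 1                         # reaches target while alive
                break
            x += direction * v * run
            t += run
            direction = -direction                # reversal at end of each run
    return hits / n

# too little turning never corrects the bad initial heading; too much turning
# makes motion diffusive and slow: detection peaks at an intermediate omega
probs = {w: detection_prob(w) for w in (0.2, 1.0, 8.0)}
```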
Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach
NASA Astrophysics Data System (ADS)
Thomas, C.; Lark, R. M.
2013-12-01
Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test sensitivities of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, the mean modelled as a function of a fixed effect, and two random components, one of which is independently and identically distributed (iid) and the second of which is temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect. In one the correlation decays exponentially with time. 
In the second (spherical) model, it cuts off at a temporal range. Having fitted the model, multiple realisations were generated; the random effects were simulated by specifying a covariance matrix for the simulated values, with the estimated parameters. The Cholesky factorisation of the covariance matrix was computed and realisations of the random component of the model generated by pre-multiplying a vector of iid standard Gaussian variables by the lower triangular factor. The resulting random variate was added to the mean value computed from the fixed effects, and the result back-transformed to the original scale of the measurement. Realistic simulations result from the approach described above. Background exploratory data analysis was undertaken on 20-day sets of 30-minute buoy data, selected from days 5-24 of months January, April, July, October, 2011, to elucidate daily to weekly variations, and to keep numerical analysis tractable computationally. Work remains to be undertaken to develop suitable models for synthetic directional data. We suggest that the general principles of the method will have applications in other geomorphological modelling endeavours requiring time series of stochastically variable environmental parameters.
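The simulation recipe above — a periodic fixed-effect mean plus a Cholesky-factored correlated random effect and an iid nugget, then a back-transform — can be sketched as follows. The log transform (standing in for the paper's Box-Cox transform) and all variance, range, and mean parameters are illustrative assumptions, not the fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 365                                   # one simulated year of daily values
t = np.arange(n, dtype=float)

# fixed effect: a 12-month periodic mean on the (assumed log) transformed scale
mean = 0.2 + 0.3 * np.cos(2 * np.pi * t / 365.0)

# random effects: exponentially decaying temporal correlation + iid nugget
sill, range_, nugget = 0.09, 20.0, 0.01   # illustrative variance parameters
lags = np.abs(t[:, None] - t[None, :])
C = sill * np.exp(-lags / range_) + nugget * np.eye(n)

Lc = np.linalg.cholesky(C)                # C = Lc @ Lc.T
z = Lc @ rng.standard_normal(n)           # correlated Gaussian realisation
hs = np.exp(mean + z)                     # back-transform to wave heights (m)

lag1 = np.corrcoef(z[:-1], z[1:])[0, 1]   # reflects the imposed correlation
```

Swapping the exponential kernel for a spherical one (zero beyond a cutoff range) changes only the construction of C.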
Stretched exponential dynamics of coupled logistic maps on a small-world network
NASA Astrophysics Data System (ADS)
Mahajan, Ashwini V.; Gade, Prashant M.
2018-02-01
We investigate the dynamic phase transition from a partially or fully arrested state to spatiotemporal chaos in coupled logistic maps on a small-world network. Persistence of local variables in a coarse-grained sense acts as an excellent order parameter to study this transition. We investigate the phase diagram by varying coupling strength and small-world rewiring probability p of nonlocal connections. The persistent region is a compact region bounded by two critical lines where band-merging crisis occurs. On one critical line, persistence shows a nonexponential (stretched exponential) decay for all p, while on the other it shows a crossover from nonexponential to exponential behavior as p → 1. With an effectively antiferromagnetic coupling, coupling to two neighbors on either side leads to exchange frustration. Apart from exchange frustration, non-bipartite topology and nonlocal couplings in a small-world network could be a reason for anomalous relaxation. The distribution of trap times in the asymptotic regime has a long tail as well. The dependence of temporal evolution of persistence on initial conditions is studied and a scaling form for persistence after waiting time is proposed. We present a simple possible model for this behavior.
Explaining mortality rate plateaus
Weitz, Joshua S.; Fraser, Hunter B.
2001-01-01
We propose a stochastic model of aging to explain deviations from exponential growth in mortality rates commonly observed in empirical studies. Mortality rate plateaus are explained as a generic consequence of considering death in terms of first passage times for processes undergoing a random walk with drift. Simulations of populations with age-dependent distributions of viabilities agree with a wide array of experimental results. The influence of cohort size is well accounted for by the stochastic nature of the model. PMID:11752476
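A sketch of the first-passage mechanism described above: each individual's "viability" performs a discrete-time random walk with downward drift, and death is the first passage below zero. All parameter values are illustrative. The age-specific hazard first rises roughly exponentially and then levels off near the diffusion-limit plateau drift²/(2 sigma²), rather than growing without bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n, x0, drift, sigma, tmax = 200_000, 10.0, 0.5, 2.0, 60
x = np.full(n, x0)                     # initial viability of each individual
alive = np.ones(n, dtype=bool)
deaths = np.zeros(tmax, dtype=int)

for age in range(tmax):                # random walk with drift toward zero
    x[alive] += -drift + sigma * rng.standard_normal(alive.sum())
    died = alive & (x <= 0.0)          # death = first passage below zero
    deaths[age] = died.sum()
    alive &= ~died

at_risk = n - np.concatenate(([0], np.cumsum(deaths)[:-1]))
hazard = deaths / np.maximum(at_risk, 1)   # age-specific mortality rate
# hazard rises at first, then plateaus near drift**2 / (2 * sigma**2) ~ 0.031
```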
Dotov, D G; Bayard, S; Cochen de Cock, V; Geny, C; Driss, V; Garrigue, G; Bardy, B; Dalla Bella, S
2017-01-01
Rhythmic auditory cueing improves certain gait symptoms of Parkinson's disease (PD). Cues are typically stimuli or beats with a fixed inter-beat interval. We show that isochronous cueing has an unwanted side-effect in that it exacerbates one of the motor symptoms characteristic of advanced PD. Whereas the parameters of the stride cycle of healthy walkers and early patients possess a persistent correlation in time, or long-range correlation (LRC), isochronous cueing renders stride-to-stride variability random. Random stride cycle variability is also associated with reduced gait stability and lack of flexibility. To investigate how to prevent patients from acquiring a random stride cycle pattern, we tested rhythmic cueing which mimics the properties of variability found in healthy gait (biological variability). PD patients (n=19) and age-matched healthy participants (n=19) walked with three rhythmic cueing stimuli: isochronous, with random variability, and with biological variability (LRC). Synchronization was not instructed. The persistent correlation in gait was preserved only with stimuli with biological variability, equally for patients and controls (p's<0.05). In contrast, cueing with isochronous or randomly varying inter-stimulus/beat intervals removed the LRC in the stride cycle. Notably, the individual's tendency to synchronize steps with beats determined the amount of negative effects of isochronous and random cues (p's<0.05) but not the positive effect of biological variability. Stimulus variability and patients' propensity to synchronize play a critical role in fostering healthier gait dynamics during cueing. The beneficial effects of biological variability provide useful guidelines for improving existing cueing treatments. Copyright © 2016 Elsevier B.V. All rights reserved.
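Long-range correlation (LRC) in stride-time series is commonly quantified with detrended fluctuation analysis (DFA). The abstract does not specify the authors' estimator, so the following is a generic first-order DFA sketch, checked on white noise (expected exponent α ≈ 0.5, the "random" variability regime) and on a random walk (α ≈ 1.5); persistent LRC series fall in between, with α > 0.5.

```python
import numpy as np

def dfa_exponent(series, windows):
    """First-order detrended fluctuation analysis scaling exponent."""
    y = np.cumsum(series - series.mean())      # integrated profile
    fluct = []
    for w in windows:
        n_seg = len(y) // w
        f2 = 0.0
        for s in range(n_seg):
            seg = y[s * w:(s + 1) * w]
            tloc = np.arange(w)
            coef = np.polyfit(tloc, seg, 1)    # local linear detrend
            f2 += ((seg - np.polyval(coef, tloc)) ** 2).mean()
        fluct.append(np.sqrt(f2 / n_seg))
    slope, _ = np.polyfit(np.log(windows), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(0)
windows = np.array([16, 32, 64, 128, 256])
alpha_white = dfa_exponent(rng.standard_normal(8192), windows)          # ~0.5
alpha_rw = dfa_exponent(np.cumsum(rng.standard_normal(8192)), windows)  # ~1.5
```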
Vacuum statistics and stability in axionic landscapes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masoumi, Ali; Vilenkin, Alexander, E-mail: ali@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu
2016-03-01
We investigate vacuum statistics and stability in random axionic landscapes. For this purpose we developed an algorithm for a quick evaluation of the tunneling action, which in most cases is accurate within 10%. We find that stability of a vacuum is strongly correlated with its energy density, with lifetime rapidly growing as the energy density is decreased. On the other hand, the probability P(B) for a vacuum to have a tunneling action B greater than a given value declines as a slow power law in B. This is in sharp contrast with the studies of random quartic potentials, which found a fast exponential decline of P(B). Our results suggest that the total number of relatively stable vacua (say, with B>100) grows exponentially with the number of fields N and can get extremely large for N ≳ 100. The problem with this kind of model is that the stable vacua are concentrated near the absolute minimum of the potential, so the observed value of the cosmological constant cannot be explained without fine-tuning. To address this difficulty, we consider a modification of the model, where the axions acquire a quadratic mass term, due to their mixing with 4-form fields. This results in a larger landscape with a much broader distribution of vacuum energies. The number of relatively stable vacua in such models can still be extremely large.
Path statistics, memory, and coarse-graining of continuous-time random walks on networks
Kion-Crosby, Willow; Morozov, Alexandre V.
2015-01-01
Continuous-time random walks (CTRWs) on discrete state spaces, ranging from regular lattices to complex networks, are ubiquitous across physics, chemistry, and biology. Models with coarse-grained states (for example, those employed in studies of molecular kinetics) or spatial disorder can give rise to memory and non-exponential distributions of waiting times and first-passage statistics. However, existing methods for analyzing CTRWs on complex energy landscapes do not address these effects. Here we use statistical mechanics of the nonequilibrium path ensemble to characterize first-passage CTRWs on networks with arbitrary connectivity, energy landscape, and waiting time distributions. Our approach can be applied to calculating higher moments (beyond the mean) of path length, time, and action, as well as statistics of any conservative or non-conservative force along a path. For homogeneous networks, we derive exact relations between length and time moments, quantifying the validity of approximating a continuous-time process with its discrete-time projection. For more general models, we obtain recursion relations, reminiscent of transfer matrix and exact enumeration techniques, to efficiently calculate path statistics numerically. We have implemented our algorithm in PathMAN (Path Matrix Algorithm for Networks), a Python script that users can apply to their model of choice. We demonstrate the algorithm on a few representative examples which underscore the importance of non-exponential distributions, memory, and coarse-graining in CTRWs. PMID:26646868
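The role of non-exponential waiting times can be illustrated with a toy CTRW on a small ring network (this is a hand-rolled Monte Carlo sketch, not PathMAN, and the network, waiting-time laws, and sample sizes are illustrative). By Wald's identity the mean first-passage time equals the mean path length times the mean wait, so it is insensitive to the waiting-time shape; higher moments are not — a heavier-tailed wait with the same mean inflates the first-passage-time variance.

```python
import numpy as np

rng = np.random.default_rng(7)

def ctrw_first_passage(wait_sampler, n_walks=10_000, n_sites=6):
    """First-passage time and path length of an unbiased CTRW on a ring of
    n_sites, from site 0 to site n_sites - 1, with iid waiting times."""
    times = np.empty(n_walks)
    lengths = np.empty(n_walks, dtype=int)
    for i in range(n_walks):
        site, t, steps = 0, 0.0, 0
        while site != n_sites - 1:
            t += wait_sampler()                       # wait, then hop
            site = (site + (1 if rng.random() < 0.5 else -1)) % n_sites
            steps += 1
        times[i], lengths[i] = t, steps
    return times, lengths

# exponential vs lognormal waiting times, both with unit mean
t_exp, n_exp = ctrw_first_passage(lambda: rng.exponential(1.0))
mu, s2 = -np.log(2.0), np.log(4.0)      # lognormal with mean 1, variance 3
t_ln, n_ln = ctrw_first_passage(lambda: rng.lognormal(mu, np.sqrt(s2)))
# mean path length on this ring is 5 steps, so both mean times are ~5;
# the lognormal (non-exponential) waits inflate only the variance
```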
Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W
1988-04-22
Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mkhitaryan, V. V.; Dobrovitski, V. V.
2015-08-24
The hyperfine coupling between the spin of a charge carrier and the nuclear spin bath is a predominant channel for the carrier spin relaxation in many organic semiconductors. We theoretically investigate the hyperfine-induced spin relaxation of a carrier performing a random walk on a d-dimensional regular lattice, in a transport regime typical for organic semiconductors. We show that in d=1 and 2, the time dependence of the space-integrated spin polarization P(t) is dominated by a superexponential decay, crossing over to a stretched-exponential tail at long times. The faster decay is attributed to multiple self-intersections (returns) of the random-walk trajectories, which occur more often in lower dimensions. We also show, analytically and numerically, that the returns lead to sensitivity of P(t) to external electric and magnetic fields, and this sensitivity strongly depends on dimensionality of the system (d=1 versus d=3). We investigate in detail the coordinate dependence of the time-integrated spin polarization σ(r), which can be probed in the spin-transport experiments with spin-polarized electrodes. We also demonstrate that, while σ(r) is essentially exponential, the effect of multiple self-intersections can be identified in transport measurements from the strong dependence of the spin-decay length on the external magnetic and electric fields.
Variable mass pendulum behaviour processed by wavelet analysis
NASA Astrophysics Data System (ADS)
Caccamo, M. T.; Magazù, S.
2017-01-01
The present work highlights how, in order to characterize the motion of a variable mass pendulum, wavelet analysis can be an effective tool in furnishing information on the time evolution of the oscillation spectral content. In particular, the wavelet transform is applied to process the motion of a hung funnel that loses fine sand at an exponential rate; it is shown how, in contrast to the Fourier transform which furnishes only an average frequency value for the motion, the wavelet approach makes it possible to perform a joint time-frequency analysis. The work is addressed to undergraduate and graduate students.
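A minimal sketch of the joint time-frequency idea: a synthetic signal whose oscillation frequency drifts slowly (a stand-in for the sand-losing pendulum, not the paper's data) is probed with a hand-rolled Morlet wavelet at two different times. The signal model, drift law, and wavelet parameter w0 are illustrative assumptions. A Fourier transform of the whole record would return a single averaged frequency; the wavelet estimates recover the drift.

```python
import numpy as np

fs = 200.0                                     # sampling rate (Hz)
t = np.arange(0.0, 40.0, 1.0 / fs)
f_inst = 1.0 + 0.5 * np.exp(-t / 15.0)         # slowly decaying frequency
signal = np.cos(2 * np.pi * np.cumsum(f_inst) / fs)

def morlet_peak(sig, t, t0, freqs, w0=6.0):
    """Frequency of maximum Morlet-wavelet power at time t0."""
    powers = []
    for f in freqs:
        s = w0 / (2.0 * np.pi * f)             # scale matching frequency f
        win = np.exp(-((t - t0) ** 2) / (2.0 * s ** 2))
        wav = win * np.exp(2j * np.pi * f * (t - t0))
        powers.append(abs((sig * np.conj(wav)).sum()))
    return freqs[int(np.argmax(powers))]

freqs = np.linspace(0.6, 1.8, 121)
f_early = morlet_peak(signal, t, 5.0, freqs)   # near f_inst(5)  ~ 1.36 Hz
f_late = morlet_peak(signal, t, 35.0, freqs)   # near f_inst(35) ~ 1.05 Hz
```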
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
Multiplicative Forests for Continuous-Time Processes
Weiss, Jeremy C.; Natarajan, Sriraam; Page, David
2013-01-01
Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability. PMID:25284967
An instrumental variable random-coefficients model for binary outcomes
Chesher, Andrew; Rosen, Adam M
2014-01-01
In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent with the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. PMID:25798048
Polynomial chaos expansion with random and fuzzy variables
NASA Astrophysics Data System (ADS)
Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.
2016-06-01
A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
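As a minimal numerical sketch of the Legendre-based expansion above (assuming a single uniform variable on (-1, 1) and the placeholder test function f(x) = x², neither taken from the paper), the PCE coefficients can be computed by Gauss-Legendre projection and the response moments read off from them:

```python
# Sketch: Legendre polynomial chaos expansion of a function of a uniform
# variable, with mean/variance recovered from the coefficients. The test
# function and truncation order are illustrative choices, not from the paper.
import numpy as np
from numpy.polynomial import legendre

def pce_coeffs(f, order, nquad=50):
    """Coefficients c_k of f(x) = sum_k c_k P_k(x), x ~ Uniform(-1, 1)."""
    x, w = legendre.leggauss(nquad)          # Gauss-Legendre nodes/weights
    coeffs = []
    for k in range(order + 1):
        Pk = legendre.Legendre.basis(k)(x)
        # c_k = (2k+1)/2 * integral_{-1}^{1} f(x) P_k(x) dx
        coeffs.append((2 * k + 1) / 2 * np.sum(w * f(x) * Pk))
    return np.array(coeffs)

def pce_moments(coeffs):
    """Mean and variance from PCE coefficients, using E[P_k^2] = 1/(2k+1)."""
    mean = coeffs[0]
    ks = np.arange(1, len(coeffs))
    var = np.sum(coeffs[1:] ** 2 / (2 * ks + 1))
    return mean, var

c = pce_coeffs(lambda x: x ** 2, order=4)
mean, var = pce_moments(c)
```

For f(x) = x² the exact values are mean 1/3 and variance 4/45, which the truncated expansion reproduces exactly since the function is itself a degree-2 polynomial.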
The statistical mechanics of relativistic orbits around a massive black hole
NASA Astrophysics Data System (ADS)
Bar-Or, Ben; Alexander, Tal
2014-12-01
Stars around a massive black hole (MBH) move on nearly fixed Keplerian orbits, in a centrally-dominated potential. The random fluctuations of the discrete stellar background cause small potential perturbations, which accelerate the evolution of orbital angular momentum by resonant relaxation. This drives many phenomena near MBHs, such as extreme mass-ratio gravitational wave inspirals, the warping of accretion disks, and the formation of exotic stellar populations. We present here a formal statistical mechanics framework to analyze such systems, where the background potential is described as a correlated Gaussian noise. We derive the leading order, phase-averaged 3D stochastic Hamiltonian equations of motion, for evolving the orbital elements of a test star, and obtain the effective Fokker-Planck equation for a general correlated Gaussian noise, for evolving the stellar distribution function. We show that the evolution of angular momentum depends critically on the temporal smoothness of the background potential fluctuations. Smooth noise has a maximal variability frequency ν_max. We show that in the presence of such noise, the evolution of the normalized angular momentum j = √(1 − e²) of a relativistic test star, undergoing Schwarzschild (in-plane) general relativistic precession with frequency ν_GR/j², is exponentially suppressed for j < j_b, where ν_GR/j_b² ∼ ν_max, due to the adiabatic invariance of the precession against the slowly varying random background torques. This results in an effective Schwarzschild precession-induced barrier in angular momentum. When j_b is large enough, this barrier can have significant dynamical implications for processes near the MBH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Kunkun, E-mail: ktg@illinois.edu; Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence; Congedo, Pietro M.
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
Universality and Thouless energy in the supersymmetric Sachdev-Ye-Kitaev model
NASA Astrophysics Data System (ADS)
García-García, Antonio M.; Jia, Yiyang; Verbaarschot, Jacobus J. M.
2018-05-01
We investigate the supersymmetric Sachdev-Ye-Kitaev (SYK) model, N Majorana fermions with infinite range interactions in 0+1 dimensions. We have found that, close to the ground state E ≈ 0, discrete symmetries alter qualitatively the spectral properties with respect to the non-supersymmetric SYK model. The average spectral density at finite N, which we compute analytically and numerically, grows exponentially with N for E ≈ 0. However, the chiral condensate, which is normalized with respect to the total number of eigenvalues, vanishes in the thermodynamic limit. Slightly above E ≈ 0, the spectral density grows exponentially with the energy. Deep in the quantum regime, corresponding to the first O(N) eigenvalues, the average spectral density is universal and well described by random matrix ensembles with chiral and superconducting discrete symmetries. The dynamics for E ≈ 0 is investigated by level fluctuations. Also in this case we find excellent agreement with the prediction of chiral and superconducting random matrix ensembles for eigenvalue separations smaller than the Thouless energy, which seems to scale linearly with N. Deviations beyond the Thouless energy, which describes how ergodicity is approached, are universally characterized by a quadratic growth of the number variance. In the time domain, we have found analytically that the spectral form factor g(t), obtained from the connected two-level correlation function of the unfolded spectrum, decays as 1/t² for times shorter than but comparable to the Thouless time, with g(0) related to the coefficient of the quadratic growth of the number variance. Our results provide further support that quantum black holes are ergodic and therefore can be classified by random matrix theory.
The influence of meteorological variables on CO2 and CH4 trends recorded at a semi-natural station.
Pérez, Isidro A; Sánchez, M Luisa; García, M Ángeles; Pardo, Nuria; Fernández-Duque, Beatriz
2018-03-01
CO2 and CH4 evolution is usually linked with sources, sinks and their changes. However, this study highlights the role of meteorological variables. It aims to quantify their contribution to the trend of these greenhouse gases and to determine which contribute most. Six years of measurements at a semi-natural site in northern Spain were considered. Three sections are established: the first focuses on monthly deciles, the second explores the relationship between pairs of meteorological variables, and the third investigates the relationship between meteorological variables and changes in CO2 and CH4. In the first section, monthly outliers were more marked for CO2 than for CH4. The evolution of monthly deciles was fitted to three simple expressions: linear, quadratic and exponential. The linear and exponential fits are similar, whereas the quadratic evolution is the most flexible, since it provides a variable rate of concentration change and a better fit. With this last evolution, a decrease in the change rate was observed for low CO2 deciles, whereas an increasing change rate prevailed for the rest and was more accentuated for CH4. In the second section, meteorological variables were provided by a trajectory model. Backward trajectories from 1-day prior to reaching the measurement site were used to calculate distance and direction averages as well as the recirculation factor. Terciles of these variables were determined in order to establish three intervals with low, medium and high values. These intervals were used to classify the variables following their interval widths and skewnesses. The best correlation between pairs of meteorological variables was observed for the average distance, in particular with horizontal wind speed. Sinusoidal relationships with the average direction were obtained for average distance and for vertical wind speed. Finally, in the third section, the quadratic evolution was considered in each interval of all the meteorological variables. As regards the main result, the greatest increases were obtained for high potential temperature for both gases, followed by low and medium boundary layer height for CO2 and CH4, respectively. Combining both meteorological variables provided increases of 22 ± 9 and 0.070 ± 0.019 ppm for CO2 and CH4, respectively, although the number of observations affected is small, around 7%. Copyright © 2017 Elsevier Ltd. All rights reserved.
Computer simulation of random variables and vectors with arbitrary probability distribution laws
NASA Technical Reports Server (NTRS)
Bogdan, V. M.
1981-01-01
Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n-dimensional random variables if their joint probability distribution is known.
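In one dimension, the construction reduces to the familiar inverse-CDF (inverse transform) method; a hedged sketch, using an illustrative exponential/uniform pair rather than the report's general recursion, might look like:

```python
# Sketch of the one-dimensional building block, inverse-CDF sampling, plus
# a 2D example built from conditional CDFs. The exponential/uniform pair
# is an illustrative choice, not an example from the report.
import math
import random

def sample_exponential(u, rate=1.0):
    """x = F^{-1}(u) for the Exp(rate) distribution: F(x) = 1 - exp(-rate*x)."""
    return -math.log(1.0 - u) / rate

def sample_joint(u1, u2):
    """x1 ~ Exp(1); x2 | x1 ~ Uniform(0, x1), via conditional inverse CDFs."""
    x1 = sample_exponential(u1)
    x2 = u2 * x1            # inverse of the conditional CDF F(x2|x1) = x2/x1
    return x1, x2

random.seed(0)
samples = [sample_joint(random.random(), random.random()) for _ in range(100000)]
mean_x1 = sum(x1 for x1, _ in samples) / len(samples)   # should be near 1
```

Feeding independent uniforms through the conditional inverse CDFs in sequence is exactly the recursive pattern the abstract describes, here truncated to two dimensions.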
Exponentially Weighted Moving Average Change Detection Around the Country (and the World)
NASA Astrophysics Data System (ADS)
Brooks, E.; Wynne, R. H.; Thomas, V. A.; Blinn, C. E.; Coulston, J.
2014-12-01
With continuous, freely available moderate-resolution imagery of the Earth's surface, and with the promise of more imagery to come, change detection based on continuous process models continues to be a major area of research. One such method, exponentially weighted moving average change detection (EWMACD), is based on a mixture of harmonic regression (HR) and statistical quality control, a branch of statistics commonly used to detect aberrations in industrial and medical processes. By using HR to approximate per-pixel seasonal curves, the resulting residuals characterize information about the pixels which stands outside of the periodic structure imposed by HR. For stable pixels, these residuals behave as might be expected, but in the presence of changes (growth, stress, removal), the residuals clearly show these changes when they are used as inputs into an EWMA chart. In prior work in Alabama, USA, EWMACD yielded an overall accuracy of 85% on a random sample of known thinned stands, in some cases detecting thinnings as sparse as 25% removal. It was also shown to correctly identify the timing of the thinning activity, typically within a single image date of the change. The net result of the algorithm was to produce date-by-date maps of afforestation and deforestation on a variable scale of severity. In other research, EWMACD has also been applied to detect land use and land cover changes in central Java, Indonesia, despite the heavy incidence of clouds and a monsoonal climate. Preliminary results show that EWMACD accurately identifies land use conversion (agricultural to residential, for example) and also identifies neighborhoods where the building density has increased, removing neighborhood vegetation. In both cases, initial results indicate the potential utility of EWMACD to detect both gross and subtle ecosystem disturbance, but further testing across a range of ecosystems and disturbances is clearly warranted.
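The EWMA-chart step at the core of EWMACD can be sketched as follows; the smoothing weight lam and control-limit multiplier L are conventional chart parameters, not values from the study, and the residual stream is simulated:

```python
# Minimal sketch of an EWMA control chart applied to a stream of
# harmonic-regression residuals. lam and L are conventional chart
# defaults, not parameters taken from the EWMACD study.
def ewma_flags(residuals, lam=0.3, L=3.0, sigma=1.0):
    """Return (ewma_series, out_of_control_flags) for a residual stream."""
    z, zs, flags = 0.0, [], []
    for t, x in enumerate(residuals, start=1):
        z = lam * x + (1.0 - lam) * z
        # exact time-varying control limit for a zero-start EWMA chart
        limit = L * sigma * (lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))) ** 0.5
        zs.append(z)
        flags.append(abs(z) > limit)
    return zs, flags

stable = [0.1, -0.2, 0.05, 0.15, -0.1, 0.0]     # residuals of a stable pixel
disturbed = stable + [2.5, 2.8, 3.1, 2.9]       # simulated disturbance onset
_, flags = ewma_flags(disturbed)
```

The chart stays quiet over the stable residuals and raises persistent flags shortly after the simulated disturbance begins, which is the behavior the abstract describes for thinning detection.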
2015-01-07
Measures of residual risk view the random variable of interest in concert with an auxiliary random vector that helps to manage, predict, and mitigate the risk in the original variable. Residual risk can be exemplified as a quantification of the improved…
Raw and Central Moments of Binomial Random Variables via Stirling Numbers
ERIC Educational Resources Information Center
Griffiths, Martin
2013-01-01
We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…
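A companion closed form uses Stirling numbers of the second kind, E[X^m] = Σ_j S(m, j) · n_(j) · p^j with n_(j) the falling factorial; the article's recursion works instead with Stirling numbers of the first kind, so the following is an illustrative sketch of the same kind of computation, not the article's method:

```python
# Sketch: raw moments of X ~ Binomial(n, p) via the identity
# E[X^m] = sum_j S(m, j) * n_(j) * p^j, where S(m, j) are Stirling
# numbers of the second kind and n_(j) is the falling factorial.
# (The article itself develops a recursion using Stirling numbers of
# the first kind; this companion identity is for illustration.)
def stirling2(m, j):
    """Stirling number of the second kind, S(m, j), by the standard recurrence."""
    if m == j:
        return 1
    if j == 0 or j > m:
        return 0
    return j * stirling2(m - 1, j) + stirling2(m - 1, j - 1)

def falling_factorial(n, j):
    out = 1
    for i in range(j):
        out *= n - i
    return out

def binomial_raw_moment(m, n, p):
    """E[X^m] for X ~ Binomial(n, p)."""
    return sum(stirling2(m, j) * falling_factorial(n, j) * p ** j
               for j in range(m + 1))
```

For n = 10 and p = 0.5 this gives E[X] = 5 and E[X²] = 27.5, so the variance E[X²] − E[X]² = 2.5 matches np(1 − p).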
Monte Carlo Sampling in Fractal Landscapes
NASA Astrophysics Data System (ADS)
Leitão, Jorge C.; Lopes, J. M. Viana Parente; Altmann, Eduardo G.
2013-05-01
We design a random walk to explore fractal landscapes such as those describing chaotic transients in dynamical systems. We show that the random walk moves efficiently only when its step length depends on the height of the landscape via the largest Lyapunov exponent of the chaotic system. We propose a generalization of the Wang-Landau algorithm which constructs not only the density of states (transient time distribution) but also the correct step length. As a result, we obtain a flat-histogram Monte Carlo method which samples fractal landscapes in polynomial time, a dramatic improvement over the exponential scaling of traditional uniform-sampling methods. Our results are not limited by the dimensionality of the landscape and are confirmed numerically in chaotic systems with up to 30 dimensions.
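The flat-histogram idea can be illustrated on a toy system with a known density of states; the dice example and all parameters below are placeholders, not the chaotic landscapes of the paper:

```python
# Toy flat-histogram sketch of the Wang-Landau idea on a system whose
# density of states is known: the sum E of two dice (g(E) = 1..6..1 for
# E = 2..12). The example and all parameters are illustrative only.
import math
import random

def wang_landau(steps_per_stage=20000, stages=12):
    random.seed(1)
    ln_g = {E: 0.0 for E in range(2, 13)}    # running estimate of ln g(E)
    a, b = 1, 1                              # current dice configuration
    ln_f = 1.0                               # modification factor
    for _ in range(stages):
        for _ in range(steps_per_stage):
            # propose re-rolling one of the two dice
            if random.random() < 0.5:
                na, nb = random.randint(1, 6), b
            else:
                na, nb = a, random.randint(1, 6)
            # accept with min(1, g_old / g_new) to flatten the energy histogram
            if math.log(random.random()) < ln_g[a + b] - ln_g[na + nb]:
                a, b = na, nb
            ln_g[a + b] += ln_f
        ln_f /= 2.0                          # tighten the estimate each stage
    return ln_g

ln_g = wang_landau()
# g is estimated only up to a constant; E = 7 should carry the largest weight
```

The acceptance rule penalizes already-visited energies, which is what lets the walk reach rare states in polynomial rather than exponential time.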
Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems
NASA Astrophysics Data System (ADS)
Mahdi Alavi, S. M.; Saif, Mehrdad
2013-12-01
This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. By the assumption that the system response is incrementally bounded, two sufficient conditions are subsequently derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring the Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed that has been fabricated for performance evaluation of over-wireless-network estimation techniques under realistic radio channel conditions.
Coherent wave transmission in quasi-one-dimensional systems with Lévy disorder
NASA Astrophysics Data System (ADS)
Amanatidis, Ilias; Kleftogiannis, Ioannis; Falceto, Fernando; Gopar, Víctor A.
2017-12-01
We study the random fluctuations of the transmission in disordered quasi-one-dimensional systems such as disordered waveguides and/or quantum wires whose random configurations of disorder are characterized by density distributions with a long tail known as Lévy distributions. The presence of Lévy disorder leads to large fluctuations of the transmission and anomalous localization, in contrast to the standard exponential localization (Anderson localization). We calculate the complete distribution of the transmission fluctuations for different numbers of transmission channels in the presence and absence of time-reversal symmetry. Significant differences in the transmission statistics between disordered systems with Anderson and anomalous localization are revealed. The theoretical predictions are independently confirmed by tight-binding numerical simulations.
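Heavy-tailed disorder of this type can be generated by inverse-transform sampling of a Pareto distribution with tail exponent α < 1; the parameters below are illustrative, not from the paper:

```python
# Sketch: drawing heavy-tailed ("Levy-type") disorder strengths with a
# power-law tail P(x > t) ~ t^(-alpha), alpha < 1, by inverse transform.
# x_min and alpha are illustrative parameters, not values from the paper.
import random

def pareto_sample(u, alpha=0.5, x_min=1.0):
    """Invert the Pareto CDF F(x) = 1 - (x_min/x)^alpha at u in [0, 1)."""
    return x_min * (1.0 - u) ** (-1.0 / alpha)

random.seed(0)
draws = [pareto_sample(random.random()) for _ in range(10000)]
# with alpha < 1 the mean diverges: a few huge draws dominate the sum
share_of_largest = max(draws) / sum(draws)
```

The single largest draw carrying a finite fraction of the total is the hallmark of α < 1 tails, and is what produces the anomalously large transmission fluctuations relative to exponential (Anderson) localization.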
NASA Astrophysics Data System (ADS)
Dong, Siqun; Zhao, Dianli
2018-01-01
This paper studies the subcritical, near-critical and supercritical asymptotic behavior of a reversible random coagulation-fragmentation polymerization process as N → ∞, with the number of distinct ways to form a k-cluster from k units satisfying f(k) = (1 + o(1)) c r^(-k) e^(-k^α) k^(-β), where 0 < α < 1 and β > 0. When the cluster size is small, its distribution is proved to converge to the Gaussian distribution. For the medium clusters, the distribution converges to a Poisson distribution in the supercritical stage, and no large clusters exist in this stage. Furthermore, the largest length of polymers of size N is of order ln N in the subcritical stage when α ≤ 1/2.
A Random Variable Related to the Inversion Vector of a Partial Random Permutation
ERIC Educational Resources Information Center
Laghate, Kavita; Deshpande, M. N.
2005-01-01
In this article, we define the inversion vector of a permutation of the integers 1, 2,..., n. We set up a particular kind of permutation, called a partial random permutation. The sum of the elements of the inversion vector of such a permutation is a random variable of interest.
How to decompose arbitrary continuous-variable quantum operations.
Sefi, Seckin; van Loock, Peter
2011-10-21
We present a general, systematic, and efficient method for decomposing any given exponential operator of bosonic mode operators, describing an arbitrary multimode Hamiltonian evolution, into a set of universal unitary gates. Although our approach is mainly oriented towards continuous-variable quantum computation, it may be used more generally whenever quantum states are to be transformed deterministically, e.g., in quantum control, discrete-variable quantum computation, or Hamiltonian simulation. We illustrate our scheme by presenting decompositions for various nonlinear Hamiltonians including quartic Kerr interactions. Finally, we conclude with two potential experiments utilizing offline-prepared optical cubic states and homodyne detections, in which quantum information is processed optically or in an atomic memory using quadratic light-atom interactions. © 2011 American Physical Society
A Geometrical Framework for Covariance Matrices of Continuous and Categorical Variables
ERIC Educational Resources Information Center
Vernizzi, Graziano; Nakai, Miki
2015-01-01
It is well known that a categorical random variable can be represented geometrically by a simplex. Accordingly, several measures of association between categorical variables have been proposed and discussed in the literature. Moreover, the standard definitions of covariance and correlation coefficient for continuous random variables have been…
Quantum simulation of quantum field theory using continuous variables
Marshall, Kevin; Pooser, Raphael C.; Siopsis, George; ...
2015-12-14
Much progress has been made in the field of quantum computing using continuous variables over the last couple of years. This includes the generation of extremely large entangled cluster states (10,000 modes, in fact) as well as a fault tolerant architecture. This has led to the point that continuous-variable quantum computing can indeed be thought of as a viable alternative for universal quantum computing. With that in mind, we present a new algorithm for continuous-variable quantum computers which gives an exponential speedup over the best known classical methods. Specifically, this relates to efficiently calculating the scattering amplitudes in scalar bosonic quantum field theory, a problem that is known to be hard using a classical computer. Thus, we give an experimental implementation based on cluster states that is feasible with today's technology.
Shallow cumuli ensemble statistics for development of a stochastic parameterization
NASA Astrophysics Data System (ADS)
Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs
2014-05-01
According to a conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation model (LES) and a cloud tracking algorithm, followed by a conditional sampling of clouds at the cloud base level, to retrieve the information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory.
The stochastic model simulates a compound random process, with the number of convective elements drawn from a Poisson distribution and cloud properties sub-sampled from a generalized ensemble distribution. We study the role of the different cloud subtypes in a shallow convective ensemble and how the diverse cloud properties and cloud lifetimes affect the system macro-state. To what extent does the cloud-base mass flux distribution deviate from the simple Boltzmann distribution, and how does it affect the results from the stochastic model? Is the memory, provided by the finite lifetime of individual clouds, of importance for the ensemble statistics? We also test for the minimal information, given as input to the stochastic model, that is able to reproduce the ensemble mean statistics and the variability in a convective ensemble. An important property of the resulting distribution of the sub-grid convective states is its scale-adaptivity: the smaller the grid size, the broader the compound distribution of the sub-grid states.
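The compound random process described above can be sketched directly; the Poisson rate and the exponential mass-flux scale are placeholder values, and the generalized ensemble distribution of the study is replaced here by a plain exponential (Boltzmann) distribution for simplicity:

```python
# Sketch of a compound Poisson draw for the sub-grid cloud-base mass
# flux: Poisson number of clouds, exponential flux per cloud. The rate
# and mean-flux values are illustrative placeholders, not from the study.
import math
import random

def poisson_draw(lam, rng):
    """Knuth's multiplication method for a Poisson(lam) variate."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_subgrid_mass_flux(mean_clouds=50.0, mean_flux=1.0, rng=random):
    """Total cloud-base mass flux in one grid box: a compound Poisson draw."""
    n_clouds = poisson_draw(mean_clouds, rng)
    return sum(rng.expovariate(1.0 / mean_flux) for _ in range(n_clouds))

random.seed(0)
totals = [sample_subgrid_mass_flux() for _ in range(5000)]
mean_total = sum(totals) / len(totals)   # E[M] = mean_clouds * mean_flux = 50
```

Shrinking mean_clouds (fewer clouds per grid box, as for a finer grid) broadens the relative spread of the totals, which is the scale-adaptivity property noted in the abstract.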
NASA Astrophysics Data System (ADS)
Davis, A. B.; Xu, F.; Diner, D. J.
2017-12-01
Two perennial problems in applied theoretical and computational radiative transfer (RT) are: (1) the impact of unresolved spatial variability on large-scale fluxes (in climate models) or radiances (in remote sensing); and (2) efficient-yet-accurate estimation of broadband spectral integrals in radiant energy budget estimation as well as in remote sensing, in particular, of trace gases. Generalized RT (GRT) is a modification of classic RT in an optical medium with uniform extinction where Beer's exponential law for direct transmission is replaced by a monotonically decreasing function with a slower power-law decay. In a convenient parameterized version of GRT, mean extinction replaces the uniform value and just one new property is introduced. As a non-dimensional metric for the unresolved variability, we use the square of the mean extinction coefficient divided by its variance. This parameter is also the exponent of the power-law tail of the modified transmission law. This specific form of sub-exponential transmission has been explored for almost two decades in application to spatial variability in the presence of long-range correlations, much like in turbulent media such as clouds, with a focus on multiple scattering. It has also been proposed by Conley and Collins (JQSRT, 112, 1525-, 2011) to improve on the standard (weak-line) implementation of the correlated-k technique for efficient spectral integration. We have merged these two applications within a rigorous formulation of the combined problem, and solve the new integral RT equations in the single-scattering limit. The result is illustrated by addressing practical problems in multi-angle remote sensing of aerosols using the O2 A-band, an emerging methodology for passive profiling of coarse aerosols and clouds.
On the Existence of Step-To-Step Breakpoint Transitions in Accelerated Sprinting
McGhie, David; Danielsen, Jørgen; Sandbakk, Øyvind; Haugen, Thomas
2016-01-01
Accelerated running is characterised by a continuous change of kinematics from one step to the next. It has been argued that breakpoints in the step-to-step transitions may occur, and that these breakpoints are an essential characteristic of dynamics during accelerated running. We examined this notion by comparing a continuous exponential curve fit (indicating continuity, i.e., smooth transitions) with linear piecewise fitting (indicating a breakpoint). We recorded the kinematics of 24 well trained sprinters during a 25 m sprint run with start from competition starting blocks. Kinematic data were collected for 24 anatomical landmarks in 3D, and the location of the centre of mass (CoM) was calculated from this data set. The step-to-step development of seven variables (four related to CoM position, and ground contact time, aerial time and step length) was analysed by curve fitting. In most individual sprints (in total, 41 sprints were successfully recorded) no breakpoints were identified for the variables investigated. However, for the mean results (i.e., the mean curve for all athletes) breakpoints were identified for the development of vertical CoM position, angle of acceleration and distance between support surface and CoM. It must be noted that for these variables the exponential fit showed high correlations (r² > 0.99). No relationship was found between the occurrences of breakpoints for different variables as investigated using odds ratios (Mantel-Haenszel Chi-square statistic). It is concluded that although breakpoints regularly appear during accelerated running, they are not the rule and are thereby unlikely to be a fundamental characteristic, but more likely an expression of imperfect performance. PMID:27467387
NASA Astrophysics Data System (ADS)
Horikawa, Yo
2013-12-01
Transient patterns in a bistable ring of bidirectionally coupled sigmoidal neurons were studied. When the system had a pair of spatially uniform steady solutions, the instability of unstable spatially nonuniform steady solutions decreased exponentially with the number of neurons because of the symmetry of the system. As a result, transient spatially nonuniform patterns showed dynamical metastability: Their duration increased exponentially with the number of neurons and the duration of randomly generated patterns obeyed a power-law distribution. However, these metastable dynamical patterns were easily stabilized in the presence of small variations in coupling strength. Metastable rotating waves and their pinning in the presence of asymmetry in the direction of coupling and the disappearance of metastable dynamical patterns due to asymmetry in the output function of a neuron were also examined. Further, in a two-dimensional array of neurons with nearest-neighbor coupling, intrinsically one-dimensional patterns were dominant in transients, and self-excitation in these neurons affected the metastable dynamical patterns.
Mutant number distribution in an exponentially growing population
NASA Astrophysics Data System (ADS)
Keller, Peter; Antal, Tibor
2015-01-01
We present an explicit solution to a classic model of cell-population growth introduced by Luria and Delbrück (1943 Genetics 28 491-511) 70 years ago to study the emergence of mutations in bacterial populations. In this model a wild-type population is assumed to grow exponentially in a deterministic fashion. Proportional to the wild-type population size, mutants arrive randomly and initiate new sub-populations of mutants that grow stochastically according to a supercritical birth and death process. We give an exact expression for the generating function of the total number of mutants at a given wild-type population size. We present a simple expression for the probability of finding no mutants, and a recursion formula for the probability of finding a given number of mutants. In the ‘large population-small mutation’ limit we recover recent results of Kessler and Levine (2014 J. Stat. Phys. doi:10.1007/s10955-014-1143-3) for a fully stochastic version of the process.
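A hedged Monte Carlo sketch of the classic deterministic-growth variant (clones grow deterministically rather than by the paper's stochastic birth-death process) illustrates the model's structure and the no-mutant probability e^(-m):

```python
# Hedged Monte Carlo sketch in the spirit of the Lea-Coulson formulation:
# mutation events are Poisson(m); a clone founded when the wild-type had a
# fraction u of its final size grows deterministically to about 1/u mutants.
# (The paper's model uses stochastic birth-death clones; this simpler
# deterministic-growth variant is for illustration only.)
import math
import random

def poisson_draw(lam, rng):
    """Knuth's multiplication method for a Poisson(lam) variate."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mutant_count(m, rng=random):
    """One culture: total mutants under deterministic clone growth."""
    n_events = poisson_draw(m, rng)
    # founding fraction u ~ Uniform(0, 1); clone final size ~ floor(1/u)
    return sum(int(1.0 / rng.random()) for _ in range(n_events))

random.seed(0)
m = 1.0
counts = [mutant_count(m) for _ in range(20000)]
p_zero = sum(c == 0 for c in counts) / len(counts)   # theory: exp(-m)
```

The empirical fraction of cultures with no mutants matches exp(-m), mirroring the simple no-mutant expression mentioned in the abstract, while the clone sizes 1/u generate the characteristic heavy "jackpot" tail of the mutant-number distribution.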
Scaling analysis and instantons for thermally assisted tunneling and quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Jiang, Zhang; Smelyanskiy, Vadim N.; Isakov, Sergei V.; Boixo, Sergio; Mazzola, Guglielmo; Troyer, Matthias; Neven, Hartmut
2017-01-01
We develop an instantonic calculus to derive an analytical expression for the thermally assisted tunneling decay rate of a metastable state in a fully connected quantum spin model. The tunneling decay problem can be mapped onto the Kramers escape problem of a classical random dynamical field. This dynamical field is simulated efficiently by path-integral quantum Monte Carlo (QMC). We show analytically that the exponential scaling with the number of spins of the thermally assisted quantum tunneling rate and the escape rate of the QMC process are identical. We relate this effect to the existence of a dominant instantonic tunneling path. The instanton trajectory is described by nonlinear dynamical mean-field theory equations for a single-site magnetization vector, which we solve exactly. Finally, we derive scaling relations for the "spiky" barrier shape when the spin tunneling and QMC rates scale polynomially with the number of spins N while a purely classical over-the-barrier activation rate scales exponentially with N .
NASA Astrophysics Data System (ADS)
Bourne, S. J.; Oates, S. J.
2017-12-01
Measurements of the strains and earthquakes induced by fluid extraction from a subsurface reservoir reveal a transient, exponential-like increase in seismicity relative to the volume of fluids extracted. If the frictional strength of these reactivating faults is heterogeneously and randomly distributed, then progressive failures of the weakest fault patches account in a general manner for this initial exponential-like trend. Allowing for the observable elastic and geometric heterogeneity of the reservoir, the spatiotemporal evolution of induced seismicity over 5 years is predictable without significant bias using a statistical physics model of poroelastic reservoir deformations inducing extreme threshold frictional failures of previously inactive faults. This model is used to forecast the temporal and spatial probability density of earthquakes within the Groningen natural gas reservoir, conditional on future gas production plans. Probabilistic seismic hazard and risk assessments based on these forecasts inform the current gas production policy and building strengthening plans.
Pendulum Mass Affects the Measurement of Articular Friction Coefficient
Akelman, Matthew R.; Teeple, Erin; Machan, Jason T.; Crisco, Joseph J.; Jay, Gregory D.; Fleming, Braden C.
2012-01-01
Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton’s equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines if articular pendulum energy loss is indeed mass independent, and compares Stanton’s model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n = 4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton’s equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. PMID:23122223
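The exponential decay model favored by this study can be illustrated with a minimal log-linear least-squares fit. The synthetic data, function name, and damping value below are illustrative assumptions, not the study's code or measurements:

```python
import math

def fit_damping(ts, amps):
    # log-linear least squares: log A(t) = log A0 - c*t; return c
    ys = [math.log(a) for a in amps]
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    return -slope

# synthetic decaying peak amplitudes with damping coefficient 0.8
ts = [0.1 * i for i in range(50)]
amps = [10.0 * math.exp(-0.8 * t) for t in ts]
c = fit_damping(ts, amps)
```

Fitting in log space turns the exponential model into a straight line, so the damping coefficient is recovered as the negative slope.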
ERIC Educational Resources Information Center
Frees, Edward W.; Kim, Jee-Seon
2006-01-01
Multilevel models are proven tools in social research for modeling complex, hierarchical systems. In multilevel modeling, statistical inference is based largely on quantification of random variables. This paper distinguishes among three types of random variables in multilevel modeling--model disturbances, random coefficients, and future response…
Chess players' fame versus their merit.
Simkin, M V; Roychowdhury, V P
2015-12-12
We investigate a pool of international chess title holders born between 1901 and 1943. Using Elo ratings, we compute for every player his expected score in a game with a randomly selected player from the pool. We use this figure as the player's merit. We measure players' fame as the number of Google hits. The correlation between fame and merit is 0.38. At the same time, the correlation between the logarithm of fame and merit is 0.61. This suggests that fame grows exponentially with merit.
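The gap between the two correlations (0.38 for fame versus 0.61 for log fame) is what an exponential fame-merit law predicts. A sketch on synthetic data (the constants and noise model are illustrative assumptions):

```python
import math
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

rng = random.Random(1)
merit = [rng.random() for _ in range(300)]
# fame grows exponentially with merit, with multiplicative noise
fame = [math.exp(4.0 * m + rng.gauss(0.0, 0.5)) for m in merit]

r_lin = pearson(merit, fame)                            # weaker
r_log = pearson(merit, [math.log(f) for f in fame])     # stronger
```

When the true relationship is exponential, correlating merit with the logarithm of fame linearizes it, so r_log exceeds r_lin, mirroring the paper's 0.61 versus 0.38.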
Persistence of opinion in the Sznajd consensus model: computer simulation
NASA Astrophysics Data System (ADS)
Stauffer, D.; de Oliveira, P. M. C.
2002-12-01
The density of never-changed opinions during the Sznajd consensus-finding process decays with time t as 1/t^θ. We find θ ≈ 3/8 for a chain, compatible with the exact Ising result of Derrida et al. In higher dimensions, however, the exponent differs from the Ising θ. With simultaneous updating of sublattices instead of the usual random sequential updating, the number of persistent opinions decays roughly exponentially. Some of the simulations used multi-spin coding.
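A minimal persistence measurement can be sketched as follows, assuming the standard one-dimensional Sznajd rule in which an agreeing pair imposes its opinion on its two outer neighbors (the precise update rule is an assumption, not taken from the paper):

```python
import random

def sznajd_persistence(n=200, sweeps=200, seed=3):
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    changed = [False] * n          # track opinions that ever flipped
    for _ in range(sweeps * n):    # random sequential updating
        i = rng.randrange(n)
        j = (i + 1) % n
        if s[i] == s[j]:           # agreeing pair convinces its outer neighbors
            for k in ((i - 1) % n, (j + 1) % n):
                if s[k] != s[i]:
                    s[k] = s[i]
                    changed[k] = True
    # fraction of never-changed opinions (the persistence density)
    return sum(1 for c in changed if not c) / n

p = sznajd_persistence()
```

Measuring this fraction at successive times and fitting a power law 1/t^θ is how the decay exponent is estimated in practice.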
Listing All Maximal Cliques in Sparse Graphs in Near-optimal Time
2011-01-01
[Abstract not recovered: the extracted text is residue from a table of test graphs (Arabidopsis thaliana, Drosophila melanogaster, Homo sapiens, Schizosaccharomyces pombe, with vertex and edge counts) and from the reference list. One recoverable fragment notes that maximal cliques identify clusters of actors and may be used as features in exponential random graph models for statistical analysis of social networks.]
Lassota, P; Melamed, M R; Darzynkiewicz, Z
The binding sites for mitoxantrone (MIT), ametantrone (AMT), doxorubicin (DOX), actinomycin D (AMD) and ethidium bromide (EB) in nuclei from exponentially growing and differentiating human promyelocytic HL-60 and lymphocytic leukemic MOLT-4 cells were studied by gel electrophoresis of proteins selectively released during titration of these nuclei with the drugs. Each drug, at different drug:DNA binding ratios, produced a characteristic pattern of protein elution and/or retention. For example, in nuclei from exponentially growing HL-60 cells, MIT affected 44 nuclear proteins that were different from those affected by EB; of these, 29 were progressively released at increasing MIT:DNA ratios, 11 were transiently released (i.e., only at a low MIT:DNA ratio) and 4 were entrapped. Patterns of proteins displaced from nuclei of exponentially growing HL-60 cells differed from those of cells undergoing myeloid differentiation as well as from those of exponentially growing MOLT-4 cells. The first effects were seen at a binding density of approximately one drug molecule per 10-50 base pairs of DNA. The observed selective displacement of proteins may reflect drug-altered affinity of the binding sites for those proteins, for example due to a change of nucleic acid or protein conformation upon binding the ligand. The data show that the binding site(s) for each of the ligands studied is different and that the differences correlate with the variability in chemical structure between the ligands. The nature of the drug-affected proteins may provide clues regarding antitumor or cytotoxic mechanisms of drug action.
Not all nonnormal distributions are created equal: Improved theoretical and measurement precision.
Joo, Harry; Aguinis, Herman; Bradley, Kyle J
2017-07-01
We offer a four-category taxonomy of individual output distributions (i.e., distributions of cumulative results): (1) pure power law; (2) lognormal; (3) exponential tail (including exponential and power law with an exponential cutoff); and (4) symmetric or potentially symmetric (including normal, Poisson, and Weibull). The four categories are uniquely associated with mutually exclusive generative mechanisms: self-organized criticality, proportionate differentiation, incremental differentiation, and homogenization. We then introduce distribution pitting, a falsification-based method for comparing distributions to assess how well each one fits a given data set. In doing so, we also introduce decision rules to determine the likely dominant shape and generative mechanism among many that may operate concurrently. Next, we implement distribution pitting using 229 samples of individual output for several occupations (e.g., movie directors, writers, musicians, athletes, bank tellers, call center employees, grocery checkers, electrical fixture assemblers, and wirers). Results suggest that for 75% of our samples, exponential tail distributions and their generative mechanism (i.e., incremental differentiation) likely constitute the dominant distribution shape and explanation of nonnormally distributed individual output. This finding challenges past conclusions indicating the pervasiveness of other types of distributions and their generative mechanisms. Our results further contribute to theory by offering premises about the link between past and future individual output. For future research, our taxonomy and methodology can be used to pit distributions of other variables (e.g., organizational citizenship behaviors). Finally, we offer practical insights on how to increase overall individual output and produce more top performers.
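The core of distribution pitting, comparing maximum-likelihood fits of rival shapes on the same data, can be sketched for two of the taxonomy's shapes. The data here are synthetic, and a real application would pit all candidate shapes with decision rules on the fit differences:

```python
import math
import random

def loglik_exponential(xs):
    # exponential MLE: rate = 1/mean
    rate = len(xs) / sum(xs)
    return len(xs) * math.log(rate) - rate * sum(xs)

def loglik_lognormal(xs):
    # lognormal MLE: mu, sigma^2 from the log-transformed sample
    logs = [math.log(x) for x in xs]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((l - mu) ** 2 for l in logs) / n
    return (-sum(logs) - n * math.log(math.sqrt(var))
            - n / 2 * math.log(2 * math.pi) - n / 2)

rng = random.Random(7)
data = [math.exp(rng.gauss(0.0, 1.0)) for _ in range(500)]  # lognormal sample

ll_ln = loglik_lognormal(data)
ll_exp = loglik_exponential(data)
# the generating shape (lognormal) should win this pit
```

The shape with the higher maximized log-likelihood survives the pit; in practice one would also penalize parameter count (e.g., via information criteria) when the candidates differ in complexity.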
Geometrical effects on the electron residence time in semiconductor nano-particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koochi, Hakimeh; Ebrahimi, Fatemeh, E-mail: f-ebrahimi@birjand.ac.ir; Solar Energy Research Group, University of Birjand, Birjand
2014-09-07
We have used random walk (RW) numerical simulations to investigate the influence of geometry on the statistics of the electron residence time τ_r in a trap-limited diffusion process through semiconductor nano-particles. This is an important parameter in coarse-grained modeling of charge carrier transport in nano-structured semiconductor films. The traps have been distributed randomly on the surface (r² model) or throughout the whole particle (r³ model) with a specified density. The trap energies have been taken from an exponential distribution, and the trap release time is assumed to be a stochastic variable. We have carried out RW simulations to study the effect of coordination number, the spatial arrangement of the neighbors, and the size of the nano-particles on the statistics of τ_r. It has been observed that by increasing the coordination number n, the average electron residence time τ̄_r rapidly decreases to an asymptotic value. For a fixed coordination number n, the electron's mean residence time does not depend on the neighbors' spatial arrangement. In other words, τ̄_r is a porosity-dependent, local parameter which generally varies remarkably from site to site, unless we are dealing with highly ordered structures. We have also examined the effect of nano-particle size d on the statistical behavior of τ̄_r. Our simulations indicate that for a volume distribution of traps, τ̄_r scales as d². For a surface distribution of traps, τ̄_r increases almost linearly with d. This leads to the prediction of a linear dependence of the diffusion coefficient D on the particle size d in ordered structures, or in random structures above the critical concentration, which is in accordance with experimental observations.
Lopez-Alonso, Virginia; Liew, Sook-Lei; Fernández Del Olmo, Miguel; Cheeran, Binith; Sandrini, Marco; Abe, Mitsunari; Cohen, Leonardo G
2018-01-01
Non-invasive brain stimulation (NIBS) has been widely explored as a way to safely modulate brain activity and alter human performance for nearly three decades. Research using NIBS has grown exponentially within the last decade with promising results across a variety of clinical and healthy populations. However, recent work has shown high inter-individual variability and a lack of reproducibility of previous results. Here, we conducted a small preliminary study to explore the effects of three of the most commonly used excitatory NIBS paradigms over the primary motor cortex (M1) on motor learning (Sequential Visuomotor Isometric Pinch Force Tracking Task) and secondarily relate changes in motor learning to changes in cortical excitability (MEP amplitude and SICI). We compared anodal transcranial direct current stimulation (tDCS), paired associative stimulation (PAS 25 ), and intermittent theta burst stimulation (iTBS), along with a sham tDCS control condition. Stimulation was applied prior to motor learning. Participants ( n = 28) were randomized into one of the four groups and were trained on a skilled motor task. Motor learning was measured immediately after training (online), 1 day after training (consolidation), and 1 week after training (retention). We did not find consistent differential effects on motor learning or cortical excitability across groups. Within the boundaries of our small sample sizes, we then assessed effect sizes across the NIBS groups that could help power future studies. These results, which require replication with larger samples, are consistent with previous reports of small and variable effect sizes of these interventions on motor learning.
NASA Astrophysics Data System (ADS)
Siu-Siu, Guo; Qingxuan, Shi
2017-03-01
In this paper, single-degree-of-freedom (SDOF) systems subjected to combined Gaussian white noise and Gaussian/non-Gaussian colored noise excitations are investigated. By expressing the colored noise excitation as a second-order filtered white noise process and introducing the colored noise as an additional state variable, the equation of motion for the SDOF system under colored noise is transformed into that of a multi-degree-of-freedom (MDOF) system under white noise excitations, with four coupled first-order differential equations. As a consequence, the corresponding Fokker-Planck-Kolmogorov (FPK) equation governing the joint probability density function (PDF) of the state variables increases to four dimensions (4-D), and the solution procedure and computer program become much more sophisticated. The exponential-polynomial closure (EPC) method, widely applied to SDOF systems under white noise excitations, is developed and improved for systems under colored noise excitations and for solving the complex 4-D FPK equation. Monte Carlo simulation (MCS) is performed to test the approximate EPC solutions. Two examples with Gaussian and non-Gaussian colored noise excitations are considered, and the corresponding band-limited power spectral densities (PSDs) of the colored noise excitations are given separately. Numerical studies show that the developed EPC method provides relatively accurate estimates of the stationary probabilistic solutions, especially in the tail regions of the PDFs. Moreover, the mean up-crossing rate (MCR), a statistical parameter important for reliability and failure analysis, is taken into account. We hope the present work can provide insights into the investigation of structures under random loadings.
Periodicity and stability for variable-time impulsive neural networks.
Li, Hongfei; Li, Chuandong; Huang, Tingwen
2017-10-01
The paper considers a general neural network model with variable-time impulses. It is shown that each solution of the system intersects every discontinuity surface exactly once, via several new well-posed assumptions. Moreover, based on the comparison principle, this paper shows that neural networks with variable-time impulses can be reduced to the corresponding neural networks with fixed-time impulses under well-selected conditions. Meanwhile, the fixed-time impulsive systems can be regarded as the comparison systems of the variable-time impulsive neural networks. Furthermore, a series of sufficient criteria are derived to ensure the existence and global exponential stability of periodic solutions of variable-time impulsive neural networks, and to show that variable-time impulsive neural networks and the fixed-time ones share the same stability properties. The new criteria are established by applying Schaefer's fixed point theorem combined with inequality techniques. Finally, a numerical example is presented to show the effectiveness of the proposed results.
NASA Astrophysics Data System (ADS)
Ordóñez Cabrera, Manuel; Volodin, Andrei I.
2005-05-01
From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability in a Banach space setting.
Shehla, Romana; Khan, Athar Ali
2016-01-01
Models with a bathtub-shaped hazard function have been widely accepted in the fields of reliability and medicine and are particularly useful in reliability-related decision making and cost analysis. In this paper, the exponential power model, capable of assuming increasing as well as bathtub-shaped hazard rates, is studied. This article makes a Bayesian study of the model and simultaneously shows how posterior simulations based on Markov chain Monte Carlo algorithms can be straightforward and routine in R. The study is carried out for complete as well as censored data, under the assumption of weakly informative priors for the parameters. In addition, inferential interest focuses on the posterior distribution of non-linear functions of the parameters. The model has also been extended to include continuous explanatory variables, and the R code is well illustrated. Two real data sets are considered for illustrative purposes.
Early Changes in the Ultrastructure of Streptococcus faecalis After Amino Acid Starvation
Higgins, M. L.; Shockman, G. D.
1970-01-01
Thin sections of Streptococcus faecalis (ATCC 9790) starved of one essential amino acid (threonine or valine) initially show rapid increases in (i) cell wall thickness, (ii) the apparent size of the central nucleoid region, and (iii) mesosomal membranes. The most rapid increases in all three variables occurred during the first 1 to 2 hr of starvation. After this initial period, the rates progressively decreased over the 20-hr observation period. During threonine starvation, the mesosomal membrane that accumulated in the first hour was subsequently degraded and reached a level similar to that found in exponential-phase cells after 20 hr. With valine starvation, mesosomal membrane continued to slowly accumulate over the entire 20-hr observation period. The mesosomes of the starved cells retained the same “stalked-bag” morphology of those in exponential-phase cells. These cytological observations agree with previously published biochemical data on membrane lipid and wall content after starvation. PMID:4987306
Hamed, Kaveh Akbari; Gregg, Robert D
2017-07-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially and robustly stabilize periodic orbits for hybrid dynamical systems against possible uncertainties in discrete-time phases. The algorithm assumes a family of parameterized and decentralized nonlinear controllers to coordinate interconnected hybrid subsystems based on a common phasing variable. The exponential and H2 robust stabilization problems of periodic orbits are translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities. By investigating the properties of the Poincaré map, some sufficient conditions for the convergence of the iterative algorithm are presented. The power of the algorithm is finally demonstrated through designing a set of robust stabilizing local nonlinear controllers for walking of an underactuated 3D autonomous bipedal robot with 9 degrees of freedom, impact model uncertainties, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.
NASA Astrophysics Data System (ADS)
Iadanzaa, Carla; Rianna, Maura; Orlando, Dario; Ubertini, Lucio; Napolitano, Francesco
2013-10-01
The aim of the paper is the identification of rain events that trigger landslides, using an exponential method to separate stochastically independent events. This work is carried out as part of the definition of empirical rainfall thresholds for debris flows and shallow landslides. The study area is the Trento district, located in the northeast zone of an Alpine area. The work evaluates the factors that affect the variability in space and time of the critical duration at each rain gauge, defined as the minimum dry-period duration that separates two rainy periods that are stochastically independent.
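Operationally, once a critical duration is fixed, event separation amounts to splitting the rainfall series at dry spells of at least that length. A sketch under that assumption (the function and series are illustrative, not the paper's algorithm):

```python
def separate_events(rain, critical_dry):
    # split a rainfall series into events separated by dry spells of
    # at least `critical_dry` consecutive zero-rain steps
    events, current, dry = [], [], 0
    for r in rain:
        if r > 0:
            if dry >= critical_dry and current:
                events.append(current)
                current = []
            current.append(r)
            dry = 0
        else:
            dry += 1
    if current:
        events.append(current)
    return events

# a 1-step dry spell keeps an event together; 4 dry steps split it
events = separate_events([1, 2, 0, 3, 0, 0, 0, 0, 4, 1], critical_dry=4)
# → [[1, 2, 3], [4, 1]]
```

The exponential method then chooses the critical duration so that inter-event dry periods are consistent with a memoryless (exponential) waiting-time distribution, i.e., so the resulting events are stochastically independent.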
Stochastic Individual-Based Modeling of Bacterial Growth and Division Using Flow Cytometry.
García, Míriam R; Vázquez, José A; Teixeira, Isabel G; Alonso, Antonio A
2017-01-01
A realistic description of the variability in bacterial growth and division is critical to produce reliable predictions of safety risks along the food chain. Individual-based modeling of bacteria provides the theoretical framework to deal with this variability, but it requires information about the individual behavior of bacteria inside populations. In this work, we overcome this problem by estimating the individual behavior of bacteria from population statistics obtained with flow cytometry. For this objective, a stochastic individual-based modeling framework is defined based on standard assumptions during division and exponential growth. The unknown single-cell parameters required for running the individual-based simulations, such as the cell size growth rate, are estimated from the flow cytometry data. Instead of using the individual-based model directly, we make use of a modified Fokker-Planck equation. This single equation simulates the population statistics as a function of the unknown single-cell parameters. We test the validity of the approach by modeling the growth and division of Pediococcus acidilactici within the exponential phase. The estimates reveal the statistics of cell growth and division using only flow cytometry data from a single time point. From the relationship between the mother and daughter volumes, we also predict that P. acidilactici divides into two successive parallel planes.
Hinrichs, Ruth; Frank, Paulo Ricardo Ost; Vasconcellos, M A Z
2017-03-01
Modifications of cotton and polyester textiles due to shots fired at short range were analyzed with a variable pressure scanning electron microscope (VP-SEM). Different mechanisms of fiber rupture as a function of fiber type and shooting distance were detected, namely fusing, melting, scorching, and mechanical breakage. To estimate the firing distance, the approximately exponential decay of GSR coverage as a function of radial distance from the entrance hole was determined from image analysis, instead of relying on chemical analysis with EDX, which is problematic in the VP-SEM. A set of backscattered electron images, with sufficient magnification to discriminate micrometer-wide GSR particles, was acquired at different radial distances from the entrance hole. The atomic number contrast between the GSR particles and the organic fibers allowed us to find a robust procedure for segmenting the micrographs into binary images, in which the white pixel count was attributed to GSR coverage. The white pixel count followed an exponential decay with radial distance, and the reciprocal of the decay constant, obtained from least-squares fitting of the coverage data, showed a linear dependence on the shooting distance.
Variable δD values among major biochemicals in plants: Implications for environmental studies
NASA Astrophysics Data System (ADS)
DeBond, Nicole; Fogel, Marilyn L.; Morrill, Penny L.; Benner, Ronald; Bowden, Roxane; Ziegler, Susan
2013-06-01
The stable hydrogen isotope composition (δD) of major plant biochemicals is variable. We present δD values for cellulose, hemicelluloses and lignin of six plant species. The δD value for lignin is consistently lower than that of bulk tissue (by ˜50‰) and cellulose (by ˜100‰). We show that these differences can be used to assess the extent of degradation of organic matter from a single source. A decrease in the δDbulk of decomposing Spartina alterniflora roots and rhizomes from -72‰ to -87‰ was observed over 18 months, reflecting a relative enrichment of lignin content due to the preferential removal of polysaccharides from the detrital material. Similar changes in δ13C were observed previously during the degradation of these plant tissues. These findings indicate that the extent of organic matter degradation should be considered when using stable isotope approaches to assess possible sources of organic matter in soils and sediments. We show that the change in δDbulk of plant detritus is best described by an exponential equation, which is simpler than the multiple exponential decay (multi-G) model which best describes the change in δ13Cbulk of plant detritus. Therefore correcting for isotopic shifts caused by decomposition may be more easily accomplished using δD.
Physical Principle for Generation of Randomness
NASA Technical Reports Server (NTRS)
Zak, Michail
2009-01-01
A physical principle (more precisely, a principle that incorporates mathematical models used in physics) has been conceived as the basis of a method of generating randomness in Monte Carlo simulations. The principle eliminates the need for conventional random-number generators. The Monte Carlo simulation method is among the most powerful computational methods for solving high-dimensional problems in physics, chemistry, economics, and information processing. The Monte Carlo simulation method is especially effective for solving problems in which computational complexity increases exponentially with dimensionality. The main advantage of the Monte Carlo simulation method over other methods is that the demand on computational resources becomes independent of dimensionality. As augmented by the present principle, the Monte Carlo simulation method becomes an even more powerful computational method that is especially useful for solving problems associated with dynamics of fluids, planning, scheduling, and combinatorial optimization. The present principle is based on coupling of dynamical equations with the corresponding Liouville equation. The randomness is generated by non-Lipschitz instability of dynamics triggered and controlled by feedback from the Liouville equation. (In non-Lipschitz dynamics, the derivatives of solutions of the dynamical equations are not required to be bounded.)
NASA Astrophysics Data System (ADS)
Gatto, Riccardo
2017-12-01
This article considers the random walk over R^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
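The saddlepoint approximations themselves are not reproduced here, but the Monte Carlo side of the comparison the authors mention can be sketched for p = 3 with Exp(1) step lengths and a fixed number of steps; all numerical choices below are illustrative:

```python
import numpy as np

def pearson_walk_distances(n_steps, n_walks, p=3, seed=42):
    """Distances from the origin after n_steps of a random walk in R^p with
    uniformly distributed step directions and Exp(1) step lengths."""
    rng = np.random.default_rng(seed)
    # Uniform directions on the sphere: normalize standard Gaussian vectors.
    dirs = rng.standard_normal((n_walks, n_steps, p))
    dirs /= np.linalg.norm(dirs, axis=2, keepdims=True)
    lengths = rng.exponential(1.0, size=(n_walks, n_steps))
    positions = (dirs * lengths[..., None]).sum(axis=1)
    return np.linalg.norm(positions, axis=1)

d = pearson_walk_distances(n_steps=10, n_walks=50000)
print(round(float(d.mean()), 3))  # Monte Carlo estimate of E[R]
```

Since the steps are independent with zero mean and E[L²] = 2 for Exp(1) lengths, E[R²] = 2n exactly, which gives a quick sanity check on the simulated sample.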
Qualitatively Assessing Randomness in SVD Results
NASA Astrophysics Data System (ADS)
Lamb, K. W.; Miller, W. P.; Kalra, A.; Anderson, S.; Rodriguez, A.
2012-12-01
Singular Value Decomposition (SVD) is a powerful tool for identifying regions of significant co-variability between two spatially distributed datasets. SVD has been widely used in atmospheric research to define relationships between sea surface temperatures, geopotential height, wind, precipitation and streamflow data for myriad regions across the globe. A typical application for SVD is to identify leading climate drivers (as observed in the wind or pressure data) for a particular hydrologic response variable such as precipitation, streamflow, or soil moisture. One can also investigate the lagged relationship between a climate variable and the hydrologic response variable using SVD. When performing these studies it is important to limit the spatial bounds of the climate variable to reduce the chance of random co-variance relationships being identified. On the other hand, a climate region that is too small may ignore climate signals which have more than a statistical relationship to a hydrologic response variable. The proposed research seeks to identify a qualitative method of identifying random co-variability relationships between two data sets. The research identifies the heterogeneous correlation maps from several past results and compares these results with correlation maps produced using purely random and quasi-random climate data. The comparison identifies a methodology to determine if a particular region on a correlation map may be explained by a physical mechanism or is simply statistical chance.
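A minimal numpy sketch of the SVD co-variability analysis described above (synthetic fields stand in for the climate and hydrologic data; the shared-signal construction is an illustrative assumption, not the study's datasets):

```python
import numpy as np

rng = np.random.default_rng(0)
t, nx, ny = 200, 30, 40  # time steps; grid points of the two fields

# Synthetic fields sharing one coupled mode plus independent noise.
signal = rng.standard_normal(t)
X = np.outer(signal, rng.standard_normal(nx)) + rng.standard_normal((t, nx))
Y = np.outer(signal, rng.standard_normal(ny)) + rng.standard_normal((t, ny))

# Remove time means, form the cross-covariance matrix, and decompose it.
Xa = X - X.mean(axis=0)
Ya = Y - Y.mean(axis=0)
C = Xa.T @ Ya / (t - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)

# Squared covariance fraction of the leading mode: the share of
# co-variability it explains.
scf = s[0] ** 2 / np.sum(s ** 2)
print(round(float(scf), 2))
```

On purely random fields the leading mode's squared covariance fraction would be far smaller, which is exactly the kind of random-data baseline the proposed qualitative comparison relies on.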
Quantum mechanics of conformally and minimally coupled Friedmann-Robertson-Walker cosmology
NASA Astrophysics Data System (ADS)
Kim, Sang Pyo
1992-10-01
The expansion method by a time-dependent basis of the eigenfunctions for the space-coordinate-dependent sub-Hamiltonian is one of the most natural frameworks for quantum systems, relativistic as well as nonrelativistic. The complete set of wave functions is found in the product integral formulation, whose constants of integration are fixed by Cauchy initial data. The wave functions for the Friedmann-Robertson-Walker (FRW) cosmology conformally and minimally coupled to a scalar field with a power-law potential or a polynomial potential are expanded in terms of the eigenfunctions of the scalar field sub-Hamiltonian part. The resultant gravitational field part, which is an "intrinsic" timelike variable-dependent matrix-valued differential equation, is solved again in the product integral formulation. There are classically allowed regions for the "intrinsic" timelike variable depending on the scalar field quantum numbers, and these regions increase accordingly as the quantum numbers increase. For a fixed large three-geometry the wave functions corresponding to the low excited (small quantum number) states of the scalar field are exponentially damped or diverging and the wave functions corresponding to the high excited (large quantum number) states are still oscillatory but become eventually exponential as the three-geometry becomes larger. Furthermore, a proposal is advanced that the wave functions exponentially damped for a large three-geometry may be interpreted as "tunneling out" wave functions into, and the wave functions exponentially diverging as "tunneling in" from, different universes with the same or different topologies, the former being interpreted as the recently proposed Hawking-Page wormhole wave functions.
It is observed that there are complex as well as Euclidean actions depending on the quantum numbers of the scalar field part outside the classically allowed region both of the gravitational and scalar fields, suggesting the usefulness of complex geometry and complex trajectories. From the most general wave functions for the FRW cosmology conformally coupled to scalar field, the boundary conditions for the wormhole wave functions are modified so that the modulus of wave functions, instead of the wave functions themselves, should be exponentially damped for a large three-geometry and be regular up to some negative power of the three-geometry as the three-geometry collapses. The wave functions for the FRW cosmology minimally coupled to an inhomogeneous scalar field are similarly found in the product integral formulation. The role of a large number of the inhomogeneous modes of the scalar field is not only to increase the classically allowed regions for the gravitational part but also to provide a mechanism of the decoherence of quantum interferences between the different sizes of the universe.
Kinetic market models with single commodity having price fluctuations
NASA Astrophysics Data System (ADS)
Chatterjee, A.; Chakrabarti, B. K.
2006-12-01
We study here numerically the behavior of an ideal-gas-like model of markets having only one non-consumable commodity. We investigate the behavior of the steady-state distributions of money, commodity and total wealth, as the dynamics of trading or exchange of money and commodity proceeds, with local (in time) fluctuations in the price of the commodity. These distributions are studied in markets with agents having uniform and random saving factors. The self-organizing features in money distribution are similar to the cases without any commodity (or with consumable commodities), while the commodity distribution shows an exponential decay. The wealth distribution shows interesting behavior: a gamma-like distribution for uniform saving propensity, and the same power-law tail as that of the money distribution for a market with agents having random saving propensities.
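A minimal simulation of the underlying money-exchange dynamics with a uniform saving propensity conveys the model class (the commodity and its price fluctuations are omitted; parameter values are illustrative):

```python
import numpy as np

def simulate_market(n_agents=500, n_steps=200000, lam=0.5, seed=1):
    """Kinetic money-exchange model: in each pairwise trade both agents keep a
    fraction lam of their money and the rest is randomly re-shared, so total
    money is conserved (a CC-type saving-propensity model)."""
    rng = np.random.default_rng(seed)
    money = np.ones(n_agents)  # everyone starts with one unit of money
    for _ in range(n_steps):
        i, j = rng.integers(0, n_agents, size=2)
        if i == j:
            continue
        pool = (1.0 - lam) * (money[i] + money[j])
        eps = rng.random()
        money[i] = lam * money[i] + eps * pool
        money[j] = lam * money[j] + (1.0 - eps) * pool
    return money

m = simulate_market()
print(round(float(m.mean()), 3), round(float(m.var()), 3))
```

Every trade conserves the pair's total money, so the mean stays exactly at its initial value; λ = 0 recovers the exponential (Gibbs) money distribution, while λ > 0 yields the gamma-like shape noted in the abstract.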
Self-excitation of a nonlinear scalar field in a random medium
Zeldovich, Ya. B.; Molchanov, S. A.; Ruzmaikin, A. A.; Sokoloff, D. D.
1987-01-01
We discuss the evolution in time of a scalar field under the influence of a random potential and diffusion. The cases of a short-correlation in time and of stationary potentials are considered. In a linear approximation and for sufficiently weak diffusion, the statistical moments of the field grow exponentially in time at growth rates that progressively increase with the order of the moment; this indicates the intermittent nature of the field. Nonlinearity halts this growth and in some cases can destroy the intermittency. However, in many nonlinear situations the intermittency is preserved: high, persistent peaks of the field exist against the background of a smooth field distribution. These widely spaced peaks may make a major contribution to the average characteristics of the field. PMID:16593872
Droplet localization in the random XXZ model and its manifestations
NASA Astrophysics Data System (ADS)
Elgart, A.; Klein, A.; Stolz, G.
2018-01-01
We examine many-body localization properties for the eigenstates that lie in the droplet sector of the random-field spin-1/2 XXZ chain. These states satisfy a basic single cluster localization property (SCLP), derived in Elgart et al (2018 J. Funct. Anal. (in press)). This leads to many consequences, including dynamical exponential clustering, non-spreading of information under the time evolution, and a zero velocity Lieb-Robinson bound. Since SCLP is only applicable to the droplet sector, our definitions and proofs do not rely on knowledge of the spectral and dynamical characteristics of the model outside this regime. Rather, to allow for a possible mobility transition, we adapt the notion of restricting the Hamiltonian to an energy window from the single particle setting to the many body context.
NASA Astrophysics Data System (ADS)
Tsao, Shih-Ming; Lai, Ji-Ching; Horng, Horng-Er; Liu, Tu-Chen; Hong, Chin-Yih
2017-04-01
Aptamers are oligonucleotides that can bind to specific target molecules. Most aptamers are generated using random libraries in the standard systematic evolution of ligands by exponential enrichment (SELEX). Each random library contains oligonucleotides with a randomized central region and two fixed primer regions at both ends. The fixed primer regions are necessary for amplifying target-bound sequences by PCR. However, these extra sequences may cause non-specific binding, which can interfere with good binding by the random sequences. Magnetic-Assisted Rapid Aptamer Selection (MARAS) is a newly developed protocol for generating single-strand DNA aptamers. No repeated selection cycles are required in the protocol. This study proposes and demonstrates a method to isolate aptamers for C-reactive protein (CRP) from a randomized ssDNA library containing no fixed sequences at the 5′ and 3′ termini using the MARAS platform. Furthermore, the isolated primer-free aptamer was sequenced, and its binding affinity for CRP was analyzed. The specificity of the obtained aptamer was validated using blind serum samples. The result was consistent with monoclonal antibody-based nephelometry analysis, which indicated that the primer-free aptamer has high specificity toward its target. MARAS is a feasible platform for efficiently generating primer-free aptamers for clinical diagnoses.
Subcritical Multiplicative Chaos for Regularized Counting Statistics from Random Matrix Theory
NASA Astrophysics Data System (ADS)
Lambert, Gaultier; Ostrovsky, Dmitry; Simm, Nick
2018-05-01
For an N × N Haar distributed random unitary matrix U_N, we consider the random field defined by counting the number of eigenvalues of U_N in a mesoscopic arc centered at the point u on the unit circle. We prove that after regularizing at a small scale ε_N > 0, the renormalized exponential of this field converges as N → ∞ to a Gaussian multiplicative chaos measure in the whole subcritical phase. We discuss implications of this result for obtaining a lower bound on the maximum of the field. We also show that the moments of the total mass converge to a Selberg-like integral and by taking a further limit as the size of the arc diverges, we establish part of the conjectures in Ostrovsky (Nonlinearity 29(2):426-464, 2016). By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. Our approach to the L^1-phase is based on a generalization of the construction in Berestycki (Electron Commun Probab 22(27):12, 2017) to random fields which are only asymptotically Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.
Universal Quantum Computing with Arbitrary Continuous-Variable Encoding.
Lau, Hoi-Kwan; Plenio, Martin B
2016-09-02
Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and swap tests. Our formalism inherits the advantages that the quantum information is decoupled from collective noise, and logical qubits with different encodings can be brought to interact without decoding. We also propose a possible implementation of the required operations by using interactions that are available in a variety of continuous-variable systems. Our work separates the "hardware" problem of engineering quantum-computing-universal interactions, from the "software" problem of designing encodings for specific purposes. The development of quantum computer architecture could hence be simplified.
Universal Quantum Computing with Arbitrary Continuous-Variable Encoding
NASA Astrophysics Data System (ADS)
Lau, Hoi-Kwan; Plenio, Martin B.
2016-09-01
Implementing a qubit quantum computer in continuous-variable systems conventionally requires the engineering of specific interactions according to the encoding basis states. In this work, we present a unified formalism to conduct universal quantum computation with a fixed set of operations but arbitrary encoding. By storing a qubit in the parity of two or four qumodes, all computing processes can be implemented by basis state preparations, continuous-variable exponential-swap operations, and swap tests. Our formalism inherits the advantages that the quantum information is decoupled from collective noise, and logical qubits with different encodings can be brought to interact without decoding. We also propose a possible implementation of the required operations by using interactions that are available in a variety of continuous-variable systems. Our work separates the "hardware" problem of engineering quantum-computing-universal interactions, from the "software" problem of designing encodings for specific purposes. The development of quantum computer architecture could hence be simplified.
NASA Astrophysics Data System (ADS)
Das, Siddhartha; Siopsis, George; Weedbrook, Christian
2018-02-01
With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers, under certain assumptions regarding the distribution of data and the availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the computation time can potentially be reduced exponentially. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method for nonsparse low-rank matrices, which forms an important subroutine in our Gaussian process regression algorithm.
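For orientation, the classical Gaussian process regression that the quantum algorithm accelerates can be sketched in a few lines (RBF kernel with fixed hyperparameters; all values are illustrative):

```python
import numpy as np

def gp_regress(X_train, y_train, X_test, length_scale=1.0, noise=1e-2):
    """Minimal 1-D Gaussian process regression with an RBF kernel:
    posterior mean and pointwise standard deviation at the test inputs."""
    def kernel(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / length_scale ** 2)

    K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = kernel(X_test, X_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    cov = kernel(X_test, X_test) - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

X = np.linspace(0.0, 2.0 * np.pi, 20)
y = np.sin(X)
mu, sd = gp_regress(X, y, np.array([np.pi / 2]))
print(round(float(mu[0]), 2))  # close to sin(pi/2) = 1
```

The classical cost is dominated by the O(n³) linear solves against the n × n kernel matrix; that linear-algebra bottleneck is the step the quantum subroutine targets.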
Strong feedback limit of the Goodwin circadian oscillator
NASA Astrophysics Data System (ADS)
Woller, Aurore; Gonze, Didier; Erneux, Thomas
2013-03-01
The three-variable Goodwin model constitutes a prototypical oscillator based on a negative feedback loop. It was used as a minimal model for circadian oscillations. Other core models for circadian clocks are variants of the Goodwin model. The Goodwin oscillator also appears in many studies of coupled oscillator networks because of its relative simplicity compared to other biophysical models involving a large number of variables and parameters. Because the synchronization properties of Goodwin oscillators still remain difficult to explore mathematically, further simplifications of the Goodwin model have been sought. In this paper, we investigate the strong negative feedback limit of Goodwin equations by using asymptotic techniques. We find that Goodwin oscillations approach a sequence of decaying exponentials that can be described in terms of a single-variable leaky integrated-and-fire model.
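A sketch of the three-variable Goodwin model fixes the notation: X is produced under Hill-type repression by Z, and X → Y → Z is a linear cascade. The parameter values below are illustrative choices that place the system in its oscillatory regime, not those of the paper:

```python
import numpy as np

def goodwin(a=1000.0, n=10.0, t_end=200.0, dt=0.005):
    """Forward-Euler integration of the Goodwin oscillator:
        dX/dt = a / (1 + Z^n) - X,   dY/dt = X - Y,   dZ/dt = Y - Z.
    With unit degradation rates, sustained oscillations require the loop gain
    at steady state to exceed 8 (the secant condition); a large production
    rate a combined with Hill coefficient n = 10 satisfies this."""
    x = y = z = 0.0
    xs = []
    for _ in range(int(t_end / dt)):
        dx = a / (1.0 + z ** n) - x
        dy = x - y
        dz = y - z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x)
    return np.array(xs)

x = goodwin()
tail = x[-10000:]  # last 50 time units, after transients die out
print(round(float(tail.min()), 2), round(float(tail.max()), 2))
```

In the strong-feedback limit studied in the paper, these oscillations sharpen into the sequence of decaying exponentials of a leaky integrate-and-fire description.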
A FORTRAN program for multivariate survival analysis on the personal computer.
Mulder, P G
1988-01-01
In this paper a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing-risks situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include the variable time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained by the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.
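The Newton-Raphson fit of a log-linear failure rate can be illustrated in the simplest case, a constant hazard exp(β) with right censoring (a hypothetical one-parameter sketch of the program's approach, not its FORTRAN code):

```python
import math
import random

def fit_exponential_hazard(times, events, n_iter=25):
    """Newton-Raphson MLE for a constant hazard h = exp(beta) under right
    censoring. Log-likelihood: l(beta) = D*beta - exp(beta)*T, where D is the
    number of observed events and T the total time at risk."""
    D = sum(events)
    T = sum(times)
    beta = 0.0
    for _ in range(n_iter):
        grad = D - math.exp(beta) * T
        hess = -math.exp(beta) * T
        beta -= grad / hess
    return math.exp(beta)  # fitted hazard rate

random.seed(3)
true_rate, censor_at = 0.5, 3.0
raw = [random.expovariate(true_rate) for _ in range(5000)]
times = [min(t, censor_at) for t in raw]
events = [1 if t < censor_at else 0 for t in raw]
rate = fit_exponential_hazard(times, events)
print(round(rate, 2))
```

In this one-parameter case the score equation has the closed-form root exp(β) = D/T (events over total exposure), which makes the iteration easy to check; with covariates the same Newton-Raphson scheme applies to the full coefficient vector.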
Leider, Jonathon P; Castrucci, Brian C; Harris, Jenine K; Hearne, Shelley
2015-08-06
The relationship between policy networks and policy development among local health departments (LHDs) is a growing area of interest to public health practitioners and researchers alike. In this study, we examine policy activity and ties between public health leadership across large urban health departments. This study uses data from a national profile of local health departments as well as responses from a survey sent to three staff members (local health official, chief of policy, chief science officer) in each of 16 urban health departments in the United States. Network questions related to frequency of contact with health department personnel in other cities. Using exponential random graph models, network density and centrality were examined, as were patterns of communication among those working on several policy areas using exponential random graph models. All 16 LHDs were active in communicating about chronic disease as well as about use of alcohol, tobacco, and other drugs (ATOD). Connectedness was highest among local health officials (density = .55), and slightly lower for chief science officers (d = .33) and chiefs of policy (d = .29). After accounting for organizational characteristics, policy homophily (i.e., when two network members match on a single characteristic) and tenure were the most significant predictors of formation of network ties. Networking across health departments has the potential for accelerating the adoption of public health policies. This study suggests similar policy interests and formation of connections among senior leadership can potentially drive greater connectedness among other staff.
Leider, Jonathon P.; Castrucci, Brian C.; Harris, Jenine K.; Hearne, Shelley
2015-01-01
Background: The relationship between policy networks and policy development among local health departments (LHDs) is a growing area of interest to public health practitioners and researchers alike. In this study, we examine policy activity and ties between public health leadership across large urban health departments. Methods: This study uses data from a national profile of local health departments as well as responses from a survey sent to three staff members (local health official, chief of policy, chief science officer) in each of 16 urban health departments in the United States. Network questions related to frequency of contact with health department personnel in other cities. Using exponential random graph models, network density and centrality were examined, as were patterns of communication among those working on several policy areas using exponential random graph models. Results: All 16 LHDs were active in communicating about chronic disease as well as about use of alcohol, tobacco, and other drugs (ATOD). Connectedness was highest among local health officials (density = .55), and slightly lower for chief science officers (d = .33) and chiefs of policy (d = .29). After accounting for organizational characteristics, policy homophily (i.e., when two network members match on a single characteristic) and tenure were the most significant predictors of formation of network ties. Conclusion: Networking across health departments has the potential for accelerating the adoption of public health policies. This study suggests similar policy interests and formation of connections among senior leadership can potentially drive greater connectedness among other staff. PMID:26258784
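While the study fits full exponential random graph models, the two simplest quantities it reports, density and homophily, can be computed directly; the 16-node directed network below is synthetic, generated with a built-in homophily effect purely for illustration:

```python
import itertools
import random

def density(nodes, edges):
    """Density of a directed network: observed ties / possible ties."""
    possible = len(nodes) * (len(nodes) - 1)
    return len(edges) / possible

# Hypothetical departments with a policy-interest attribute (illustrative).
random.seed(7)
nodes = list(range(16))
interest = {n: random.choice(["chronic", "ATOD"]) for n in nodes}

# Generate ties with homophily: same-interest pairs connect more often.
edges = set()
for i, j in itertools.permutations(nodes, 2):
    p = 0.6 if interest[i] == interest[j] else 0.2
    if random.random() < p:
        edges.add((i, j))

d = density(nodes, edges)
same = [1 for (i, j) in edges if interest[i] == interest[j]]
print(round(d, 2), round(len(same) / len(edges), 2))
```

An ERGM estimates the homophily effect jointly with other tie-formation terms rather than reading it off raw shares, but the raw same-interest share already shows the direction of the effect.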
A method of examining the structure and topological properties of public-transport networks
NASA Astrophysics Data System (ADS)
Dimitrov, Stavri Dimitri; Ceder, Avishai (Avi)
2016-06-01
This work presents a new method of examining the structure of public-transport networks (PTNs) and analyzes their topological properties through a combination of computer programming, statistical data and large-network analyses. In order to automate the extraction, processing and exporting of data, a software program was developed to extract the needed data from the General Transit Feed Specification, thus overcoming difficulties in accessing and collecting data. The proposed method was applied to a real-life PTN in Auckland, New Zealand, with the purpose of examining whether it showed characteristics of scale-free networks and exhibited features of "small-world" networks. As a result, new regression equations were derived that analytically describe the observed, strong, non-linear relationships among the probabilities of randomly chosen stops in the PTN being serviced by a given number of routes. The established dependence is best fitted by an exponential rather than a power-law function, showing that the PTN examined is neither random nor scale-free, but a mixture of the two. This finding explains the presence of hubs that are not typical of exponential networks and simultaneously not highly connected to the other nodes, as is the case with scale-free networks. On the other hand, the observed values of the topological properties of the network show that although it is highly clustered, owing to its representation as a directed graph, it differs slightly from "small-world" networks, which are characterized by strong clustering and a short average path length.
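The exponential-versus-power-law comparison can be sketched by fitting both forms to the empirical survival function of routes-per-stop counts; the geometric synthetic data below (which has an exponential-type tail) stands in for the Auckland counts:

```python
import numpy as np

def fit_tail(counts):
    """Compare exponential vs power-law fits to the empirical survival
    function P(R >= r) of routes-per-stop counts, via least-squares lines in
    semi-log and log-log coordinates; returns the two residual sums."""
    vals = np.asarray(counts)
    r = np.arange(1, vals.max() + 1)
    surv = np.array([(vals >= k).mean() for k in r])
    # Exponential tail: log P linear in r.  Power-law tail: log P linear in log r.
    exp_resid = np.polyfit(r, np.log(surv), 1, full=True)[1][0]
    pow_resid = np.polyfit(np.log(r), np.log(surv), 1, full=True)[1][0]
    return float(exp_resid), float(pow_resid)

rng = np.random.default_rng(5)
counts = rng.geometric(0.3, size=2000)  # geometric => exponential-type tail
e, p = fit_tail(counts)
print(e < p)  # the exponential form fits these data better
```

Comparing straight-line residuals in semi-log versus log-log coordinates is a crude but common first diagnostic; the paper's regression equations refine this kind of comparison.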
Stochastic modelling of wall stresses in abdominal aortic aneurysms treated by a gene therapy.
Mohand-Kaci, Faïza; Ouni, Anissa Eddhahak; Dai, Jianping; Allaire, Eric; Zidi, Mustapha
2012-01-01
A stochastic mechanical model using the membrane theory was used to simulate the in vivo mechanical behaviour of abdominal aortic aneurysms (AAAs) in order to compute the wall stresses after stabilisation by gene therapy. For that purpose, both the length and diameter of AAAs in rats were measured during their expansion. Four groups of animals, controls and animals treated by an endovascular gene therapy for 3 or 28 days, were included. The mechanical problem was solved analytically using the geometric parameters and assuming a 'parabolic-exponential' aneurysm shape. When compared to controls, stress variations in the wall of AAAs decreased for arteries treated for 28 days, while they were nearly constant at day 3. The measured geometric parameters of AAAs were then investigated using probability density functions (pdf) attributed to each random variable. Different trials were used to define a reliable confidence region in which the probability of a realisation is 99%. The results demonstrated that the error in the estimation of the stresses can be greater than 28% when parameter uncertainties are not considered in the modelling. The relevance of the proposed approach for the study of AAA growth may be studied further and extended to other treatments aimed at stabilising AAAs, using biotherapies and pharmacological approaches.
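The effect of geometric-parameter uncertainty on a stress estimate can be conveyed by a Monte Carlo propagation through a thin-wall membrane formula; the Laplace-law stand-in and all distributions below are illustrative assumptions, not the paper's parabolic-exponential membrane model or rat data:

```python
import random
import statistics

def hoop_stress(pressure, radius, thickness):
    """Thin-walled cylinder (Laplace-law) stress, s = p * r / t, in Pa.
    A stand-in for the paper's membrane solution, for illustration only."""
    return pressure * radius / thickness

random.seed(4)
pressure = 13000.0  # ~100 mmHg in Pa, hypothetical
stresses = []
for _ in range(20000):
    radius = random.gauss(6.0e-3, 0.5e-3)     # m, hypothetical spread
    thickness = random.gauss(2.0e-4, 2.0e-5)  # m, hypothetical spread
    stresses.append(hoop_stress(pressure, radius, thickness))

stresses.sort()
mean = statistics.fmean(stresses)
lo, hi = stresses[100], stresses[-101]  # central ~99% region
print(round(mean / 1e3, 1), round((hi - lo) / mean, 2))
```

The width of the 99% region relative to the mean illustrates why ignoring parameter uncertainty, and using only nominal geometry, can misestimate stresses substantially.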
Quantum Computing in Fock Space Systems
NASA Astrophysics Data System (ADS)
Berezin, Alexander A.
1997-04-01
A Fock space system (FSS) has an unfixed number (N) of particles and/or degrees of freedom. In quantum computing (QC), the main requirement is the sustainability of coherent Q-superpositions. This is normally favoured by a low-noise environment. The high-excitation/high-temperature (T) limit is hence discarded as unfeasible for QC. Conversely, if N is itself a quantized variable, the dimensionality of the Hilbert basis for qubits may increase faster (say, N-exponentially) than thermal noise (likely, in powers of N and T). Hence coherency may win over T-randomization. For this type of QC, the speed (S) of factorization of long integers (with D digits) may increase with D (for 'ordinary' QC, speed polynomially decreases with D). This (apparent) paradox rests on non-monotonic bijectivity (cf. Georg Cantor's diagonal counting of rational numbers). This brings the entire aleph-null structurality ("Babylonian Library" of the infinite informational content of the integer field) into the superposition determining the state of a quantum analogue of the Turing machine head. The structure of integer infinitude (e.g. the distribution of primes) results in a direct "Platonic pressure" resembling the semi-virtual Casimir effect (pressure of cut-off vibrational modes). This "effect", the embodiment of the Pythagorean "Number is everything", renders the Gödelian barrier arbitrarily thin, and hence FSS-based QC can in principle be unlimitedly efficient (e.g. D/S may tend to zero when D tends to infinity).
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Nikpour, Ahmad
2013-09-01
In this research, we propose two different methods to solve the coupled Klein-Gordon-Zakharov (KGZ) equations: the Differential Quadrature (DQ) and Global Radial Basis Functions (GRBFs) methods. In the DQ method, the derivative of a function at a point is directly approximated by a linear combination of all functional values in the global domain. The principal task in this method is the determination of the weight coefficients. We use two ways of obtaining these coefficients: cosine expansion (CDQ) and radial basis functions (RBFs-DQ); the former is a mesh-based method, while the latter belongs to the class of meshless methods. Unlike the DQ method, the GRBF method directly substitutes the expression of the function approximation by RBFs into the partial differential equation. The main problem in the GRBFs method is the ill-conditioning of the interpolation matrix. To avoid this problem, we study the bases introduced in Pazouki and Schaback (2011) [44]. Some examples are presented to compare the accuracy and ease of implementation of the proposed methods. In the numerical examples, we concentrate on the Inverse Multiquadric (IMQ) and second-order Thin Plate Spline (TPS) radial basis functions. Variable shape parameter strategies (exponential and random) are applied to the IMQ function, and the results are compared with those for a constant shape parameter.
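The flavor of the IMQ basis with a variable shape parameter can be conveyed by a 1-D interpolation sketch (the paper solves the KGZ system; the target function, point counts, and shape ranges here are illustrative assumptions):

```python
import numpy as np

def imq_interpolate(x, y, x_eval, shapes):
    """1-D interpolation with inverse multiquadric (IMQ) basis functions
    phi_j(t) = 1 / sqrt(1 + (c_j * (t - x_j))^2), one shape c_j per center.
    With variable shapes the interpolation matrix is nonsymmetric; it is
    generically, though not provably, nonsingular."""
    A = 1.0 / np.sqrt(1.0 + (shapes[None, :] * (x[:, None] - x[None, :])) ** 2)
    coeffs = np.linalg.solve(A, y)
    B = 1.0 / np.sqrt(1.0 + (shapes[None, :] * (x_eval[:, None] - x[None, :])) ** 2)
    return B @ coeffs

x = np.linspace(-1.0, 1.0, 25)   # centers (= data points)
y = np.exp(-4.0 * x ** 2)        # smooth test function
x_eval = np.linspace(-1.0, 1.0, 101)
exact = np.exp(-4.0 * x_eval ** 2)

const = imq_interpolate(x, y, x_eval, np.full(x.size, 3.0))
rng = np.random.default_rng(2)
variable = imq_interpolate(x, y, x_eval, rng.uniform(2.0, 4.0, x.size))

print(float(np.abs(const - exact).max()), float(np.abs(variable - exact).max()))
```

Randomizing the shape parameters perturbs the spectrum of the interpolation matrix, which is one motivation for variable-shape strategies when the constant-shape matrix is badly conditioned.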
NASA Astrophysics Data System (ADS)
Chen, Cheng; Xu, Weijie; Guo, Tong; Chen, Kai
2017-10-01
Uncertainties in structure properties can result in different responses in hybrid simulations. Quantification of the effect of these uncertainties would enable researchers to estimate the variances of structural responses observed from experiments. This poses challenges for real-time hybrid simulation (RTHS) due to the existence of actuator delay. Polynomial chaos expansion (PCE) projects the model outputs on a basis of orthogonal stochastic polynomials to account for influences of model uncertainties. In this paper, PCE is utilized to evaluate effect of actuator delay on the maximum displacement from real-time hybrid simulation of a single degree of freedom (SDOF) structure when accounting for uncertainties in structural properties. The PCE is first applied for RTHS without delay to determine the order of PCE, the number of sample points as well as the method for coefficients calculation. The PCE is then applied to RTHS with actuator delay. The mean, variance and Sobol indices are compared and discussed to evaluate the effects of actuator delay on uncertainty quantification for RTHS. Results show that the mean and the variance of the maximum displacement increase linearly and exponentially with respect to actuator delay, respectively. Sensitivity analysis through Sobol indices also indicates the influence of the single random variable decreases while the coupling effect increases with the increase of actuator delay.
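A minimal non-intrusive PCE in one random variable shows the mechanics: project the response onto probabilists' Hermite polynomials and read the mean and variance off the coefficients. The scalar model Y = exp(σX) below is a stand-in with a known answer, not the RTHS response:

```python
import math
import numpy as np

def pce_mean_var(f, order=8, n_quad=40):
    """Non-intrusive polynomial chaos for Y = f(X), X ~ N(0,1): project f onto
    probabilists' Hermite polynomials He_k via Gauss-Hermite quadrature, then
    read the mean and variance off the coefficients."""
    # Gauss-Hermite nodes/weights for weight exp(-t^2); rescale to N(0,1).
    t, w = np.polynomial.hermite.hermgauss(n_quad)
    x = math.sqrt(2.0) * t
    w = w / math.sqrt(math.pi)
    coeffs = []
    for k in range(order + 1):
        He_k = np.polynomial.hermite_e.hermeval(x, [0] * k + [1])
        coeffs.append(np.sum(w * f(x) * He_k) / math.factorial(k))
    mean = coeffs[0]
    var = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
    return mean, var

sigma = 0.5
mean, var = pce_mean_var(lambda x: np.exp(sigma * x))
# Exact: mean = exp(sigma^2/2), var = exp(sigma^2) * (exp(sigma^2) - 1)
print(round(float(mean), 4), round(float(var), 4))
```

The Sobol indices used in the paper follow the same pattern in several variables: each index is a ratio of sums of squared coefficients over subsets of the multi-index set to the total variance.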
Design approaches to experimental mediation
Pirlott, Angela G.; MacKinnon, David P.
2016-01-01
Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259
Design approaches to experimental mediation.
Pirlott, Angela G; MacKinnon, David P
2016-09-01
Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., "measurement-of-mediation" designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable.
Rainfall continuous time stochastic simulation for a wet climate in the Cantabric Coast
NASA Astrophysics Data System (ADS)
Rebole, Juan P.; Lopez, Jose J.; Garcia-Guzman, Adela
2010-05-01
Rain is the result of a series of complex atmospheric processes influenced by numerous factors. This complexity makes its simulation from a physical basis practically unfeasible, advising the use of stochastic schemes. These schemes, which are based on observed characteristics (Todorovic and Woolhiser, 1975), allow the introduction of alternating renewal processes that account for the occurrence of rainfall over different time lapses (Markov chains are a particular case, in which the lapses can be described by exponential distributions). Thus, a sequential rainfall process can be defined as a temporal series in which rainfall events (periods in which rainfall is recorded) alternate with non-rain events (periods in which no rainfall is recorded). The variables of a temporal rain sequence (duration of the rainfall event, duration of the non-rainfall event, average rain intensity in the rain event, and the temporal distribution of the amount of rain within the rain event) were characterized in a wet climate such as that of the coastal area of Guipúzcoa. The study was performed on two series recorded at the meteorological stations of Igueldo-San Sebastián and Fuenterrabia/Airport (data every ten minutes and their hourly aggregation). As a result of this work, the variables were satisfactorily fitted by the following distribution functions: the duration of the rain event by an exponential function; the duration of the dry event by a truncated mixed exponential distribution; the average intensity by a Weibull distribution; and the distribution of the fallen rain by a Beta distribution. The characterization was made for an hourly aggregation of the recorded ten-minute interval. The parameters of the fitted functions were better estimated by the maximum likelihood method than by the method of moments.
The parameters obtained from the characterization were used to develop a stochastic rainfall simulation model by means of a three-state Markov chain (Hutchinson, 1990), implemented on an hourly basis by García-Guzmán (1993) and Castro et al. (1997, 2005). Simulation results were valid in the hourly case for all four described variables, with a slightly better response in Fuenterrabia than in Igueldo. The Fuenterrabia data series is shorter and contains longer sequences without missing data; Igueldo shows a higher number of missing-data events, although their mean duration is longer in Fuenterrabia.
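The alternating renewal structure described above (wet and dry spells drawn from fitted distributions) can be sketched in a few lines. This is a hedged illustration, not the authors' model: it uses plain exponential distributions for both spell types and for intensity, whereas the study fitted a truncated mixed exponential to dry spells and a Weibull to intensity, and all parameter values here are hypothetical.

```python
import random

def simulate_rainfall(n_events, mean_wet=3.0, mean_dry=20.0,
                      mean_intensity=1.5, seed=42):
    """Simulate an alternating renewal rainfall sequence (hours).

    Wet and dry spell durations are drawn from exponential
    distributions; the study fitted a truncated mixed exponential
    for dry spells and a Weibull for intensity, so plain
    exponentials here only keep the sketch minimal.
    """
    rng = random.Random(seed)
    t = 0.0
    events = []
    for _ in range(n_events):
        dry = rng.expovariate(1.0 / mean_dry)              # dry spell length
        wet = rng.expovariate(1.0 / mean_wet)              # wet spell length
        intensity = rng.expovariate(1.0 / mean_intensity)  # mean mm/h
        t += dry
        events.append((t, wet, intensity * wet))  # (start, duration, depth)
        t += wet
    return events

events = simulate_rainfall(1000)
mean_wet = sum(e[1] for e in events) / len(events)
```

Swapping in the fitted distributions and hourly aggregation would turn this skeleton into a usable occurrence model.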
Zillmer, Rüdiger; Brunel, Nicolas; Hansel, David
2009-03-01
We present results of an extensive numerical study of the dynamics of networks of integrate-and-fire neurons connected randomly through inhibitory interactions. We first consider delayed interactions with infinitely fast rise and decay. Depending on the parameters, the network displays transients which are short or exponentially long in the network size. At the end of these transients, the dynamics settle on a periodic attractor. If the number of connections per neuron is large (approximately 1000), this attractor is a cluster state with a short period. In contrast, if the number of connections per neuron is small (approximately 100), the attractor has complex dynamics and a very long period. During the long transients the neurons fire in a highly irregular manner. These transients can be viewed as quasistationary states in which, depending on the coupling strength, the pattern of activity is asynchronous or displays population oscillations. In the first case, the average firing rates and the variability of the single-neuron activity are well described by a mean-field theory valid in the thermodynamic limit. Bifurcations of the long transient dynamics from asynchronous to synchronous activity are also well predicted by this theory. The transient dynamics display features reminiscent of stable chaos. In particular, despite being linearly stable, the trajectories of the transient dynamics are destabilized by finite perturbations as small as O(1/N). We further show that stable chaos is also observed for postsynaptic currents with finite decay time. However, we report that in this type of network chaotic dynamics characterized by positive Lyapunov exponents can also be observed. We show, in fact, that chaos occurs when the decay time of the synaptic currents is long compared to the synaptic delay, provided that the network is sufficiently large.
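A minimal sketch of the kind of network studied here: inhibitory integrate-and-fire neurons with random connectivity and delayed pulse coupling. All parameters are illustrative and far from the paper's regimes (which used roughly 100-1000 connections per neuron); the sketch only shows the simulation mechanics (Euler integration, a delay buffer, inhibitory kicks), not the paper's analysis.

```python
import random
from collections import deque

def simulate_lif_network(n=20, k=5, steps=2000, dt=0.1,
                         tau=20.0, i_ext=1.2, v_th=1.0, v_reset=0.0,
                         g_inh=0.05, delay_steps=10, seed=1):
    """Network of inhibitory integrate-and-fire neurons with delayed
    pulse coupling: dV/dt = (-V + i_ext)/tau, minus a kick g_inh for
    each spike arriving after delay_steps. Returns total spike count."""
    rng = random.Random(seed)
    # Each neuron projects to k randomly chosen other neurons.
    targets = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    v = [rng.uniform(0.0, v_th) for _ in range(n)]
    buffer = deque([[] for _ in range(delay_steps)])  # spikes in transit
    spike_count = 0
    for _ in range(steps):
        arriving = buffer.popleft()
        fired = []
        for i in range(n):
            v[i] += dt * (-v[i] + i_ext) / tau  # Euler step, leaky drive
            if v[i] >= v_th:
                v[i] = v_reset
                fired.append(i)
                spike_count += 1
        for i in arriving:          # deliver delayed inhibitory pulses
            for j in targets[i]:
                v[j] -= g_inh
        buffer.append(fired)
    return spike_count

spikes = simulate_lif_network()
```

With supra-threshold drive (i_ext above v_th) every neuron keeps firing despite inhibition; the transient and attractor phenomena in the abstract emerge only at much larger sizes and carefully chosen couplings.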
Porous media flux sensitivity to pore-scale geostatistics: A bottom-up approach
NASA Astrophysics Data System (ADS)
Di Palma, P. R.; Guyennon, N.; Heße, F.; Romano, E.
2017-04-01
Macroscopic properties of flow through porous media can be directly computed by solving the Navier-Stokes equations at the scales related to the actual flow processes, while considering the porous structures in an explicit way. The aim of this paper is to investigate the effects of the pore-scale spatial distribution on seepage velocity through numerical simulations of 3D fluid flow performed by the lattice Boltzmann method. To this end, we generate multiple random Gaussian fields whose spatial correlation follows an assigned semi-variogram function. The exponential and Gaussian semi-variograms are chosen as extreme cases of correlation at short distances, and the statistical properties of the resulting porous media (indicator field) are described using the Matérn covariance model, with characteristic lengths of spatial autocorrelation (pore size) varying from 2% to 13% of the linear domain. To consider the sensitivity of the modeling results to the geostatistical representativeness of the domain as well as to the adopted resolution, porous media have been generated repetitively with re-initialized random seeds and three different resolutions have been tested for each resulting realization. The main difference among results is observed between the two adopted semi-variograms, indicating that the roughness (short-distance autocorrelation) is the property mainly affecting the flux. However, computed seepage velocities additionally show a wide variability (about three orders of magnitude) for each semi-variogram model in relation to the assigned correlation length, corresponding to pore sizes. Spatial resolution affects the results more for short correlation lengths (i.e., small pore sizes), resulting in an increasing underestimation of the seepage velocity with decreasing correlation length. On the other hand, results show an increasing uncertainty as the correlation length approaches the domain size.
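The contrast between the two semi-variogram models, linear versus quadratic growth near the origin, is what controls the roughness the text identifies as the main influence on flux. A short check of that short-distance behavior (unit sill and correlation length, purely illustrative):

```python
import math

def gamma_exp(h, sill=1.0, a=1.0):
    """Exponential semivariogram: grows ~h near h = 0 (rough fields)."""
    return sill * (1.0 - math.exp(-h / a))

def gamma_gauss(h, sill=1.0, a=1.0):
    """Gaussian semivariogram: grows ~h^2 near h = 0 (smooth fields)."""
    return sill * (1.0 - math.exp(-(h / a) ** 2))

h = 0.01  # a short lag relative to the correlation length a = 1
ratio = gamma_exp(h) / gamma_gauss(h)  # ~1/h: exponential is far rougher
```

At lag h = 0.01 the exponential variogram is roughly two orders of magnitude above the Gaussian one, which is why exponential-correlated media present much rougher pore walls at a given correlation length.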
NASA Astrophysics Data System (ADS)
Rodriguez, Nicolas B.; McGuire, Kevin J.; Klaus, Julian
2017-04-01
Transit time distributions, residence time distributions and StorAge Selection functions are fundamental integrated descriptors of water storage, mixing, and release in catchments. In this contribution, we determined these time-variant functions in four neighboring forested catchments in the H.J. Andrews Experimental Forest, Oregon, USA by employing a two-year time series of 18O in precipitation and discharge. Previous studies in these catchments assumed stationary, exponentially distributed transit times, and complete mixing/random sampling to explore the influence of various catchment properties on the mean transit time. Here we relaxed such assumptions to relate transit time dynamics and the variability of StorAge Selection functions to catchment characteristics, catchment storage, and meteorological forcing seasonality. Conceptual models of the catchments, consisting of two reservoirs combined in series-parallel, were calibrated to discharge and stable isotope tracer data. We assumed randomly sampled/fully mixed conditions for each reservoir, which resulted in an incompletely mixed system overall. Based on the results we solved the Master Equation, which describes the dynamics of water ages in storage and in catchment outflows. Consistent across all catchments, we found that transit times were generally shorter during wet periods, indicating the contribution of shallow storage (soil, saprolite) to discharge. During extended dry periods, transit times increased significantly, indicating the contribution of deeper storage (bedrock) to discharge. Our work indicated that the strong seasonality of precipitation impacted transit times by leading to a dynamic selection of stored water ages, whereas catchment size was not a control on transit times. In general this work showed the usefulness of time-variant transit times with conceptual models and confirmed the existence of the catchment age-mixing behaviors emerging from other similar studies.
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi
2016-06-01
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e., surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
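The Sobol' indices that PDD evaluates analytically can also be estimated by plain Monte Carlo, which is the expensive baseline that surrogate methods like sparse PDD aim to avoid. A minimal pick-freeze estimator for first-order indices, run on a toy additive model whose exact indices are 0.2 and 0.8 (the model and sample sizes are illustrative, not from the paper):

```python
import random

def sobol_first_order(model, n_vars, n_samples=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for a model with independent Uniform(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_vars)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_vars)] for _ in range(n_samples)]
    yA = [model(x) for x in A]
    mean = sum(yA) / n_samples
    var = sum((y - mean) ** 2 for y in yA) / n_samples
    indices = []
    for i in range(n_vars):
        # Hybrid point: column i frozen from A, the rest resampled from B.
        cov = 0.0
        for a, b, y in zip(A, B, yA):
            x = list(b)
            x[i] = a[i]
            cov += y * (model(x) - mean)
        indices.append(cov / n_samples / var)
    return indices

# Additive toy model Y = X1 + 2*X2: exact S1 = 0.2, S2 = 0.8.
s = sobol_first_order(lambda x: x[0] + 2.0 * x[1], 2)
```

Each index costs a full extra batch of model evaluations, which is precisely the expense the adaptive sparse PDD surrogate replaces with an analytic read-off from the expansion coefficients.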
Lau, Billy T; Ji, Hanlee P
2017-09-21
RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes, commonly in the form of random nucleotides, were recently introduced to improve gene expression measures by detecting amplification duplicates, but they are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification, especially at low-input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed that random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We described the first study to use transposable molecular barcodes and their use for studying random-mer molecular barcode errors. The extensive errors found in random-mer molecular barcodes may warrant the use of error-correcting barcodes for transcriptome analysis as input amounts decrease.
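A minimal sketch of the error-correcting idea: accept an observed barcode only when exactly one whitelist entry lies within the correction radius. This illustrates generic Hamming-distance correction, not the specific transposable barcode design used in the study; the whitelist below is hypothetical.

```python
def hamming(a, b):
    """Number of mismatched positions between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def correct_barcode(read, whitelist, max_dist=1):
    """Assign an observed barcode to a whitelist entry if exactly one
    entry lies within max_dist mismatches; otherwise discard (None)."""
    hits = [bc for bc in whitelist if hamming(read, bc) <= max_dist]
    return hits[0] if len(hits) == 1 else None

whitelist = ["AACCGGTT", "TTGGCCAA", "ACGTACGT"]
fixed = correct_barcode("AACCGGTA", whitelist)    # one sequencing error
rejected = correct_barcode("AACCGGAA", whitelist) # beyond the radius
```

A designed code with minimum pairwise distance at least 2*max_dist + 1 guarantees unambiguous correction, which pure random-mers cannot.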
Compliance-Effect Correlation Bias in Instrumental Variables Estimators
ERIC Educational Resources Information Center
Reardon, Sean F.
2010-01-01
Instrumental variable estimators hold the promise of enabling researchers to estimate the effects of educational treatments that are not (or cannot be) randomly assigned but that may be affected by randomly assigned interventions. Examples of the use of instrumental variables in such cases are increasingly common in educational and social science…
Use and interpretation of logistic regression in habitat-selection studies
Keating, Kim A.; Cherry, Steve
2004-01-01
Logistic regression is an important tool for wildlife habitat-selection studies, but the method frequently has been misapplied due to an inadequate understanding of the logistic model, its interpretation, and the influence of sampling design. To promote better use of this method, we review its application and interpretation under 3 sampling designs: random, case-control, and use-availability. Logistic regression is appropriate for habitat use-nonuse studies employing random sampling and can be used to directly model the conditional probability of use in such cases. Logistic regression also is appropriate for studies employing case-control sampling designs, but careful attention is required to interpret results correctly. Unless bias can be estimated or probability of use is small for all habitats, results of case-control studies should be interpreted as odds ratios, rather than probability of use or relative probability of use. When data are gathered under a use-availability design, logistic regression can be used to estimate approximate odds ratios if probability of use is small, at least on average. More generally, however, logistic regression is inappropriate for modeling habitat selection in use-availability studies. In particular, using logistic regression to fit the exponential model of Manly et al. (2002:100) does not guarantee maximum-likelihood estimates, valid probabilities, or valid likelihoods. We show that the resource selection function (RSF) commonly used for the exponential model is proportional to a logistic discriminant function. Thus, it may be used to rank habitats with respect to probability of use and to identify important habitat characteristics or their surrogates, but it is not guaranteed to be proportional to probability of use. Other problems associated with the exponential model also are discussed. 
We describe an alternative model based on Lancaster and Imbens (1996) that offers a method for estimating conditional probability of use in use-availability studies. Although promising, this model fails to converge to a unique solution in some important situations. Further work is needed to obtain a robust method that is broadly applicable to use-availability studies.
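The case-control caveat above (interpret results as odds ratios unless probability of use is small) rests on the fact that the odds ratio approximates the relative probability of use only when use is rare. A two-line numerical check with illustrative probabilities:

```python
def odds_ratio(p1, p0):
    """Odds ratio between two probabilities of use."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Relative probability of use is 2.0 in both cases below, but the
# odds ratio approximates it only when use is rare:
rare = odds_ratio(0.02, 0.01)    # close to 2.0
common = odds_ratio(0.60, 0.30)  # 3.5, far from 2.0
```

When habitat use is common, reporting the odds ratio as if it were a relative probability of use overstates selection, which is the misinterpretation the review warns against.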
NASA Astrophysics Data System (ADS)
Malpathak, Shreyas; Ma, Xinyou; Hase, William L.
2018-04-01
In a previous UB3LYP/6-31G* direct dynamics simulation, non-Rice-Ramsperger-Kassel-Marcus (RRKM) unimolecular dynamics was found for vibrationally excited 1,2-dioxetane (DO) [R. Sun et al., J. Chem. Phys. 137, 044305 (2012)]. In the work reported here, these dynamics are studied in more detail using the same direct dynamics method. Vibrational modes of DO were divided into 4 groups, based on their characteristic motions, and each group was excited with the same energy. To compare with the dynamics of these groups, an additional group of trajectories comprising a microcanonical ensemble was also simulated. The results of these simulations are consistent with the previous study. The dissociation probabilities, N(t)/N(0), for these excitation groups were all different. Groups A, B, and C, without initial excitation in the O-O stretch reaction coordinate, had a time lag t0 of 0.25-1.0 ps before the first dissociation occurred. Somewhat surprisingly, the C-H stretch Group A and out-of-plane motion Group C excitations had exponential dissociation probabilities after t0, with a rate constant ˜2 times smaller than the anharmonic RRKM value. Groups B and D, with excitation of the H-C-H bend and wag, and ring bend and stretch modes, respectively, had bi-exponential dissociation probabilities. For Group D, with excitation localized in the reaction coordinate, the initial rate constant is ˜7 times larger than the anharmonic RRKM value, substantial apparent non-RRKM dynamics. N(t)/N(0) for the random excitation trajectories was non-exponential, indicating intrinsic non-RRKM dynamics. For the trajectory integration time of 13.5 ps, 9% of these trajectories did not dissociate, in comparison to the RRKM prediction of 0.3%. Classical power spectra for these trajectories indicate they have regular intramolecular dynamics. The N(t)/N(0) for the excitation groups are well described by a two-state coupled phase space model.
From the intercept of N(t)/N(0) with random excitation, the anharmonic correction to the RRKM rate constant is approximately a factor of 1.5.
Existence and energy decay of a nonuniform Timoshenko system with second sound
NASA Astrophysics Data System (ADS)
Hamadouche, Taklit; Messaoudi, Salim A.
2018-02-01
In this paper, we consider a linear thermoelastic Timoshenko system with variable physical parameters, where the heat conduction is given by Cattaneo's law and the coupling is via the displacement equation. We discuss the well-posedness and the regularity of the solution using semigroup theory. Moreover, we establish an exponential decay result provided that the stability function satisfies χ_r(x) = 0. Otherwise, we show that the solution decays polynomially.
Time-Frequency Signal Representations Using Interpolations in Joint-Variable Domains
2016-06-14
NASA Technical Reports Server (NTRS)
Rivera, J. M.; Simpson, R. W.
1980-01-01
The aerial relay system network design problem is discussed. A generalized branch-and-bound based algorithm is developed which can consider a variety of optimization criteria, such as minimum passenger travel time and minimum liner and feeder operating costs. The algorithm, although efficient, is practical only for small networks, since its computation time grows exponentially with the number of variables.
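Branch and bound is exact but exponential in the worst case, which is the limitation the abstract notes. A standard illustration on the 0/1 knapsack problem (not the aerial relay network formulation itself), pruning with a fractional-relaxation upper bound:

```python
def branch_and_bound_knapsack(values, weights, capacity):
    """0/1 knapsack by depth-first branch and bound. Worst-case time is
    exponential in the number of variables, which is why such exact
    methods stay practical only for small instances."""
    n = len(values)
    # Explore items in decreasing value density for tighter bounds.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(k, value, room):
        # Fractional relaxation of the remaining items (upper bound).
        for i in order[k:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    def dfs(k, value, room):
        nonlocal best
        if value > best:
            best = value
        if k == n or bound(k, value, room) <= best:
            return  # prune: this subtree cannot beat the incumbent
        i = order[k]
        if weights[i] <= room:
            dfs(k + 1, value + values[i], room - weights[i])  # take item i
        dfs(k + 1, value, room)                               # skip item i

    dfs(0, 0, capacity)
    return best

best = branch_and_bound_knapsack([60, 100, 120], [10, 20, 30], 50)
```

Good bounds prune most of the 2^n tree on typical instances, but nothing prevents near-complete enumeration on adversarial ones.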
Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls
NASA Astrophysics Data System (ADS)
Guha Ray, A.; Baidya, D. K.
2012-09-01
Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall highlights the fact that high sensitivity of a particular variable for a particular failure mode does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (R_f) for each random variable based on the combined effects of the failure probability (P_f) of each failure mode of a gravity retaining wall and the sensitivity of each random variable for these failure modes. P_f is calculated by Monte Carlo simulation, and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that R_f for the friction angle of the backfill soil (φ_1) increases and that for the cohesion of the foundation soil (c_2) decreases with an increase in the variation of φ_1, while R_f for the unit weights (γ_1 and γ_2) of both soils and for the friction angle of the foundation soil (φ_2) remains almost constant under variation of the soil properties. The results compare well with some existing deterministic and probabilistic methods, and the approach is found to be cost-effective. If the variation of φ_1 remains within 5%, a significant reduction in cross-sectional area can be achieved; if the variation exceeds 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
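The Monte Carlo estimate of P_f can be sketched for a generic limit state g = R - S with normal resistance and load; the distributions and parameter values below are purely illustrative, not the wall parameters analyzed in the paper.

```python
import random

def failure_probability(n_trials=200000, seed=7):
    """Monte Carlo estimate of P_f for the toy limit state g = R - S,
    with resistance R ~ N(10, 1.5) and load effect S ~ N(6, 1.0)
    (illustrative values only). Failure occurs when g <= 0."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        r = rng.gauss(10.0, 1.5)
        s = rng.gauss(6.0, 1.0)
        if r - s <= 0.0:
            failures += 1
    return failures / n_trials

pf = failure_probability()
```

For this linear limit state, g is normal with mean 4 and standard deviation sqrt(1.5^2 + 1.0^2) ≈ 1.80, so the exact P_f ≈ Φ(-2.22) ≈ 0.013; the simulation should land close to that. Repeating the run while perturbing one variable's variance at a time is the simplest way to see the sensitivity effects the paper formalizes with F-tests.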
Human infrastructure and invasive plant occurrence across rangelands of southwestern Wyoming, U.S.A.
Manier, Daniel J.; Aldridge, Cameron L.; O'Donnell, Michael S.; Schell, Spencer
2014-01-01
Although human influence across rural landscapes is often discussed, interactions between native natural systems and human activities are challenging to measure explicitly. We assessed the distribution of introduced, invasive species as related to anthropogenic infrastructure and environmental conditions across southwestern Wyoming to discern direct correlations as well as covariate influences between land use, land cover, and abundance of invasive plants, and to assess the supposition that these features affect surrounding rangeland conditions. Our sample units were 1 000 m long and extended outward from target features, which included roads, oil and gas well pads, pipelines, power lines, and featureless background sites. Sample sites were distributed across the region using a stratified, random design with a frame that represented features and land-use intensity. In addition to land-use gradients, we captured a representative, but limited, range of variability in climate, soils, geology, topography, and dominant vegetation. Several of these variables proved significant, in conjunction with distance from anthropogenic features, in regression models of invasive plant abundance. We used general linear models to demonstrate and compare associations between invasive plant frequency and Euclidean distance from features, natural-logarithm-transformed distances (log-linear), and environmental variables presented as potential covariates. We expected a steep curvilinear (logarithmic or exponential) decline toward an asymptote, with high abundance near features and a rapid decrease beyond approximately 50-100 m. Some of the associations we document exhibit this pattern, but we also found some invasive plant distributions that extended beyond our expectations, suggesting a broader distribution than anticipated.
Our results provide details that can inform local efforts for management and control of invasive species, and they provide evidence of the different associations between natural patterns and human land use exhibited by nonnative species in this rural setting, such as the indirect effects of humans beyond impact areas.
Biased phylodynamic inferences from analysing clusters of viral sequences
Xiang, Fei; Frost, Simon D. W.
2017-01-01
Abstract Phylogenetic methods are being increasingly used to help understand the transmission dynamics of measurably evolving viruses, including HIV. Clusters of highly similar sequences are often observed, which appear to follow a 'power law' behaviour, with a small number of very large clusters. These clusters may help to identify subpopulations in an epidemic, and inform where intervention strategies should be implemented. However, clustering of samples does not necessarily imply the presence of a subpopulation with high transmission rates, as groups of closely related viruses can also occur due to non-epidemiological effects such as over-sampling. It is important to ensure that observed phylogenetic clustering reflects true heterogeneity in the transmitting population, and is not being driven by non-epidemiological effects. We quantify the effect of using a falsely identified 'transmission cluster' of sequences to estimate phylodynamic parameters, including the effective population size and exponential growth rate, under several demographic scenarios. Our simulation studies show that taking the maximum-size cluster to re-estimate parameters from trees simulated under a randomly mixing, constant population size coalescent process systematically underestimates the overall effective population size. In addition, the transmission cluster wrongly resembles an exponential or logistic growth model 99% of the time. We also illustrate the consequences of false clusters in exponentially growing coalescent and birth-death trees, where again, the growth rate is skewed upwards. This has clear implications for identifying clusters in large viral databases, where a false cluster could result in wasted intervention resources. PMID:28852573
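The null model in the simulation study, a randomly mixing constant-size coalescent, produces inter-coalescent waiting times that are exponential with rate k(k-1)/(4Ne) while k lineages remain. A sketch of that process, together with an unbiased Ne estimate from full trees (the bias the paper reports arises when only the largest cluster is retained, which this sketch does not do):

```python
import random

def coalescent_tmrca(n, ne, rng):
    """Time to the most recent common ancestor of an n-sample under the
    standard constant-size coalescent (time in generations, diploid
    effective size ne). While k lineages remain, the waiting time to
    the next coalescence is Exponential with rate k(k-1)/(4*ne)."""
    t, k = 0.0, n
    while k > 1:
        rate = k * (k - 1) / (4.0 * ne)
        t += rng.expovariate(rate)
        k -= 1
    return t

rng = random.Random(3)
n, ne, reps = 10, 1000.0, 4000
mean_tmrca = sum(coalescent_tmrca(n, ne, rng) for _ in range(reps)) / reps
# E[TMRCA] = 4*Ne*(1 - 1/n), so invert the expectation to estimate Ne:
ne_hat = mean_tmrca / (4.0 * (1.0 - 1.0 / n))
```

On complete trees this estimator recovers Ne; conditioning on the tightest cluster of tips shortens the sampled coalescent times and drags the estimate downward, which is the systematic underestimation the abstract describes.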
Anderson localization for radial tree-like random quantum graphs
NASA Astrophysics Data System (ADS)
Hislop, Peter D.; Post, Olaf
We prove that certain random models associated with radial, tree-like, rooted quantum graphs exhibit Anderson localization at all energies. The two main examples are the random length model (RLM) and the random Kirchhoff model (RKM). In the RLM, the lengths of each generation of edges form a family of independent, identically distributed random variables (iid). For the RKM, the iid random variables are associated with each generation of vertices and moderate the current flow through the vertex. We consider extensions to various families of decorated graphs and prove stability of localization with respect to decoration. In particular, we prove Anderson localization for the random necklace model.
Hidden Statistics Approach to Quantum Simulations
NASA Technical Reports Server (NTRS)
Zak, Michail
2010-01-01
Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations with this data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large massifs of highly correlated data. Unfortunately, these advantages of quantum mechanics came with a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of quantum potential (that has been overlooked in previous treatments of the Madelung equation).
The role of the transitional potential is to provide a jump from a deterministic state to a random state with prescribed probability density. This jump is triggered by blowup instability due to violation of Lipschitz condition generated by the quantum potential. As a result, the dynamics attains quantum properties on a classical scale. The model can be implemented physically as an analog VLSI-based (very-large-scale integration-based) computer, or numerically on a digital computer. This work opens a way of developing fundamentally new algorithms for quantum simulations of exponentially complex problems that expand NASA capabilities in conducting space activities. It has been illustrated that the complexity of simulations of particle interaction can be reduced from an exponential one to a polynomial one.
Toledo, Eran; Collins, Keith A; Williams, Ursula; Lammertin, Georgeanne; Bolotin, Gil; Raman, Jai; Lang, Roberto M; Mor-Avi, Victor
2005-12-01
Echocardiographic quantification of myocardial perfusion is based on analysis of contrast replenishment after destructive high-energy ultrasound impulses (flash-echo). This technique is limited by nonuniform microbubble destruction and the dependency on exponential fitting of a small number of noisy time points. We hypothesized that brief interruptions of contrast infusion (ICI) would result in uniform contrast clearance followed by slow replenishment and, thus, would allow analysis from multiple data points without exponential fitting. Electrocardiographic-triggered images were acquired in 14 isolated rabbit hearts (Langendorff) at 3 levels of coronary flow (baseline, 50%, and 15%) during contrast infusion (Definity) with flash-echo and with a 20-second infusion interruption. Myocardial videointensity was measured over time from flash-echo sequences, from which the characteristic constant β was calculated using an exponential fit. Peak contrast inflow rate was calculated from ICI data using analysis of local time derivatives. Computer simulations were used to investigate the effects of noise on the accuracy of peak contrast inflow rate and β calculations. ICI resulted in uniform contrast clearance and baseline replenishment times of 15 to 25 cardiac cycles. Calculated peak contrast inflow rate followed the changes in coronary flow in all hearts at both levels of reduced flow (P < .05) and had a low intermeasurement variability of 7 ± 6%. With flash-echo, contrast clearance was less uniform and baseline replenishment times were only 4 to 6 cardiac cycles. β decreased significantly only at 15% flow, and had an intermeasurement variability of 42 ± 33%. Computer simulations showed that measurement errors in both perfusion indices increased with noise, but β had larger errors at higher rates of contrast inflow.
ICI provides the basis for accurate and reproducible quantification of myocardial perfusion using fast and robust numeric analysis, and may constitute an alternative to the currently used techniques.
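For comparison with the flash-echo branch, a minimal version of the exponential fit used to extract β from a replenishment curve y(t) = A(1 - e^(-βt)), here by log-linear regression on noiseless synthetic data. All values are synthetic; real flash-echo data are noisy and few-point, which is exactly the fragility the study targets.

```python
import math

def fit_beta(times, signal, plateau):
    """Estimate the replenishment rate constant beta from
    y(t) = A*(1 - exp(-beta*t)) by linear regression of
    log(A - y) on t; the slope equals -beta."""
    ys = [math.log(plateau - y) for y in signal]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -slope

# Synthetic noiseless replenishment curve with known beta = 0.8:
A, beta = 100.0, 0.8
t = [0.5 * i for i in range(1, 11)]
y = [A * (1.0 - math.exp(-beta * ti)) for ti in t]
beta_hat = fit_beta(t, y, A)
```

On clean data the fit recovers β exactly; adding even modest noise to the few late-time points, where A - y is small, makes the logarithm blow up, which illustrates why the β-based flash-echo index showed the large intermeasurement variability reported above.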
Roccato, Anna; Uyttendaele, Mieke; Membré, Jeanne-Marie
2017-06-01
In the framework of food safety, when mimicking the consumer phase, the storage time and temperature used are mainly considered as single-point estimates instead of probability distributions. This single-point approach does not take into account the variability within a population and could lead to an overestimation of the parameters. Therefore, the aim of this study was to analyse data on domestic refrigerator temperatures and storage times of chilled food in European countries in order to draw general rules which could be used either in shelf-life testing or risk assessment. In relation to domestic refrigerator temperatures, 15 studies provided pertinent data. Twelve studies presented normal distributions, according to the authors or from the data fitted into distributions. Analysis of the temperature distributions revealed that the countries separated into two groups: northern European countries and southern European countries. The overall variability of European domestic refrigerators is described by a normal distribution: N(7.0, 2.7) °C for the southern countries and N(6.1, 2.8) °C for the northern countries. Concerning storage times, seven papers were pertinent. Analysis indicated that storage was likely to end in the first days or weeks (depending on the product use-by date) after purchase. Data fitting showed the exponential distribution was the most appropriate distribution to describe the time that food spent at the consumer's home. The storage time was described by an exponential distribution with mean corresponding to the use-by date period divided by 4. In conclusion, knowing that collecting data is time- and money-consuming, in the absence of data, and at least for the European market and for refrigerated products, building a domestic refrigerator temperature distribution using a Normal law and a time-to-consumption distribution using an Exponential law would be appropriate. Copyright © 2017 Elsevier Ltd. All rights reserved.
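The closing recommendation, Normal temperatures and Exponential times-to-consumption with mean equal to a quarter of the use-by period, translates directly into a sampler for the consumer phase of a risk assessment. A sketch using the distributions reported above; the `use_by_days` value is hypothetical.

```python
import random

def sample_consumer_storage(n, region="south", use_by_days=10, seed=5):
    """Draw (temperature in °C, storage time in days) pairs for the
    consumer phase: temperature ~ Normal (N(7.0, 2.7) for southern
    Europe, N(6.1, 2.8) for northern Europe) and time-to-consumption
    ~ Exponential with mean = use-by period / 4, as the study suggests.
    The use_by_days value here is an illustrative assumption."""
    mu, sd = (7.0, 2.7) if region == "south" else (6.1, 2.8)
    rng = random.Random(seed)
    mean_time = use_by_days / 4.0
    return [(rng.gauss(mu, sd), rng.expovariate(1.0 / mean_time))
            for _ in range(n)]

pairs = sample_consumer_storage(10000)
mean_temp = sum(t for t, _ in pairs) / len(pairs)
mean_days = sum(d for _, d in pairs) / len(pairs)
```

Feeding such sampled pairs into a growth model, instead of a single worst-case time and temperature, propagates consumer variability through the shelf-life or risk calculation.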
Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas
2017-04-15
The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters
Landowne, David; Yuan, Bin; Magleby, Karl L.
2013-01-01
Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
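The strategy above can be sketched in a simplified form: start with many exponentials at fixed, log-spaced time constants, maximize the likelihood over their areas only (here via EM updates for a mixture of exponentials), then drop negligible components. The component count, iteration count, and pruning threshold below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_exponential_sum(dwell_times, n_init=20, n_iter=300, area_floor=1e-3):
    """Simplified sketch: many exponentials at fixed, logarithmically
    spaced time constants (so none are missed), likelihood maximized over
    the areas only via EM, then components with negligible area dropped."""
    t = np.asarray(dwell_times, dtype=float)
    taus = np.geomspace(t.min(), t.max(), n_init)
    rates = 1.0 / taus
    areas = np.full(n_init, 1.0 / n_init)
    for _ in range(n_iter):
        # E-step: responsibility of each fixed-tau component for each dwell
        resp = areas * rates * np.exp(-np.outer(t, rates))
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate the areas; time constants stay on their grid
        areas = resp.mean(axis=0)
    keep = areas > area_floor
    return taus[keep], areas[keep] / areas[keep].sum()

# Two-component test data: time constants of 1 ms and 50 ms
rng = np.random.default_rng(0)
data = np.concatenate([rng.exponential(1.0, 3000), rng.exponential(50.0, 3000)])
taus, areas = fit_exponential_sum(data)
```

A full implementation would also merge closely spaced surviving components and refit after each pruning step, as described above.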
NASA Astrophysics Data System (ADS)
Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele
2016-07-01
This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and every level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law m). We introduce a tool, the cut-off process, by means of which one finds that, after renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such a limit law is stationary, in the sense that gluing together a certain number of copies of the random limit space, according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law m. In other words, the stationary random metric is the solution of a distributional equation. When the measure m has continuous positive density on R+, the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing a multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of the high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. The numerical simulation reveals the usefulness of the dimension-reduction representation methods.
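The classical spectral representation that both schemes build on can be sketched for a single-variate process. The PSD, frequency cutoff, and component count below are illustrative assumptions, and the full set of independent random phases is kept rather than the paper's reduced set of elementary variables:

```python
import numpy as np

def spectral_representation_sample(S, omega_max, n_freq, t, rng):
    """Classical single-variate spectral representation:
    X(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + phi_k), with independent
    uniform random phases phi_k. The dimension-reduction schemes above
    replace these many independent phases by functions of a few elementary
    random variables; this sketch keeps the full set."""
    dw = omega_max / n_freq
    w = (np.arange(n_freq) + 0.5) * dw              # midpoint frequency grid
    phi = rng.uniform(0.0, 2.0 * np.pi, n_freq)     # independent random phases
    amp = np.sqrt(2.0 * S(w) * dw)
    return (amp[:, None] * np.cos(np.outer(w, t) + phi[:, None])).sum(axis=0)

rng = np.random.default_rng(5)
S = lambda w: 1.0 / (np.pi * (1.0 + w**2))          # illustrative one-sided PSD
t = np.linspace(0.0, 100.0, 2001)
x = spectral_representation_sample(S, omega_max=20.0, n_freq=512, t=t, rng=rng)
```

The FFT acceleration mentioned above replaces the explicit cosine sum with a fast transform evaluated on a uniform time grid.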
Comparing Exponential and Exponentiated Models of Drug Demand in Cocaine Users
Strickland, Justin C.; Lile, Joshua A.; Rush, Craig R.; Stoops, William W.
2016-01-01
Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model, but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use), whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values impacts demand parameters and their association with drug-use outcomes when using the exponential model, but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency, in addition to demonstrating construct validity and generalizability. PMID:27929347
Comparing exponential and exponentiated models of drug demand in cocaine users.
Strickland, Justin C; Lile, Joshua A; Rush, Craig R; Stoops, William W
2016-12-01
Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use) whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values affects demand parameters and their association with drug-use outcomes when using the exponential model but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency and demonstrating construct validity and generalizability. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
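Assuming the exponentiated demand equation in its commonly written form, Q = Q0 · 10^(k·(e^(−α·Q0·C) − 1)), a minimal fitting sketch shows why zero-consumption prices pose no problem: consumption itself, not its logarithm, is modeled. The data and parameter grid below are illustrative assumptions, not the study's data:

```python
import numpy as np

def exponentiated_demand(price, q0, alpha, k=2.0):
    """Exponentiated demand equation: Q = Q0 * 10**(k*(exp(-alpha*Q0*C) - 1)).
    Because consumption Q itself (not log Q) is modeled, prices with zero
    consumption enter the fit directly -- no replacement value is needed."""
    return q0 * 10.0 ** (k * (np.exp(-alpha * q0 * price) - 1.0))

# Hypothetical purchase-task data (illustrative, not from the study)
price = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
consumed = np.array([10.0, 9.2, 8.1, 6.0, 2.1, 0.4, 0.0])

# Minimal grid-search fit over (Q0, alpha); a real analysis would use a
# nonlinear least-squares routine instead.
q0_grid = np.linspace(5.0, 15.0, 101)
alpha_grid = np.geomspace(1e-3, 1.0, 201)
sse = [((exponentiated_demand(price, q0, a) - consumed) ** 2).sum()
       for q0 in q0_grid for a in alpha_grid]
best = int(np.argmin(sse))
q0_hat = q0_grid[best // len(alpha_grid)]
alpha_hat = alpha_grid[best % len(alpha_grid)]
```

Here Q0 (demand intensity) is the fitted consumption at zero price and alpha governs elasticity, the two derived parameters discussed above.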
Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros
ERIC Educational Resources Information Center
Bancroft, Stacie L.; Bourret, Jason C.
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time.…
Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment
NASA Astrophysics Data System (ADS)
Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit
2010-10-01
The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables in nature, and the vagueness of the fuzzy random variables in the objectives and constraints is then transformed into fuzzy variables which are similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.
Random Variables: Simulations and Surprising Connections.
ERIC Educational Resources Information Center
Quinn, Robert J.; Tomlinson, Stephen
1999-01-01
Features activities for advanced second-year algebra students in grades 11 and 12. Introduces three random variables and considers an empirical and theoretical probability for each. Uses coins, regular dice, decahedral dice, and calculators. (ASK)
Binomial leap methods for simulating stochastic chemical kinetics.
Tian, Tianhai; Burrage, Kevin
2004-12-01
This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsize is used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the numbers of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement on efficiency over existing approaches. (c) 2004 American Institute of Physics.
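For the simplest case of a single decay channel, the bounded-sampling idea can be sketched as follows; the rate, stepsize, and initial population are illustrative, and real implementations handle multiple channels and the stepsize conditions discussed above:

```python
import numpy as np

def binomial_leap_decay(n0, c, tau, t_end, rng):
    """Binomial tau-leap for the decay reaction A -> B with rate c per
    molecule. The firing count in each leap is Binomial(n, c*tau), so it
    can never exceed the molecules present -- unlike a Poisson count,
    which can drive the population negative at large stepsizes."""
    p = min(c * tau, 1.0)            # per-molecule firing probability per leap
    n, t, traj = n0, 0.0, [n0]
    while t < t_end:
        n -= rng.binomial(n, p)      # bounded reaction count, n stays >= 0
        t += tau
        traj.append(n)
    return np.array(traj)

rng = np.random.default_rng(1)
traj = binomial_leap_decay(10_000, c=0.1, tau=0.5, t_end=20.0, rng=rng)
```

The finite range of the binomial draw is exactly the probabilistic property the abstract highlights: the population remains non-negative for any stepsize.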
Do bioclimate variables improve performance of climate envelope models?
Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.
2012-01-01
Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.
Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.
2013-01-01
Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738
Quantum Machine Learning over Infinite Dimensions
Lau, Hoi-Kwan; Pooser, Raphael; Siopsis, George; ...
2017-02-21
Machine learning is a fascinating and exciting field within computer science. Recently, this excitement has been transferred to the quantum information realm. Currently, all proposals for the quantum version of machine learning utilize the finite-dimensional substrate of discrete variables. Here we generalize quantum machine learning to the more complex, but still remarkably practical, infinite-dimensional systems. We present the critical subroutines of quantum machine learning algorithms for an all-photonic continuous-variable quantum computer that achieve an exponential speedup compared to their equivalent classical counterparts. Finally, we also map out an experimental implementation which can be used as a blueprint for future photonic demonstrations.
NASA Astrophysics Data System (ADS)
Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie
2017-09-01
Automotive brake systems are always subjected to various types of uncertainties and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first one is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second one is the fuzzy random model, in which uncertain parameters are all treated as random variables while their distribution parameters are expressed as fuzzy numbers. Firstly, the fuzziness is discretized by using α-cut technique and the two uncertainty analysis models are simplified into random-interval models. Afterwards, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of system stability functions are calculated. And then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which the brake squeal instability can be evaluated. The proposed approach gives a general framework to deal with both types of random-fuzzy uncertainties that may exist in the brakes and its effectiveness is demonstrated by numerical examples. It will be a valuable supplement to the systematic study of brake squeal considering uncertainty.
Anosov C-systems and random number generators
NASA Astrophysics Data System (ADS)
Savvidy, G. K.
2016-08-01
We further develop our previous proposal to use hyperbolic Anosov C-systems to generate pseudorandom numbers and to use them for efficient Monte Carlo calculations in high energy particle physics. All trajectories of hyperbolic dynamical systems are exponentially unstable, and C-systems therefore have mixing of all orders, a countable Lebesgue spectrum, and a positive Kolmogorov entropy. These exceptional ergodic properties follow from the C-condition introduced by Anosov. This condition defines a rich class of dynamical systems forming an open set in the space of all dynamical systems. An important property of C-systems is that they have a countable set of everywhere dense periodic trajectories and their density increases exponentially with entropy. Of special interest are the C-systems defined on higher-dimensional tori. Such C-systems are excellent candidates for generating pseudorandom numbers that can be used in Monte Carlo calculations. An efficient algorithm was recently constructed that allows generating long C-system trajectories very rapidly. These trajectories have good statistical properties and can be used for calculations in quantum chromodynamics and in high energy particle physics.
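A toy illustration of the idea, using the simplest hyperbolic torus automorphism (Arnold's cat map) rather than the large-matrix, higher-dimensional operators used in practice; the seeds and modulus are illustrative assumptions only:

```python
def cat_map_stream(x, y, n, m=2**31 - 1):
    """Pseudo-random stream from the simplest hyperbolic (Anosov-type)
    torus automorphism: Arnold's cat map (x, y) -> (2x + y, x + y) mod m
    on the discrete 2-torus. Production C-system generators use much
    larger matrices on higher-dimensional tori; this sketch only
    illustrates how a hyperbolic map yields a number stream."""
    out = []
    for _ in range(n):
        x, y = (2 * x + y) % m, (x + y) % m
        out.append(x / m)  # map the torus coordinate to [0, 1)
    return out

stream = cat_map_stream(123456789, 987654321, 10_000)
```

The exponential instability of trajectories, the property the abstract emphasizes, is what decorrelates successive outputs of such a map.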
Minor, A V; Kaissling, K-E
2003-03-01
Olfactory receptor cells of the silkmoth Bombyx mori respond to single pheromone molecules with "elementary" electrical events that appear as discrete "bumps" a few milliseconds in duration, or bursts of bumps. As revealed by simulation, one bump may result from a series of random openings of one or several ion channels, producing an average inward membrane current of 1.5 pA. The distributions of durations of bumps and of gaps between bumps in a burst can be fitted by single exponentials with time constants of 10.2 ms and 40.5 ms, respectively. The distribution of burst durations is a sum of two exponentials; the number of bumps per burst obeyed a geometric distribution (mean 3.2 bumps per burst). Accordingly the elementary events could reflect transitions among three states of the pheromone receptor molecule: the vacant receptor (state 1), the pheromone-receptor complex (state 2), and the activated complex (state 3). The calculated rate constants of the transitions between states are k(21)=7.7 s(-1), k(23)=16.8 s(-1), and k(32)=98 s(-1).
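The three-state scheme implies a simple generative model for bursts, which can be sketched with the fitted constants above (a geometric number of bumps with mean 3.2, exponential bump and gap durations); treating the burst as an alternating sum of these intervals is a simplifying assumption:

```python
import random

def simulate_burst(rng, mean_bumps=3.2, tau_bump=10.2, tau_gap=40.5):
    """One burst from the three-state scheme: a geometric number of bumps
    (mean 3.2 per burst), each bump lasting Exp(10.2 ms), separated by
    Exp(40.5 ms) gaps. Returns the total burst duration in ms."""
    p = 1.0 / mean_bumps                 # success probability, so E[N] = 1/p
    n_bumps = 1
    while rng.random() > p:              # geometric count of bumps, N >= 1
        n_bumps += 1
    duration = sum(rng.expovariate(1.0 / tau_bump) for _ in range(n_bumps))
    duration += sum(rng.expovariate(1.0 / tau_gap) for _ in range(n_bumps - 1))
    return duration

rng = random.Random(7)
bursts = [simulate_burst(rng) for _ in range(20_000)]
```

The expected burst duration under this model is 3.2 × 10.2 + 2.2 × 40.5 ≈ 122 ms, and the mixture of bump counts reproduces the two-exponential shape of the burst-duration distribution.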
Direct observation of molecular cooperativity near the glass transition.
Russell, E V; Israeloff, N E
2000-12-07
The increasingly sluggish response of a supercooled liquid as it nears its glass transition (for example, refrigerated honey) is prototypical of glassy dynamics found in proteins, neural networks and superconductors. The notion that molecules rearrange cooperatively has long been postulated to explain diverging relaxation times and broadened (non-exponential) response functions near the glass transition. Recently, cooperativity was observed and analysed in colloid glasses and in simulations of binary liquids well above the glass transition. But nanometre-scale studies of cooperativity at the molecular glass transition are lacking. Important issues to be resolved include the precise form of the cooperativity and its length scale, and whether the broadened response is intrinsic to individual cooperative regions, or arises only from heterogeneity in an ensemble of such regions. Here we describe direct observations of molecular cooperativity near the glass transition in polyvinylacetate (PVAc), using nanometre-scale probing of dielectric fluctuations. Molecular clusters switched spontaneously among two to four distinct configurations, producing random telegraph noise. Our analysis of these noise signals and their power spectra reveals that individual clusters exhibit transient dynamical heterogeneity and non-exponential kinetics.
Tilted hexagonal post arrays: DNA electrophoresis in anisotropic media
Chen, Zhen; Dorfman, Kevin D.
2013-01-01
Using Brownian dynamics simulations, we show that DNA electrophoresis in a hexagonal array of micron-sized posts changes qualitatively when the applied electric field vector is not coincident with the lattice vectors of the array. DNA electrophoresis in such “tilted” post arrays is superior to the standard “un-tilted” approach; while the time required to achieve a resolution of unity in a tilted post array is similar to an un-tilted array at low electric field strengths, this time (i) decreases exponentially with electric field strength in a tilted array and (ii) increases exponentially with electric field strength in an un-tilted array. Although the DNA dynamics in a post array are complicated, the electrophoretic mobility results indicate that the “free path”, i.e., the average distance of ballistic trajectories of point-sized particles launched from random positions in the unit cell until they intersect the next post, is a useful proxy for the detailed DNA trajectories. The analysis of the free path reveals a fundamental connection between the anisotropy of the medium and DNA transport therein that goes beyond simply improving the separation device. PMID:23868490
High activity and Levy searches: jellyfish can search the water column like fish.
Hays, Graeme C; Bastian, Thomas; Doyle, Thomas K; Fossette, Sabrina; Gleiss, Adrian C; Gravenor, Michael B; Hobson, Victoria J; Humphries, Nicolas E; Lilley, Martin K S; Pade, Nicolas G; Sims, David W
2012-02-07
Over-fishing may lead to a decrease in fish abundance and a proliferation of jellyfish. Active movements and prey search might be thought to provide a competitive advantage for fish, but here we use data-loggers to show that the frequently occurring coastal jellyfish (Rhizostoma octopus) does not simply passively drift to encounter prey. Jellyfish (327 days of data from 25 jellyfish with depth collected every 1 min) showed very dynamic vertical movements, with their integrated vertical movement averaging 619.2 m d(-1), more than 60 times the water depth where they were tagged. The majority of movement patterns were best approximated by exponential models describing normal random walks. However, jellyfish also showed switching behaviour from exponential patterns to patterns best fitted by a truncated Lévy distribution with exponents (mean μ=1.96, range 1.2-2.9) close to the theoretical optimum for searching for sparse prey (μopt≈2.0). Complex movements in these 'simple' animals may help jellyfish to compete effectively with fish for plankton prey, which may enhance their ability to increase in dominance in perturbed ocean systems.
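A truncated power-law (Lévy-like) step length with exponent μ, as fitted above, can be drawn by inverse-transform sampling. The formula is valid for μ ≠ 1; the bounds and exponent below are illustrative assumptions:

```python
import random

def truncated_levy_step(a, b, mu, rng):
    """Inverse-transform sample from a truncated power law p(l) ~ l**(-mu)
    on [a, b] (valid for mu != 1). mu ~ 2 is the theoretical optimum for
    sparse-prey searches mentioned above; a and b are illustrative."""
    u = rng.random()
    e = 1.0 - mu
    return (a**e - u * (a**e - b**e)) ** (1.0 / e)

rng = random.Random(11)
steps = [truncated_levy_step(1.0, 1000.0, 2.0, rng) for _ in range(50_000)]
```

Replacing this draw with an exponential one recovers the normal-random-walk mode the jellyfish switch between.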
Reconfiguration and Search of Social Networks
Zhang, Lianming; Peng, Aoyuan
2013-01-01
Social networks tend to exhibit some topological characteristics different from regular networks and random networks, such as a shorter average path length and a higher clustering coefficient, and the node degree of the majority of social networks obeys an exponential distribution. Based on the topological characteristics of real social networks, a new network model suited to portraying the structure of social networks was proposed, and the characteristic parameters of the model were calculated. To find the relationship between two people in a social network, a hybrid search strategy based on k-walker random walks and high-degree seeking was proposed, using the local information of the social network and a parallel mechanism. Simulation results show that the strategy can significantly reduce the average number of search steps, and thus effectively improve search speed and efficiency. PMID:24574861
Effects of random aspects of cutting tool wear on surface roughness and tool life
NASA Astrophysics Data System (ADS)
Nabil, Ben Fredj; Mabrouk, Mohamed
2006-10-01
The effects of random aspects of cutting tool flank wear on surface roughness and on tool lifetime, when turning AISI 1045 carbon steel, were studied in this investigation. It was found that the standard deviations of tool flank wear and surface roughness increase exponentially with cutting time. Under cutting conditions corresponding to finishing operations, no significant differences were found between the values of the capability index Cp calculated at the steady-state region of tool flank wear using the best-fit method, the Box-Cox transformation, or the assumption that the surface roughness data are normally distributed. Hence, a method could be established to set cutting tool lifetime so that it simultaneously respects the desired average surface roughness and the required capability index.
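The capability index referred to above is conventionally Cp = (USL − LSL)/(6σ), the width of the tolerance band relative to the natural process spread; a one-line helper, with illustrative tolerance limits:

```python
def capability_index(usl, lsl, sigma):
    """Process capability index Cp = (USL - LSL) / (6*sigma): the width of
    the tolerance band relative to the natural process spread. The
    roughness tolerance limits used below are illustrative, not from the
    study."""
    return (usl - lsl) / (6.0 * sigma)

# e.g. a roughness tolerance band of 0.8-3.2 um with process sigma = 0.3 um
cp = capability_index(3.2, 0.8, 0.3)
```

Because the roughness standard deviation grows exponentially with cutting time, Cp falls as the tool wears, which is what ties the capability requirement to a tool lifetime.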
Concentration and variability of ice nuclei in the subtropical maritime boundary layer
NASA Astrophysics Data System (ADS)
Welti, André; Müller, Konrad; Fleming, Zoë L.; Stratmann, Frank
2018-04-01
Measurements of the concentration and variability of ice nucleating particles in the subtropical maritime boundary layer are reported. Filter samples collected in Cabo Verde over the period 2009-2013 are analyzed with a drop freezing experiment with sensitivity to detect the few rare ice nuclei active at low supercooling. The data set is augmented with continuous flow diffusion chamber measurements at temperatures below -24 °C from a 2-month field campaign in Cabo Verde in 2016. The data set is used to address the following questions: what are typical concentrations of ice nucleating particles active at a certain temperature? What affects their concentration and where are their sources? Concentration of ice nucleating particles is found to increase exponentially by 7 orders of magnitude from -5 to -38 °C. Sample-to-sample variation in the steepness of the increase indicates that particles of different origin, with different ice nucleation properties (size, composition), contribute to the ice nuclei concentration at different temperatures. The concentration of ice nuclei active at a specific temperature varies over a range of up to 4 orders of magnitude. The frequency with which a certain ice nuclei concentration is measured within this range is found to follow a lognormal distribution, which can be explained by random dilution during transport. To investigate the geographic origin of ice nuclei, source attribution of air masses from dispersion modeling is used to classify the data into seven typical conditions. While no source could be attributed to the ice nuclei active at temperatures higher than -12 °C, concentrations at lower temperatures tend to be elevated in air masses originating from the Sahara.
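The lognormal-by-dilution argument can be illustrated directly: multiplying a source concentration by many independent random dilution factors makes the log-concentration a sum of i.i.d. terms, hence approximately normal by the central limit theorem. All numbers below are illustrative assumptions:

```python
import math
import random

rng = random.Random(3)

def diluted_concentration(c0=1e4, n_steps=50):
    """Concentration after transport: the source value c0 is multiplied by
    many independent random dilution factors (illustrative values). The
    log of the product is a sum of i.i.d. terms, so the result is
    approximately lognormal."""
    c = c0
    for _ in range(n_steps):
        c *= rng.uniform(0.5, 1.0)  # dilution factor for one transport step
    return c

samples = [diluted_concentration() for _ in range(5000)]
logs = [math.log(s) for s in samples]
```

A histogram of `logs` is close to Gaussian, i.e. the concentrations themselves follow the lognormal frequency distribution reported above.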
Zhang, Chuan; Chen, Hong-Song; Zhang, Wei; Nie, Yun-Peng; Ye, Ying-Ying; Wang, Ke-Lin
2014-06-01
Surface soil water-physical properties play a decisive role in the dynamics of deep soil water. Knowledge of their spatial variation is helpful in understanding the processes of rainfall infiltration and runoff generation, which will contribute to the reasonable utilization of soil water resources in mountainous areas. Based on a grid sampling scheme (10 m x 10 m) and geostatistical methods, this paper aimed to study the spatial variability of surface (0-10 cm) soil water content, soil bulk density and saturated hydraulic conductivity on a typical shrub slope (90 m x 120 m, projected length) in a Karst area of northwest Guangxi, southwest China. The results showed that the surface soil water content, bulk density and saturated hydraulic conductivity had different spatial dependence and spatial structure. The sample variogram of the soil water content was fitted well by a Gaussian model with a nugget effect, while those of the soil bulk density and saturated hydraulic conductivity were fitted well by exponential models with nugget effects. The variability of the soil water content showed strong spatial dependence, while the soil bulk density and saturated hydraulic conductivity showed moderate spatial dependence. The spatial ranges of the soil water content and saturated hydraulic conductivity were small, while that of the soil bulk density was much bigger. In general, the soil water content increased with the increase of altitude, while the opposite was true for the soil bulk density. However, the soil saturated hydraulic conductivity had a random distribution of a large number of small patches, showing high spatial heterogeneity. The soil water content negatively (P < 0.01) correlated with the bulk density and saturated hydraulic conductivity, while there was no significant correlation between the soil bulk density and saturated hydraulic conductivity.
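The two variogram models with nugget effect can be sketched as follows. Conventions on the range scaling differ between geostatistics packages; the factor 3 used here makes each model reach about 95% of the sill at the practical range, which is one common parameterization rather than necessarily the authors':

```python
import math

def exponential_variogram(h, nugget, sill, a):
    """Exponential variogram with nugget effect (one common form; the
    factor 3 makes the model reach ~95% of the sill at the practical
    range a)."""
    return nugget + (sill - nugget) * (1.0 - math.exp(-3.0 * h / a))

def gaussian_variogram(h, nugget, sill, a):
    """Gaussian variogram with nugget effect, same range convention; it
    rises parabolically near h = 0, i.e. more smoothly than the
    exponential model."""
    return nugget + (sill - nugget) * (1.0 - math.exp(-3.0 * (h / a) ** 2))
```

The nugget-to-sill ratio of the fitted model is what classifies the spatial dependence as strong or moderate in the analysis above.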
Discrete sudden perturbation theory for inelastic scattering. I. Quantum and semiclassical treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cross, R.J.
1985-12-01
A double perturbation theory is constructed to treat rotationally and vibrationally inelastic scattering. It uses both the elastic scattering from the spherically averaged potential and the infinite-order sudden (IOS) approximation as the unperturbed solutions. First, a standard perturbation expansion is done to express the radial wave functions in terms of the elastic wave functions. The resulting coupled equations are transformed to the discrete-variable representation, where the IOS equations are diagonal. Then, the IOS solutions are removed from the equations, which are solved by an exponential perturbation approximation. The results for Ar+N2 are very much more accurate than the IOS and somewhat more accurate than a straight first-order exponential perturbation theory. The theory is then converted into a semiclassical, time-dependent form by using the WKB approximation. The result is an integral of the potential times a slowly oscillating factor over the classical trajectory. A method of interpolating the result is given so that the calculation is done at the average velocity for a given transition. With this procedure, the semiclassical version of the theory is more accurate than the quantum version and very much faster. Calculations on Ar+N2 show the theory to be much more accurate than the IOS approximation and the exponential time-dependent perturbation theory.
Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2008-01-01
Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…
Persistent random walk of cells involving anomalous effects and random death
NASA Astrophysics Data System (ADS)
Fedotov, Sergei; Tan, Abby; Zubarev, Andrey
2015-04-01
The purpose of this paper is to implement a random death process into a persistent random walk model which produces sub-ballistic superdiffusion (Lévy walk). We develop a stochastic two-velocity jump model of cell motility for which the switching rate depends upon the time which the cell has spent moving in one direction. It is assumed that the switching rate is a decreasing function of residence (running) time. This assumption leads to the power law for the velocity switching time distribution. This describes the anomalous persistence of cell motility: the longer the cell moves in one direction, the smaller the switching probability to another direction becomes. We derive master equations for the cell densities with the generalized switching terms involving the tempered fractional material derivatives. We show that the random death of cells has an important implication for the transport process through tempering of the superdiffusive process. In the long-time limit we write stationary master equations in terms of exponentially truncated fractional derivatives in which the rate of death plays the role of tempering of a Lévy jump distribution. We find the upper and lower bounds for the stationary profiles corresponding to the ballistic transport and diffusion with the death-rate-dependent diffusion coefficient. Monte Carlo simulations confirm these bounds.
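A minimal Monte Carlo sketch of the two-velocity jump model with power-law run times and random death (all parameters are invented; the paper's analysis proceeds via master equations with tempered fractional derivatives, not via this simulation): run times are Pareto distributed with tail exponent mu, so longer runs become ever more likely to continue, while a constant death rate removes walkers and tempers the superdiffusion.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_walkers(n=20_000, t_max=50.0, v=1.0, mu=1.5, t0=1.0, death_rate=0.05):
    """Two-velocity (+v/-v) jump model: run times are Pareto distributed with
    tail exponent mu (1 < mu < 2 gives the sub-ballistic Levy walk regime),
    and each cell dies at constant rate `death_rate`."""
    death_times = rng.exponential(1.0 / death_rate, size=n)
    positions = np.empty(n)
    for i in range(n):
        t, x = 0.0, 0.0
        direction = 1.0 if rng.random() < 0.5 else -1.0
        t_end = min(t_max, death_times[i])          # stop at death or horizon
        while t < t_end:
            # Pareto(mu) run time with minimum t0 via inverse-CDF sampling
            run = t0 * (1.0 - rng.random()) ** (-1.0 / mu)
            run = min(run, t_end - t)               # truncate the final run
            x += direction * v * run
            t += run
            direction = -direction                  # reverse after each run
        positions[i] = x
    return positions, death_times

pos, death = simulate_walkers()
survive_frac = np.mean(death > 50.0)  # should be near exp(-death_rate * t_max)
```

Ballistic transport bounds every trajectory by v * t_max, matching the upper bound discussed in the abstract; histograms of `pos` at several horizons would show the tempered superdiffusive spreading.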
Random field assessment of nanoscopic inhomogeneity of bone
Dong, X. Neil; Luo, Qing; Sparkman, Daniel M.; Millwater, Harry R.; Wang, Xiaodu
2010-01-01
Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as mean, standard deviation and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to represent the spatial uncertainty of the elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated using simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of elastic modulus in lamellae of bone. Indeed, generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic detail. PMID:20817128
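A Gaussian random field with an exponential covariance function can be sampled directly by Cholesky factorization of the covariance matrix. The 1-D sketch below (grid size, variance, and correlation length are illustrative, not the study's values) shows how the correlation length controls the spatial fluctuation scale:

```python
import numpy as np

rng = np.random.default_rng(1)

def exponential_cov_field(n=200, dx=1.0, sigma=1.0, corr_len=10.0):
    """Sample a 1-D Gaussian random field whose covariance between points a
    distance d apart is sigma**2 * exp(-d / corr_len)."""
    x = np.arange(n) * dx
    d = np.abs(x[:, None] - x[None, :])          # pairwise distances
    cov = sigma**2 * np.exp(-d / corr_len)       # exponential covariance
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    return x, L @ rng.standard_normal(n)         # correlated sample

x, field = exponential_cov_field()
```

A 2-D analogue of this construction (distances over a grid of pixel centers) would generate modulus maps of the kind compared against the AFM/nanoindentation measurements; Cholesky sampling scales as O(n^3), so spectral or circulant-embedding methods are preferred for large grids.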
Symmetry reduction and exact solutions of two higher-dimensional nonlinear evolution equations.
Gu, Yongyi; Qi, Jianming
2017-01-01
In this paper, symmetries and symmetry reduction of two higher-dimensional nonlinear evolution equations (NLEEs) are obtained by Lie group method. These NLEEs play an important role in nonlinear sciences. We derive exact solutions to these NLEEs via the [Formula: see text]-expansion method and complex method. Five types of explicit function solutions are constructed, which are rational, exponential, trigonometric, hyperbolic and elliptic function solutions of the variables in the considered equations.
Third All-Union Symposium on Wave Diffraction.
1982-08-02
…the Half-Plane of Waves Formed on the Surface of a Liquid and at the Interface in a Layered Liquid by a Periodically Acting Source… in the majority of cases is of basic practical interest. For this method of integration the contour is displaced into the lower half-plane Im x < 0 and the residues are computed… and if f(x) decreases exponentially, then u(x, p) continues as a meromorphic function of the variable p into the half-plane Re p > -b.
Variability of the Magnetic Field Power Spectrum in the Solar Wind at Electron Scales
NASA Astrophysics Data System (ADS)
Roberts, Owen Wyn; Alexandrova, O.; Kajdič, P.; Turc, L.; Perrone, D.; Escoubet, C. P.; Walsh, A.
2017-12-01
At electron scales, the power spectrum of solar-wind magnetic fluctuations can be highly variable, and the mechanisms by which magnetic energy is dissipated into the various particle species are under debate. In this paper, we investigate data from the Cluster mission’s STAFF Search Coil magnetometer when the level of turbulence is sufficiently high that the morphology of the power spectrum at electron scales can be investigated. The Cluster spacecraft sample a disturbed interval of plasma where two streams of solar wind interact. Several discontinuities (coherent structures) are seen in the large-scale magnetic field, while at small scales several intermittent bursts of wave activity (whistler waves) are present. Several different morphologies of the power spectrum can be identified: (1) two power laws separated by a break, (2) an exponential cutoff near the Taylor-shifted electron scales, and (3) strong spectral knees at the Taylor-shifted electron scales. These different morphologies are investigated by using wavelet coherence, showing that, in this interval, a clear break and strong spectral knees are features associated with sporadic quasi-parallel propagating whistler waves, even for short times. On the other hand, when no signatures of whistler waves at ∼0.1-0.2 f_ce are present, a clear break is difficult to find and the spectrum is often more characteristic of a power law with an exponential cutoff.
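The "power law with an exponential cutoff" morphology can be fitted by ordinary least squares in log space, since ln P is linear in the regressors (1, ln f, f). The sketch below uses synthetic data with invented parameters, not Cluster measurements; the functional form P(f) = A f^(-alpha) exp(-f/f_d) is the one commonly assumed for such spectra:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic spectrum: power law with an exponential cutoff (values invented)
f = np.logspace(0.0, 2.0, 60)                      # frequency grid
A_true, alpha_true, fd_true = 1e3, 2.8, 40.0
P = A_true * f**(-alpha_true) * np.exp(-f / fd_true)
P *= np.exp(0.05 * rng.standard_normal(f.size))    # 5% multiplicative noise

# ln P = ln A - alpha * ln f - f/f_d is linear in (1, ln f, f),
# so all three parameters follow from one least-squares solve:
X = np.column_stack([np.ones_like(f), np.log(f), f])
coef, *_ = np.linalg.lstsq(X, np.log(P), rcond=None)
A_fit, alpha_fit, fd_fit = np.exp(coef[0]), -coef[1], -1.0 / coef[2]
```

A clear break between two power laws would instead show up as a systematic residual in this fit, which is one way to discriminate the morphologies numerically.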
First-order analytic propagation of satellites in the exponential atmosphere of an oblate planet
NASA Astrophysics Data System (ADS)
Martinusi, Vladimir; Dell'Elce, Lamberto; Kerschen, Gaëtan
2017-04-01
The paper offers the fully analytic solution to the motion of a satellite orbiting under the influence of the two major perturbations, due to the oblateness and the atmospheric drag. The solution is presented in a time-explicit form and takes into account an exponential distribution of the atmospheric density, an assumption that is reasonably close to reality. The approach involves two essential steps. The first concerns a new approximate mathematical model that admits a closed-form solution with respect to a set of new variables. The second is the determination of an infinitesimal contact transformation that allows one to navigate between the new and the original variables. This contact transformation is obtained in exact form, and a Taylor series approximation is then proposed in order to make all the computations explicit. The aforementioned transformation accommodates both perturbations, improving the accuracy of the orbit predictions by one order of magnitude with respect to the case when the atmospheric drag is absent from the transformation. Numerical simulations are performed for a low Earth orbit starting at an altitude of 350 km, and they show that the incorporation of drag terms into the contact transformation reduces the error in the position vector by a factor of 7. The proposed method aims at improving the accuracy of analytic orbit propagation and transforming it into a viable alternative to the computationally intensive numerical methods.
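The exponential density assumption is simply rho(h) = rho0 * exp(-h/H). A minimal sketch (the single-layer sea-level density and mean scale height are illustrative; operational models vary H with altitude), together with the standard drag-deceleration magnitude:

```python
import numpy as np

def atmospheric_density(h_km, rho0=1.225, scale_height_km=8.5):
    """Exponential atmosphere: rho(h) = rho0 * exp(-h / H), with rho0 in
    kg/m^3 and h, H in km (illustrative single-layer values)."""
    return rho0 * np.exp(-h_km / scale_height_km)

def drag_deceleration(rho, v, cd=2.2, area_m2=1.0, mass_kg=100.0):
    """Magnitude of the drag deceleration, a = 0.5 * rho * v**2 * Cd * A / m,
    with rho in kg/m^3 and v in m/s (spacecraft properties illustrative)."""
    return 0.5 * rho * v**2 * cd * area_m2 / mass_kg
```

The closed-form integrability exploited in the paper rests on exactly this property: density depends on altitude only through a single exponential, so drag terms can be carried through the contact transformation analytically.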
2010-08-01
This study presents a methodology for computing stochastic sensitivities with respect to the design variables.
Reward and uncertainty in exploration programs
NASA Technical Reports Server (NTRS)
Kaufman, G. M.; Bradley, P. G.
1971-01-01
A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
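The repeated-trial procedure described above can be sketched as a small Monte Carlo (all distributions and figures below are invented for illustration; as in the paper's simplifying assumption, the random variables are drawn independently):

```python
import numpy as np

rng = np.random.default_rng(3)

def drilling_program_trial(n_wells=20, p_success=0.15,
                           cost_per_well=5.0, price_per_unit=0.8):
    """One trial of an exploratory drilling program: draw the number of
    successes, then a size for each discovery, and return the net return.
    A lognormal field-size distribution is a common (assumed) choice."""
    n_hits = rng.binomial(n_wells, p_success)
    sizes = rng.lognormal(mean=2.0, sigma=1.0, size=n_hits)
    revenue = price_per_unit * sizes.sum()
    return revenue - cost_per_well * n_wells

# Repeating the trial many times yields a histogram that approximates the
# probability density of net economic return for the program.
returns = np.array([drilling_program_trial() for _ in range(10_000)])
```

Unit production cost per trial would be computed the same way (total cost divided by total production in the trial), and its histogram read off analogously.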
A single-loop optimization method for reliability analysis with second order uncertainty
NASA Astrophysics Data System (ADS)
Xie, Shaojun; Pan, Baisong; Du, Xiaoping
2015-08-01
Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
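The two-loop structure can be illustrated on a toy problem where the inner FORM step happens to be analytic (the limit state, the interval, and the brute-force outer search below are assumptions for illustration; the paper replaces the outer optimization with its KKT conditions to obtain a single loop):

```python
import math

def form_failure_prob(theta):
    """Inner reliability loop for the toy limit state g = theta - X1 - X2,
    with X1, X2 ~ N(0, 1) independent. FORM is exact for this linear case:
    beta = theta / sqrt(2) and Pf = Phi(-beta)."""
    beta = theta / math.sqrt(2.0)
    return 0.5 * math.erfc(beta / math.sqrt(2.0))   # Phi(-beta)

# Outer interval loop: the distribution parameter theta is only known to
# lie in [3.5, 4.5] (second order uncertainty). Brute force stands in for
# the interval-analysis optimization here.
thetas = [3.5 + 0.01 * i for i in range(101)]
pf = [form_failure_prob(th) for th in thetas]
pf_min, pf_max = min(pf), max(pf)                   # reliability bounds
```

The output is exactly the pair the abstract describes: maximum and minimum reliability (here reported as failure-probability bounds) over the interval parameters.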
Randomness Amplification under Minimal Fundamental Assumptions on the Devices
NASA Astrophysics Data System (ADS)
Ramanathan, Ravishankar; Brandão, Fernando G. S. L.; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Wojewódka, Hanna
2016-12-01
Recently, a physically realistic protocol amplifying the randomness of Santha-Vazirani sources into cryptographically secure random bits was proposed; however, for reasons of practical relevance, the crucial question remained open of whether this can be accomplished under the minimal conditions necessary for the task. Namely, is it possible to achieve randomness amplification using only two no-signaling components, in a situation where the violation of a Bell inequality only guarantees that some outcomes of the device for specific inputs exhibit randomness? Here, we solve this question and present a device-independent protocol for randomness amplification of Santha-Vazirani sources using a device consisting of two no-signaling components. We show that the protocol can amplify any such source that is not fully deterministic into a fully random source while tolerating a constant noise rate, and we prove the composable security of the protocol against general no-signaling adversaries. Our main innovation is the proof that even the partial randomness certified by the two-party Bell test [a single input-output pair (u*, x*) for which the conditional probability P(x*|u*) is bounded away from 1 for all no-signaling strategies that optimally violate the Bell inequality] can be used for amplification. We introduce the methodology of a partial tomographic procedure on the empirical statistics obtained in the Bell test that ensures that the outputs constitute a linear min-entropy source of randomness. As a technical novelty that may be of independent interest, we prove that the Santha-Vazirani source satisfies an exponential concentration property given by a recently discovered generalized Chernoff bound.
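A Santha-Vazirani source and its worst-case min-entropy can be sketched as follows (the maximally biased "repeat the previous bit" adversary strategy and the parameter values are illustrative; an eps-SV source only requires every conditional bias to stay within eps of 1/2):

```python
import math
import random

random.seed(5)

def sv_source(n_bits, eps):
    """Simulate an adversarial eps-Santha-Vazirani source: each bit may be
    biased by up to eps given the past; here the adversary maximally biases
    every bit toward repeating the previous one."""
    bits, prev = [], 0
    for _ in range(n_bits):
        p_one = 0.5 + eps if prev == 1 else 0.5 - eps
        prev = 1 if random.random() < p_one else 0
        bits.append(prev)
    return bits

def sv_min_entropy_per_bit(eps):
    """Worst-case min-entropy per bit of an eps-SV source: -log2(1/2 + eps).
    eps = 0 gives a uniform bit (1.0); eps = 1/2 allows determinism (0.0)."""
    return -math.log2(0.5 + eps)

bits = sv_source(50_000, 0.4)
repeat_frac = sum(b == a for a, b in zip(bits, bits[1:])) / (len(bits) - 1)
```

Amplification means converting such weakly random bits (min-entropy strictly between 0 and 1 per bit) into nearly uniform ones, which is what the device-independent protocol achieves.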
Spatial and temporal variability of interhemispheric transport times
NASA Astrophysics Data System (ADS)
Wu, Xiaokang; Yang, Huang; Waugh, Darryn W.; Orbe, Clara; Tilmes, Simone; Lamarque, Jean-Francois
2018-05-01
The seasonal and interannual variability of transport times from the northern midlatitude surface into the Southern Hemisphere is examined using simulations of three idealized age tracers: an ideal age tracer that yields the mean transit time from northern midlatitudes and two tracers with uniform 50- and 5-day decay. For all tracers the largest seasonal and interannual variability occurs near the surface within the tropics and is generally closely coupled to movement of the Intertropical Convergence Zone (ITCZ). There are, however, notable differences in variability between the different tracers. The largest seasonal and interannual variability in the mean age is generally confined to latitudes spanning the ITCZ, with very weak variability in the southern extratropics. In contrast, for tracers subject to spatially uniform exponential loss the peak variability tends to be south of the ITCZ, and there is a smaller contrast between tropical and extratropical variability. These differences in variability occur because the distribution of transit times from northern midlatitudes is very broad, and tracers with more rapid loss are more sensitive to changes in fast transit times than the mean age tracer. These simulations suggest that the seasonal-interannual variability in the southern extratropics of trace gases with predominantly NH midlatitude sources may differ depending on the gases' chemical lifetimes.
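The sensitivity of decaying tracers to fast transit times can be made concrete with a transit-time distribution (age spectrum) of the inverse-Gaussian form commonly used in tracer-transport studies; the 500-day mean age and width below are invented for illustration, not values from the simulations described above:

```python
import numpy as np

# Inverse-Gaussian age spectrum G(t) on a fine grid of transit times (days)
dt = 0.1
t = np.arange(dt, 8000.0, dt)
gamma, delta = 500.0, 500.0                   # mean age and width parameters
G = np.sqrt(gamma**3 / (4.0 * np.pi * delta**2 * t**3)) \
    * np.exp(-gamma * (t - gamma)**2 / (4.0 * delta**2 * t))
G /= G.sum() * dt                             # normalize on the finite grid

mean_age = (t * G).sum() * dt                 # what the ideal age tracer sees

def effective_age(tau_days):
    """Transit time 'seen' by a tracer with e-folding lifetime tau:
    chi = integral of G(t) * exp(-t / tau) dt, and t_eff = -tau * ln(chi).
    The faster the decay, the more weight the fast-transit tail of G gets."""
    chi = (G * np.exp(-t / tau_days)).sum() * dt
    return -tau_days * np.log(chi)
```

Because G is broad, the 5-day tracer's effective transit time is far shorter than the 50-day tracer's, which in turn is far shorter than the mean age, which is why the three tracers exhibit different variability patterns.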