NASA Astrophysics Data System (ADS)
Shintani, Masaru; Umeno, Ken
2018-04-01
The power law is ubiquitous in nature and in our societies, so it is important to investigate the characteristics of power laws in the current era of big data. In this paper we prove that the superposition of non-identical stochastic processes with power laws converges in density to a unique stable distribution. This property can be used to explain the universality of stable laws: the sums of the logarithmic returns of non-identical stock price fluctuations follow stable distributions.
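A minimal numerical sketch of this statement, assuming Student-t series (which have power-law tails) as stand-ins for the non-identical processes and using scipy's stable-law fit; the tail indices, scales, and sample sizes are illustrative choices, not the authors' setup:

```python
# Sketch only: superpose log-return series with non-identical power-law tails
# (Student-t with different tail indices; an illustrative choice) and fit a
# stable law to their sum.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_series, n_obs = 20, 2000

returns = np.vstack([
    stats.t.rvs(df=rng.uniform(1.2, 1.9),       # tail index < 2, i.e. power-law tails
                scale=rng.uniform(0.5, 2.0),
                size=n_obs, random_state=rng)
    for _ in range(n_series)
])
superposed = returns.sum(axis=0)

# Numerical MLE fit of a stable law (can be slow); parameters are (alpha, beta, loc, scale).
alpha, beta, loc, scale = stats.levy_stable.fit(superposed)
print(f"fitted stable index alpha = {alpha:.3f}, skewness beta = {beta:.3f}")
```

With heavy-tailed inputs of differing tail indices, the fitted stability index is typically governed by the heaviest tails present; the fit step may take a while because scipy uses numerical maximum likelihood.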
A Pearson Random Walk with Steps of Uniform Orientation and Dirichlet Distributed Lengths
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2010-08-01
A constrained diffusive random walk of n steps in ℝ^d and a random flight in ℝ^d, which are equivalent, were investigated independently in recent papers (J. Stat. Phys. 127:813, 2007; J. Theor. Probab. 20:769, 2007, and J. Stat. Phys. 131:1039, 2008). The n steps of the walk are independent and identically distributed random vectors of exponential length and uniform orientation. Conditioned on the sum of their lengths being equal to a given value l, closed-form expressions for the distribution of the endpoint of the walk were obtained for any n for d=1,2,4. Uniform distributions of the endpoint inside a ball of radius l were evidenced for a walk of three steps in 2D and of two steps in 4D. The previous walk is generalized by considering step lengths which have independent and identical gamma distributions with a shape parameter q>0. Given that the total walk length equals 1, the step lengths have a Dirichlet distribution whose parameters are all equal to q. The walk and the flight above correspond to q=1. Simple analytical expressions are obtained for any d≥2 and n≥2 for the endpoint distributions of two families of walks whose q are integers or half-integers depending solely on d. These endpoint distributions have a simple geometrical interpretation. For a two-step planar walk with q=1, this means that the distribution of the endpoint on a disc of radius 1 is identical to the distribution of the projection on the disc of a point M uniformly distributed over the surface of the 3D unit sphere. Five additional walks, with a uniform distribution of the endpoint in the inside of a ball, are found from known finite integrals of products of powers and Bessel functions of the first kind. They include four different walks in ℝ^3, two of two steps and two of three steps, and one walk of two steps in ℝ^4. Pearson-Liouville random walks, obtained by distributing the total lengths of the previous Pearson-Dirichlet walks according to some specified probability law, are finally discussed. Examples of unconstrained random walks, whose step lengths are gamma distributed, are more particularly considered.
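A small simulation sketch of the generalized (Pearson-Dirichlet) walk just described, assuming uniform directions drawn as normalized Gaussian vectors and jointly Dirichlet(q, ..., q) step lengths; the choices n = 2, d = 2, q = 1 are illustrative:

```python
# Sketch of a Pearson-Dirichlet walk: n steps, directions uniform on the unit
# sphere in d dimensions, step lengths jointly Dirichlet(q, ..., q) so that the
# total walk length is 1. Parameters below are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def pearson_dirichlet_endpoints(n_steps, dim, q, n_walks):
    dirs = rng.normal(size=(n_walks, n_steps, dim))
    dirs /= np.linalg.norm(dirs, axis=2, keepdims=True)        # uniform orientations
    lengths = rng.dirichlet(np.full(n_steps, q), size=n_walks)  # sum to 1 per walk
    return np.einsum("wsd,ws->wd", dirs, lengths)               # endpoint = sum of steps

end = pearson_dirichlet_endpoints(n_steps=2, dim=2, q=1.0, n_walks=100_000)
r = np.linalg.norm(end, axis=1)
print("mean endpoint distance:", r.mean())
```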
NASA Astrophysics Data System (ADS)
Wilkinson, Michael; Grant, John
2018-03-01
We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm, ɛ.
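A hedged sketch of the reset process described above, assuming a 2×2 ensemble with i.i.d. Gaussian entries and a reset amplitude eps0; neither choice is taken from the paper:

```python
# Sketch only: multiply i.i.d. random 2x2 matrices while the product norm stays
# below 1; once it exceeds 1, reset the product to eps0 * I and continue.
import numpy as np

rng = np.random.default_rng(2)
eps0 = 0.01                                  # assumed reset amplitude
prod = eps0 * np.eye(2)
norms = []

for _ in range(200_000):
    m = rng.normal(scale=0.8, size=(2, 2))   # i.i.d. Gaussian entries (assumption)
    prod = m @ prod
    norm = np.linalg.norm(prod, 2)
    if norm >= 1.0:
        prod = eps0 * np.eye(2)              # reset to a multiple of the identity
        norm = eps0
    norms.append(norm)

hist, edges = np.histogram(norms, bins=np.logspace(-4, 0, 60), density=True)
print("empirical density of the norm sampled on log-spaced bins")
```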
Wigner surmises and the two-dimensional homogeneous Poisson point process.
Sakhr, Jamal; Nieminen, John M
2006-04-01
We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2×2 complex non-Hermitian random matrices.
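As a numerical illustration of the flavour of these identities (a standard fact rather than the paper's new result), the rescaled nearest-neighbour spacing of a 2D homogeneous Poisson process already follows the GOE-type Wigner surmise p(s) = (π/2) s exp(−πs²/4); a sketch with arbitrary intensity and sample size:

```python
# Sketch: nearest-neighbour spacings of a 2D homogeneous Poisson process,
# rescaled to unit mean, versus the Wigner surmise p(s) = (pi/2) s exp(-pi s^2/4).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 1.0, size=(20_000, 2))   # Poisson points in the unit square

d, _ = cKDTree(pts).query(pts, k=2)             # k=1 is the point itself
s = d[:, 1] / d[:, 1].mean()                    # spacings rescaled to unit mean

hist, edges = np.histogram(s, bins=60, range=(0.0, 3.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
surmise = 0.5 * np.pi * centres * np.exp(-0.25 * np.pi * centres**2)
print("max deviation from the Wigner surmise:", np.abs(hist - surmise).max())
```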
Work distributions for random sudden quantum quenches
NASA Astrophysics Data System (ADS)
Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter
2017-05-01
The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite-dimensional Hilbert spaces we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
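A Monte Carlo sketch of the two-point-measurement work statistics for such a quench, assuming a fixed diagonal two-level H0, a final 2×2 GUE Hamiltonian, and an illustrative inverse temperature; the normalizations are generic rather than the paper's:

```python
# Sketch of two-point-measurement work statistics for a sudden quench from a
# fixed two-level H0 to a random 2x2 GUE Hamiltonian H1.
import numpy as np

rng = np.random.default_rng(4)
beta = 1.0                                   # inverse temperature (assumed)
e0 = np.array([-0.5, 0.5])                   # spectrum of the fixed, diagonal H0
p0 = np.exp(-beta * e0); p0 /= p0.sum()      # initial Gibbs populations

def gue_2x2():
    a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    return (a + a.conj().T) / 2              # Hermitian matrix with Gaussian entries

works = []
for _ in range(50_000):
    e1, v1 = np.linalg.eigh(gue_2x2())
    i = rng.choice(2, p=p0)                  # first energy measurement (H0 eigenbasis)
    j = rng.choice(2, p=np.abs(v1[i, :])**2) # second measurement, weights |<f_j|i>|^2
    works.append(e1[j] - e0[i])

print("mean work over the ensemble of quenches:", np.mean(works))
```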
Superslow relaxation in identical phase oscillators with random and frustrated interactions
NASA Astrophysics Data System (ADS)
Daido, H.
2018-04-01
This paper is concerned with the relaxation dynamics of a large population of identical phase oscillators, each of which interacts with all the others through random couplings whose parameters obey the same Gaussian distribution with the average equal to zero and are mutually independent. The results obtained by numerical simulation suggest that for the infinite-size system, the absolute value of Kuramoto's order parameter exhibits superslow relaxation, i.e., 1/ln t as time t increases. Moreover, the statistics on both the transient time T for the system to reach a fixed point and the absolute value of Kuramoto's order parameter at t = T are also presented together with their distribution densities over many realizations of the coupling parameters.
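A hedged simulation sketch in the spirit of the model described (zero-mean Gaussian random couplings between identical oscillators, simple Euler integration); the coupling normalization, system size, and step size are assumptions, not the paper's exact choices:

```python
# Sketch: identical phase oscillators with zero-mean Gaussian random couplings,
# tracking the absolute value of Kuramoto's order parameter over time.
import numpy as np

rng = np.random.default_rng(5)
N, dt, n_steps = 200, 0.05, 4000
K = rng.normal(0.0, 1.0, size=(N, N))        # random, generally frustrated couplings
theta = rng.uniform(0.0, 2*np.pi, size=N)

order = []
for _ in range(n_steps):
    diff = theta[None, :] - theta[:, None]   # theta_j - theta_i
    theta += dt * (K * np.sin(diff)).sum(axis=1) / N
    order.append(np.abs(np.exp(1j * theta).mean()))

print("|r| at start / end:", order[0], order[-1])
```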
Multivariate η-μ fading distribution with arbitrary correlation model
NASA Astrophysics Data System (ADS)
Ghareeb, Ibrahim; Atiani, Amani
2018-03-01
An extensive analysis for the multivariate η-μ distribution with arbitrary correlation is presented, where novel analytical expressions for the multivariate probability density function, cumulative distribution function and moment generating function (MGF) of arbitrarily correlated and not necessarily identically distributed η-μ power random variables are derived. Also, this paper provides an exact-form expression for the MGF of the instantaneous signal-to-noise ratio at the combiner output in a diversity reception system with maximal-ratio combining and post-detection equal-gain combining operating in slow, frequency-nonselective, arbitrarily correlated, not necessarily identically distributed η-μ fading channels. The average bit error probability of differentially detected quadrature phase shift keying signals with a post-detection diversity reception system over arbitrarily correlated η-μ fading channels with not necessarily identical fading parameters is determined by using the MGF-based approach. The effect of fading correlation between diversity branches, fading severity parameters and diversity level is studied.
A random matrix approach to credit risk.
Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas
2014-01-01
We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
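A toy Monte Carlo in the spirit of this result, comparing portfolio loss tails under a random correlation matrix (a normalized Wishart-type construction) with the zero-correlation case; portfolio size, default threshold, and ensemble parameters are illustrative assumptions:

```python
# Sketch: Merton-type portfolio losses with a randomly drawn correlation matrix
# versus identically zero correlations.
import numpy as np

rng = np.random.default_rng(6)
n_obligors, n_scen, threshold = 100, 20_000, -2.0

def portfolio_losses(corr):
    L = np.linalg.cholesky(corr + 1e-10 * np.eye(n_obligors))
    returns = rng.normal(size=(n_scen, n_obligors)) @ L.T
    return (returns < threshold).mean(axis=1)        # fraction of defaults per scenario

# Random correlation matrix: Wishart-type construction rescaled to unit diagonal.
W = rng.normal(size=(n_obligors, 3 * n_obligors))
C = W @ W.T
d = np.sqrt(np.diag(C))
C_random = C / np.outer(d, d)

loss_rand = portfolio_losses(C_random)
loss_zero = portfolio_losses(np.eye(n_obligors))
print("99.9% loss quantile, random vs zero correlations:",
      np.quantile(loss_rand, 0.999), np.quantile(loss_zero, 0.999))
```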
A Random Matrix Approach to Credit Risk
Guhr, Thomas
2014-01-01
We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided. PMID:24853864
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
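A hedged sketch of the underlying idea (not necessarily the paper's exact statistic): under the null density f, it is unlikely to observe draws where f is very small, so the minimum log-density of the sample can serve as a test statistic whose null distribution is calibrated by Monte Carlo:

```python
# Sketch of a density-based goodness-of-fit test complementing KS/Kuiper tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, n_null = 200, 5000
f = stats.norm(0.0, 1.0)                      # specified null density (illustrative)

def statistic(sample):
    return f.logpdf(sample).min()             # the most "surprising" draw

# Null distribution of the statistic under f.
null_stats = np.array([statistic(f.rvs(size=n, random_state=rng))
                       for _ in range(n_null)])

# Observed data: mostly null draws, plus a few draws from a low-density region.
data = np.concatenate([f.rvs(size=n - 3, random_state=rng),
                       rng.normal(5.0, 0.1, size=3)])
p_value = (null_stats <= statistic(data)).mean()
print("p-value of the density-based test:", p_value)
```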
System Lifetimes, The Memoryless Property, Euler's Constant, and Pi
ERIC Educational Resources Information Center
Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon
2013-01-01
A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…
Random trinomial tree models and vanilla options
NASA Astrophysics Data System (ADS)
Ganikhodjaev, Nasir; Bayram, Kamola
2013-09-01
In this paper we introduce and study a random trinomial model. The usual trinomial model is prescribed by a triple of numbers (u, d, m). We call the triple (u, d, m) an environment of the trinomial model. A triple (Un, Dn, Mn), where {Un}, {Dn} and {Mn} are sequences of independent, identically distributed random variables with 0 < Dn < 1 < Un and Mn = 1 for all n, is called a random environment, and a trinomial tree model with a random environment is called a random trinomial model. The random trinomial model is considered to produce more accurate results than the random binomial model or the usual trinomial model.
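A hedged Monte Carlo sketch of pricing a vanilla call in such a random environment: at each step (Un, 1, Dn) is drawn i.i.d., and one-step risk-neutral probabilities are pinned down by the martingale condition together with an assumed middle-move probability p_m; the ranges for Un and Dn, the rate, the maturity, and p_m are modelling choices, not the paper's:

```python
# Sketch: vanilla call under a random trinomial environment (maturity 1 assumed).
import numpy as np

rng = np.random.default_rng(8)
S0, K, r, n_steps, n_paths, p_m = 100.0, 100.0, 0.01, 50, 20_000, 1.0 / 3.0

payoffs = np.zeros(n_paths)
for k in range(n_paths):
    S = S0
    for _ in range(n_steps):
        U = 1.0 + rng.uniform(0.01, 0.05)      # random environment: U_n > 1
        D = 1.0 - rng.uniform(0.01, 0.05)      # 0 < D_n < 1, M_n = 1
        growth = np.exp(r / n_steps)
        # Martingale condition p_u*U + p_m*1 + p_d*D = growth with assumed p_m.
        p_u = (growth - p_m - D * (1.0 - p_m)) / (U - D)
        p_d = 1.0 - p_m - p_u
        S *= rng.choice([U, 1.0, D], p=[p_u, p_m, p_d])
    payoffs[k] = max(S - K, 0.0)

price = np.exp(-r) * payoffs.mean()
print("Monte Carlo call price under the random trinomial model:", price)
```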
2014-08-01
consensus algorithm called randomized gossip is more suitable [7, 8]. In asynchronous randomized gossip algorithms, pairs of neighboring nodes exchange... messages and perform updates in an asynchronous and unattended manner, and they also... The class of broadcast gossip algorithms [9, 10, 11, 12] are... dynamics [2] and asynchronous pairwise randomized gossip [7, 8], broadcast gossip algorithms do not require that nodes know the identities of their
Collisional evolution of rotating, non-identical particles. [in Saturn rings
NASA Technical Reports Server (NTRS)
Salo, H.
1987-01-01
Hameen-Anttila's (1984) theory of self-gravitating collisional particle disks is extended to include the effects of particle spin. Equations are derived for the coupled evolution of random velocities and spins, showing that friction and surface irregularity both reduce the local velocity dispersion and transfer significant amounts of random kinetic energy to rotational energy. Results for the equilibrium ratio of rotational energy to random kinetic energy are exact not only for identical nongravitating mass points, but also if finite size, self-gravitating forces, or size distribution are included. The model is applied to the dynamics of Saturn's rings, showing that the inclusion of rotation reduces the geometrical thickness of the layer of cm-sized particles to, at most, about one-half, with large particles being less affected.
Random Visitor: Defense against Identity Attacks in P2P Networks
NASA Astrophysics Data System (ADS)
Gu, Jabeom; Nah, Jaehoon; Kwon, Hyeokchan; Jang, Jonsoo; Park, Sehyun
Various advantages of cooperative peer-to-peer networks are strongly counterbalanced by the open nature of a distributed, serverless network. In such networks, it is relatively easy for an attacker to launch various attacks such as misrouting, corrupting, or dropping messages as a result of a successful identifier forgery. The impact of an identifier forgery is particularly severe because the whole network can be compromised by attacks such as Sybil or Eclipse. In this paper, we present an identifier authentication mechanism called random visitor, which uses one or more randomly selected peers as delegates of identity proof. Our scheme uses identity-based cryptography and identity ownership proof mechanisms collectively to create multiple, cryptographically protected indirect bindings between two peers, instantly when needed, through the delegates. Because of these bindings, an attacker cannot achieve an identifier forgery related attack against interacting peers without breaking the bindings. Therefore, our mechanism limits the possibility of identifier forgery attacks efficiently by disabling an attacker's ability to break the binding. The design rationale and framework details are presented. A security analysis shows that our scheme is strong enough against identifier related attacks and that the strength increases if there are many peers (more than several thousand) in the network.
Anderson localization for radial tree-like random quantum graphs
NASA Astrophysics Data System (ADS)
Hislop, Peter D.; Post, Olaf
We prove that certain random models associated with radial, tree-like, rooted quantum graphs exhibit Anderson localization at all energies. The two main examples are the random length model (RLM) and the random Kirchhoff model (RKM). In the RLM, the lengths of each generation of edges form a family of independent, identically distributed random variables (iid). For the RKM, the iid random variables are associated with each generation of vertices and moderate the current flow through the vertex. We consider extensions to various families of decorated graphs and prove stability of localization with respect to decoration. In particular, we prove Anderson localization for the random necklace model.
General Exact Solution to the Problem of the Probability Density for Sums of Random Variables
NASA Astrophysics Data System (ADS)
Tribelsky, Michael I.
2002-07-01
The exact explicit expression for the probability density p_N(x) for a sum of N random, arbitrarily correlated summands is obtained. The expression is valid for any number N and any distribution of the random summands. Most attention is paid to application of the developed approach to the case of independent and identically distributed summands. The obtained results reproduce all known exact solutions valid for the so-called stable distributions of the summands. It is also shown that if the distribution is not stable, the profile of p_N(x) may be divided into three parts, namely a core (small x), a tail (large x), and a crossover from the core to the tail (moderate x). The quantitative description of all three parts as well as that for the entire profile is obtained. A number of particular examples are considered in detail.
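A numerical companion to this problem (not the paper's analytic derivation): for i.i.d. summands the characteristic function of the sum is φ(k)^N, which can be inverted on a grid with an FFT; exponential summands are used here so the result can be checked against the exact gamma density:

```python
# Sketch: density of a sum of N i.i.d. summands via its characteristic function.
import numpy as np
from scipy import stats

N = 5                                    # number of summands (illustrative)
n_grid, x_max = 2**14, 60.0
dx = x_max / n_grid
x = np.arange(n_grid) * dx
k = 2.0 * np.pi * np.fft.fftfreq(n_grid, d=dx)

phi = 1.0 / (1.0 - 1j * k)               # CF of a unit-mean exponential variable
phi_sum = phi**N                         # CF of the N-fold sum

# p(x) = (1/2pi) * integral of phi_sum(k) exp(-ikx) dk, discretized on the grid.
pdf = np.real(np.fft.fft(phi_sum)) * (k[1] - k[0]) / (2.0 * np.pi)
pdf = np.clip(pdf, 0.0, None)

exact = stats.gamma.pdf(x, a=N)          # exact density of the sum
print("max abs error vs exact gamma pdf:", np.abs(pdf - exact).max())
```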
General exact solution to the problem of the probability density for sums of random variables.
Tribelsky, Michael I
2002-08-12
The exact explicit expression for the probability density p_N(x) for a sum of N random, arbitrarily correlated summands is obtained. The expression is valid for any number N and any distribution of the random summands. Most attention is paid to application of the developed approach to the case of independent and identically distributed summands. The obtained results reproduce all known exact solutions valid for the so-called stable distributions of the summands. It is also shown that if the distribution is not stable, the profile of p_N(x) may be divided into three parts, namely a core (small x), a tail (large x), and a crossover from the core to the tail (moderate x). The quantitative description of all three parts as well as that for the entire profile is obtained. A number of particular examples are considered in detail.
Discrete disorder models for many-body localization
NASA Astrophysics Data System (ADS)
Janarek, Jakub; Delande, Dominique; Zakrzewski, Jakub
2018-04-01
Using the exact diagonalization technique, we investigate the many-body localization phenomenon in the 1D Heisenberg chain, comparing several disorder models. In particular we consider a family of discrete distributions of disorder strengths and compare the results with the standard uniform distribution. Both statistical properties of energy levels and the long-time nonergodic behavior are discussed. The results for different discrete distributions are essentially identical to those obtained for the continuous distribution, provided the disorder strength is rescaled by the standard deviation of the random distribution. Only for the binary distribution are significant deviations observed.
Kumar, Sanjeev; Karmeshu
2018-04-01
A theoretical investigation is presented that characterizes the emerging sub-threshold membrane potential and inter-spike interval (ISI) distributions of an ensemble of IF neurons that group together and fire together. The squared-noise intensity σ² of the ensemble of neurons is treated as a random variable to account for the electrophysiological variations across a population of nearly identical neurons. Employing a superstatistical framework, both the ISI distribution and the sub-threshold membrane potential distribution of the neuronal ensemble are obtained in terms of a generalized K-distribution. The resulting distributions exhibit asymptotic behavior akin to the stretched exponential family. Extensive simulations of the underlying SDE with random σ² are carried out. The results are found to be in excellent agreement with the analytical results. The analysis has been extended to cover the case corresponding to independent random fluctuations in drift in addition to random squared-noise intensity. The novelty of the proposed analytical investigation for the ensemble of IF neurons is that it yields closed-form expressions of probability distributions in terms of the generalized K-distribution. Based on a record of spiking activity of thousands of neurons, the findings of the proposed model are validated. The squared-noise intensity σ² of identified neurons from the data is found to follow a gamma distribution. The proposed generalized K-distribution is found to be in excellent agreement with the empirically obtained ISI distribution of the neuronal ensemble. Copyright © 2018 Elsevier B.V. All rights reserved.
Arbabisarjou, Azizollah; Hajipour, Reza; Sadeghian, Mahdi
2014-08-15
"The correlation between justice and organizational citizenship behavior and organizational identity among the nurses", aimed to correlate different aspects of personal feelings and organizational identity in a population of nurses. The population included all nurses working at hospitals affiliated to administry of health, treatment and medical education in Shahre-Kord (Iran) 2009. A sample consisting of 168 nurses was randomly selected out of the population. The study adopted a descriptive-correlative method. The Organizational Justice Questionnaire (1998), the Organizational Citizenship Questionnaire, and Organizational Identity Questionnaire (1982) were used for gathering data. Data was analyzed through multiple regression analysis. The findings revealed that 4 dimensions of organizational citizenship behavior (altruism, civic virtue, conscientiousness, and self-development) are correlated with organizational identity (R² = 0.612); and loyalty and obedience are correlated with distributional justice (R² = 0.71). Also, loyalty, altruism, and obedience are correlated with procedural justice (R² = 0.69) and loyalty and self-development are correlated with distributional justice (R² = 0.89). A correlation was also detected between interactional justice and organizational identity (R² = 0.89). The findings of the study could serve to identify the factors contributing to the creation and recreation of organizational identity, citizenship behavior and justice among nurses, to promote the performance of the organization, and to achieve organizational goals.
Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B
2003-11-01
The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.
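A much-simplified Gibbs sampler for a two-component normal mixture with component-specific means and variances (no genetic or permanent environmental random effects), to illustrate the type of machinery involved; priors and starting values are generic illustrative choices:

```python
# Sketch: Gibbs sampler for a two-component ("healthy"/"diseased") normal mixture.
import numpy as np

rng = np.random.default_rng(9)

# Simulated somatic-cell-score-like data from two underlying health statuses.
n, p_diseased = 2000, 0.3
z_true = rng.random(n) < p_diseased
y = np.where(z_true, rng.normal(5.0, 1.5, n), rng.normal(2.0, 1.0, n))

mu, sigma2, pm = np.array([1.0, 4.0]), np.array([1.0, 1.0]), 0.5
for it in range(2000):
    # 1. Sample component labels given current parameters.
    like0 = (1 - pm) * np.exp(-0.5 * (y - mu[0])**2 / sigma2[0]) / np.sqrt(sigma2[0])
    like1 = pm * np.exp(-0.5 * (y - mu[1])**2 / sigma2[1]) / np.sqrt(sigma2[1])
    z = rng.random(n) < like1 / (like0 + like1)
    # 2. Sample the mixing proportion (Beta(1,1) prior).
    pm = rng.beta(1 + z.sum(), 1 + (~z).sum())
    # 3. Sample means and variances (flat mean prior, Jeffreys-type variance prior).
    for c, mask in enumerate([~z, z]):
        yk = y[mask]
        if len(yk) == 0:
            continue
        mu[c] = rng.normal(yk.mean(), np.sqrt(sigma2[c] / len(yk)))
        sigma2[c] = ((yk - mu[c])**2).sum() / rng.chisquare(len(yk))

print("posterior draw: P(diseased)=%.3f, means=%s" % (pm, np.round(mu, 2)))
```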
Active Spread-Spectrum Steganalysis for Hidden Data Extraction
2011-09-01
steganalysis. In particular, we aim to recover blindly secret data hidden in image hosts via (multi-signature) direct-sequence SS embedding [18]-[25]... access (CDMA) communication systems. Under the assumption that the embedded secret messages are independent identically distributed (i.i.d.) random
An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses
NASA Technical Reports Server (NTRS)
Lee, Man Hoi; Spergel, David N.
1990-01-01
The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.
Knowledge Discovery from Relations
ERIC Educational Resources Information Center
Guo, Zhen
2010-01-01
A basic and classical assumption in the machine learning research area is "randomness assumption" (also known as i.i.d assumption), which states that data are assumed to be independent and identically generated by some known or unknown distribution. This assumption, which is the foundation of most existing approaches in the literature, simplifies…
Large Deviations: Advanced Probability for Undergrads
ERIC Educational Resources Information Center
Rolls, David A.
2007-01-01
In the branch of probability called "large deviations," rates of convergence (e.g. of the sample mean) are considered. The theory makes use of the moment generating function. So, particularly for sums of independent and identically distributed random variables, the theory can be made accessible to senior undergraduates after a first course in…
Generalization of symmetric α-stable Lévy distributions for q >1
NASA Astrophysics Data System (ADS)
Umarov, Sabir; Tsallis, Constantino; Gell-Mann, Murray; Steinberg, Stanly
2010-03-01
The α-stable distributions introduced by Lévy play an important role in probabilistic theoretical studies and their various applications, e.g., in statistical physics, life sciences, and economics. In the present paper we study sequences of long-range dependent random variables whose distributions have asymptotic power-law decay, and which are called (q,α)-stable distributions. These sequences are generalizations of independent and identically distributed α-stable distributions and have not been previously studied. Long-range dependent (q,α)-stable distributions might arise in the description of anomalous processes in nonextensive statistical mechanics, cell biology, and finance. The parameter q controls dependence. If q = 1 then they are classical independent and identically distributed with α-stable Lévy distributions. In the present paper we establish basic properties of (q,α)-stable distributions and generalize the result of Umarov et al. [Milan J. Math. 76, 307 (2008)], where the particular case α = 2, q ∈ [1,3) was considered, to the whole range of stability and nonextensivity parameters α ∈ (0,2] and q ∈ [1,3), respectively. We also discuss possible further extensions of the results that we obtain and formulate some conjectures.
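For orientation, the q = 1 end of this family is the classical symmetric α-stable law, whose characteristic function in a standard parameterization (textbook material quoted here, not a result of the paper) is

```latex
\varphi(k) \;=\; \mathbb{E}\!\left[e^{ikX}\right] \;=\; \exp\!\left(-\sigma^{\alpha}\,|k|^{\alpha}\right),
\qquad 0<\alpha\le 2,
```

so that α = 2 recovers the Gaussian, while α < 2 gives the power-law tails P(|X| > x) ~ x^(−α) referred to above.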
Azizollah, Arbabisarjou; Hajipour, Reza; Mahdi, Sadeghian Sourki
2014-01-01
The present study, “The correlation between justice and organizational citizenship behavior and organizational identity among the nurses”, aimed to correlate different aspects of personal feelings and organizational identity in a population of nurses. The population included all nurses working at hospitals affiliated to the Ministry of Health, Treatment and Medical Education in Shahre-Kord (Iran) in 2009. A sample consisting of 168 nurses was randomly selected out of the population. The study adopted a descriptive-correlative method. The Organizational Justice Questionnaire (1998), the Organizational Citizenship Questionnaire, and the Organizational Identity Questionnaire (1982) were used for gathering data. Data were analyzed through multiple regression analysis. The findings revealed that 4 dimensions of organizational citizenship behavior (altruism, civic virtue, conscientiousness, and self-development) are correlated with organizational identity (R² = 0.612); and loyalty and obedience are correlated with distributional justice (R² = 0.71). Also, loyalty, altruism, and obedience are correlated with procedural justice (R² = 0.69) and loyalty and self-development are correlated with distributional justice (R² = 0.89). A correlation was also detected between interactional justice and organizational identity (R² = 0.89). The findings of the study could serve to identify the factors contributing to the creation and recreation of organizational identity, citizenship behavior and justice among nurses, to promote the performance of the organization, and to achieve organizational goals. PMID:25363122
NASA Technical Reports Server (NTRS)
Peters, C. (Principal Investigator)
1980-01-01
A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but have the same parameter set. In addition, it is shown that the consistent solution is an MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and the other in which the parameters in a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.
Synchronization of an ensemble of oscillators regulated by their spatial movement.
Sarkar, Sumantra; Parmananda, P
2010-12-01
Synchronization for a collection of oscillators residing in a finite two dimensional plane is explored. The coupling between any two oscillators in this array is unidirectional, viz., master-slave configuration. Initially the oscillators are distributed randomly in space and their autonomous time-periods follow a Gaussian distribution. The duty cycles of these oscillators, which work under an on-off scenario, are normally distributed as well. It is realized that random hopping of oscillators is a necessary condition for observing global synchronization in this ensemble of oscillators. Global synchronization in the context of the present work is defined as the state in which all the oscillators are rendered identical. Furthermore, there exists an optimal amplitude of random hopping for which the attainment of this global synchronization is the fastest. The present work is deemed to be of relevance to the synchronization phenomena exhibited by pulse coupled oscillators such as a collection of fireflies. © 2010 American Institute of Physics.
Collimator of multiple plates with axially aligned identical random arrays of apertures
NASA Technical Reports Server (NTRS)
Hoover, R. B.; Underwood, J. H. (Inventor)
1973-01-01
A collimator is disclosed for examining the spatial location of distant sources of radiation and for imaging by projection, small, near sources of radiation. The collimator consists of a plurality of plates, all of which are pierced with an identical random array of apertures. The plates are mounted perpendicular to a common axis, with like apertures on consecutive plates axially aligned so as to form radiation channels parallel to the common axis. For near sources, the collimator is interposed between the source and a radiation detector and is translated perpendicular to the common axis so as to project radiation traveling parallel to the common axis incident to the detector. For far sources the collimator is scanned by rotating it in elevation and azimuth with a detector to determine the angular distribution of the radiation from the source.
Composition, morphology, and growth of clusters in a gas of particles with random interactions
NASA Astrophysics Data System (ADS)
Azizi, Itay; Rabin, Yitzhak
2018-03-01
We use Langevin dynamics simulations to study the growth kinetics and the steady-state properties of condensed clusters in a dilute two-dimensional system of particles that are all different (APD) in the sense that each particle is characterized by a randomly chosen interaction parameter. The growth exponents, the transition temperatures, and the steady-state properties of the clusters and of the surrounding gas phase are obtained and compared with those of one-component systems. We investigate the fractionation phenomenon, i.e., how particles of different identities are distributed between the coexisting mother (gas) and daughter (clusters) phases. We study the local organization of particles inside clusters, according to their identity—neighbourhood identity ordering (NIO)—and compare the results with those of previous studies of NIO in dense APD systems.
Stability, performance and sensitivity analysis of I.I.D. jump linear systems
NASA Astrophysics Data System (ADS)
Chávez Fuentes, Jorge R.; González, Oscar R.; Gray, W. Steven
2018-06-01
This paper presents a symmetric Kronecker product analysis of independent and identically distributed jump linear systems to develop new, lower-dimensional equations for the stability and performance analysis of this type of system than are currently available. In addition, new closed-form expressions characterising multi-parameter relative sensitivity functions for performance metrics are introduced. The analysis technique is illustrated with a distributed fault-tolerant flight control example where the communication links are allowed to fail randomly.
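A hedged sketch of the standard (non-symmetric) Kronecker-product test, of which the paper's symmetric formulation is a lower-dimensional refinement: an i.i.d. jump linear system x_{k+1} = A_{θ_k} x_k is mean-square stable iff the spectral radius of Σ_i p_i (A_i ⊗ A_i) is below one; the example matrices and mode probabilities are made up:

```python
# Sketch: mean-square stability test for an i.i.d. jump linear system.
import numpy as np

A = [np.array([[0.5, 0.2], [0.0, 0.6]]),     # nominal mode (illustrative)
     np.array([[1.1, 0.0], [0.3, 0.9]])]     # degraded mode, e.g. a failed link
p = [0.9, 0.1]                               # i.i.d. mode probabilities

# Second-moment dynamics: vec(X_{k+1}) = [sum_i p_i (A_i kron A_i)] vec(X_k).
M = sum(pi * np.kron(Ai, Ai) for pi, Ai in zip(p, A))
rho = max(abs(np.linalg.eigvals(M)))
print("spectral radius of the second-moment operator:", rho)
print("mean-square stable:", rho < 1.0)
```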
Cluster-cluster correlations and constraints on the correlation hierarchy
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.; Gott, J. R., III
1988-01-01
The hypothesis that galaxies cluster around clusters at least as strongly as they cluster around galaxies imposes constraints on the hierarchy of correlation amplitudes in hierarchical clustering models. The distributions which saturate these constraints are the Rayleigh-Levy random walk fractals proposed by Mandelbrot; for these fractal distributions cluster-cluster correlations are all identically equal to galaxy-galaxy correlations. If correlation amplitudes exceed the constraints, as is observed, then cluster-cluster correlations must exceed galaxy-galaxy correlations, as is observed.
Imaging Through Random Discrete-Scatterer Dispersive Media
2015-08-27
to that of a conventional, continuous, linear-frequency-modulated chirped signal [3]. Chirped train signals are a particular realization of a class of... continuous chirp signals, characterized by linear frequency modulation [3], we assume the time instances t_n to be given by t_n = τ_g (1 − β_g n / (2 N_g))... kernel D_n(z) [9] by sinc_N(z) = (N + 1)^(−1) D_(N/2)(2πz/N). We use the elementary identity...
High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole
NASA Astrophysics Data System (ADS)
Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei
2018-01-01
Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst case scenario that the adversary launches the most powerful attacks against the quantum adversary. After considering statistical fluctuations and applying an 80 Gb × 45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10^{-5}. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.
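A toy sketch of Toeplitz-matrix hashing as a randomness extractor (a generic construction; the dimensions are tiny and illustrative, nothing like the 80 Gb × 45.6 Mb matrix used in the experiment):

```python
# Sketch: Toeplitz hashing of raw bits into a shorter, near-uniform bit string.
import numpy as np

rng = np.random.default_rng(10)
n_in, n_out = 4096, 1024                           # raw bits in, extracted bits out

raw_bits = rng.integers(0, 2, size=n_in)           # stand-in for raw Bell-test data
seed = rng.integers(0, 2, size=n_in + n_out - 1)   # public random seed

# Binary Toeplitz matrix: T[i, j] depends only on i - j.
i = np.arange(n_out)[:, None]
j = np.arange(n_in)[None, :]
T = seed[i - j + n_in - 1]
extracted = (T @ raw_bits) % 2
print("extracted", extracted.size, "bits from", raw_bits.size, "raw bits")
```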
Effect of randomness in logistic maps
NASA Astrophysics Data System (ADS)
Khaleque, Abdul; Sen, Parongama
2015-01-01
We study a random logistic map x_{t+1} = a_t x_t [1 - x_t] where the a_t are bounded (q_1 ≤ a_t ≤ q_2) random variables independently drawn from a distribution. x_t does not show any regular behavior in time. We find that x_t shows fully ergodic behavior when the maximum allowed value of a_t is 4. However
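A short sketch of the map just described, assuming a_t drawn uniformly from [q1, q2] (the paper allows general bounded distributions); the bounds and the initial condition are illustrative:

```python
# Sketch: iterate the random logistic map x_{t+1} = a_t x_t (1 - x_t).
import numpy as np

rng = np.random.default_rng(11)
q1, q2, n_steps = 3.5, 4.0, 10_000     # illustrative bounds with max a_t = 4
x = 0.3
trajectory = np.empty(n_steps)
for t in range(n_steps):
    a = rng.uniform(q1, q2)
    x = a * x * (1.0 - x)
    trajectory[t] = x

# Crude ergodicity indicator: fraction of the unit interval actually visited.
hist, _ = np.histogram(trajectory[1000:], bins=50, range=(0, 1), density=True)
print("fraction of visited bins:", (hist > 0).mean())
```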
A study of multibiometric traits of identical twins
NASA Astrophysics Data System (ADS)
Sun, Zhenan; Paulino, Alessandra A.; Feng, Jianjiang; Chai, Zhenhua; Tan, Tieniu; Jain, Anil K.
2010-04-01
The increase in twin births has created a requirement for biometric systems to accurately determine the identity of a person who has an identical twin. The discriminability of some of the identical twin biometric traits, such as fingerprints, iris, and palmprints, is supported by anatomy and the formation process of the biometric characteristic, which state they are different even in identical twins due to a number of random factors during the gestation period. For the first time, we collected multiple biometric traits (fingerprint, face, and iris) of 66 families of twins, and we performed unimodal and multimodal matching experiments to assess the ability of biometric systems in distinguishing identical twins. Our experiments show that unimodal finger biometric systems can distinguish two different persons who are not identical twins better than they can distinguish identical twins; this difference is much larger in the face biometric system and it is not significant in the iris biometric system. Multimodal biometric systems that combine different units of the same biometric modality (e.g. multiple fingerprints or left and right irises) show the best performance among all the unimodal and multimodal biometric systems, achieving an almost perfect separation between genuine and impostor distributions.
Probability distributions for Markov chain based quantum walks
NASA Astrophysics Data System (ADS)
Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.
2018-01-01
We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesaro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.
Does the central limit theorem always apply to phase noise? Some implications for radar problems
NASA Astrophysics Data System (ADS)
Gray, John E.; Addison, Stephen R.
2017-05-01
The phase noise problem or Rayleigh problem occurs in all aspects of radar. It is an effect that a radar engineer or physicist always has to take into account as part of a design or in an attempt to characterize the physics of a problem such as reverberation. Normally, the mathematical difficulties of phase noise characterization are avoided by assuming the phase noise probability distribution function (PDF) is uniformly distributed, and the Central Limit Theorem (CLT) is invoked to argue that the superposition of relatively few random components obeys the CLT and hence the superposition can be treated as a normal distribution. By formalizing the characterization of phase noise (see Gray and Alouani) for an individual random variable, the characteristic function (CF) of a sum of identically distributed random variables becomes the product of the individual CFs. This product of phase-noise CFs can be analyzed to understand the limitations of the CLT when applied to phase noise. We mirror Kolmogorov's original proof as discussed in Papoulis to show that the CLT can break down for receivers that gather limited amounts of data, as well as the circumstances under which it can fail for certain phase noise distributions. We then discuss the consequences of this for matched filter design as well as the implications for some physics problems.
High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole.
Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei
2018-01-05
Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst case scenario that the adversary launches the most powerful attacks against the quantum adversary. After considering statistical fluctuations and applying an 80 Gb×45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10^{-5}. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.
NASA Astrophysics Data System (ADS)
Braides, Andrea; Causin, Andrea; Piatnitski, Andrey; Solci, Margherita
2018-06-01
We consider randomly distributed mixtures of bonds of ferromagnetic and antiferromagnetic type in a two-dimensional square lattice with probability 1-p and p, respectively, according to an i.i.d. random variable. We study minimizers of the corresponding nearest-neighbour spin energy on large domains in Z^2. We prove that there exists p_0 such that for p ≤ p_0 such minimizers are characterized by a majority phase; i.e., they take identically the value 1 or −1 except for small disconnected sets. A deterministic analogue is also proved.
NASA Astrophysics Data System (ADS)
Braides, Andrea; Causin, Andrea; Piatnitski, Andrey; Solci, Margherita
2018-04-01
We consider randomly distributed mixtures of bonds of ferromagnetic and antiferromagnetic type in a two-dimensional square lattice with probability 1-p and p, respectively, according to an i.i.d. random variable. We study minimizers of the corresponding nearest-neighbour spin energy on large domains in Z^2. We prove that there exists p_0 such that for p ≤ p_0 such minimizers are characterized by a majority phase; i.e., they take identically the value 1 or −1 except for small disconnected sets. A deterministic analogue is also proved.
Antipersistent dynamics in kinetic models of wealth exchange
NASA Astrophysics Data System (ADS)
Goswami, Sanchari; Chatterjee, Arnab; Sen, Parongama
2011-11-01
We investigate the detailed dynamics of gains and losses made by agents in some kinetic models of wealth exchange. An earlier work suggested that a walk in an abstract gain-loss space can be conceived for the agents. For models in which agents do not save, or save with uniform saving propensity, the walk has diffusive behavior. For the case in which the saving propensity λ is distributed randomly (0≤λ<1), the resultant walk showed a ballistic nature (except at a particular value of λ*≈0.47). Here we consider several other features of the walk with random λ. While some macroscopic properties of this walk are comparable to a biased random walk, at microscopic level, there are gross differences. The difference turns out to be due to an antipersistent tendency toward making a gain (loss) immediately after making a loss (gain). This correlation is in fact present in kinetic models without saving or with uniform saving as well, such that the corresponding walks are not identical to ordinary random walks. In the distributed saving case, antipersistence occurs with a simultaneous overall bias.
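A hedged sketch of one kinetic exchange step with quenched, uniformly distributed saving propensities λ_i, together with a crude check of the antipersistence of an agent's consecutive gains and losses; system size, trade count, and the uniform λ distribution are illustrative choices:

```python
# Sketch: kinetic wealth exchange with distributed saving propensities, plus a
# gain/loss sign-correlation check for one of the two traders in each exchange.
import numpy as np

rng = np.random.default_rng(12)
N, n_trades = 500, 200_000
w = np.ones(N)                              # initial wealth
lam = rng.uniform(0.0, 1.0, size=N)         # quenched saving propensities in [0, 1)
last_sign = np.zeros(N)
persist = anti = 0

for _ in range(n_trades):
    i, j = rng.choice(N, size=2, replace=False)
    eps = rng.random()
    pool = (1 - lam[i]) * w[i] + (1 - lam[j]) * w[j]
    wi_new = lam[i] * w[i] + eps * pool          # total wealth is conserved
    wj_new = lam[j] * w[j] + (1 - eps) * pool
    sign_i = np.sign(wi_new - w[i])              # gain (+1) or loss (-1) for agent i
    if last_sign[i] != 0 and sign_i != 0:
        persist += last_sign[i] == sign_i
        anti += last_sign[i] != sign_i
    last_sign[i] = sign_i
    w[i], w[j] = wi_new, wj_new

print("antipersistent fraction of consecutive moves:", anti / (anti + persist))
```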
Upscaling of spectral induced polarization response using random tube networks
NASA Astrophysics Data System (ADS)
Maineult, Alexis; Revil, André; Camerlynck, Christian; Florsch, Nicolas; Titov, Konstantin
2017-05-01
In order to upscale the induced polarization (IP) response of porous media, from the pore scale to the sample scale, we implement a procedure to compute the macroscopic complex resistivity response of random tube networks. A network is made of a 2-D square-meshed grid of connected tubes, which obey a given tube radius distribution. In a simplified approach, the electrical impedance of each tube follows a local Pelton resistivity model, with identical resistivity, chargeability and Cole-Cole exponent values for all the tubes; only the time constant varies, as it depends on the radius of each tube and on a diffusion coefficient also identical for all the tubes. By solving the conservation law for the electrical charge, the macroscopic IP response of the network is obtained. We successfully fit the macroscopic complex resistivity with a Pelton resistivity model as well. Simulations on uncorrelated and correlated networks, for which the tube radius distribution is such that the decimal logarithm of the radius is normally distributed, show that the local and macroscopic model parameters are the same, except the Cole-Cole exponent: its macroscopic value diminishes with increasing heterogeneity (i.e. with increasing standard deviation of the radius distribution), compared to its local value. The methodology is also applied to six siliciclastic rock samples, for which the pore radius distributions from mercury porosimetry are available. These samples exhibit the same behaviour as the synthetic media, that is, the macroscopic Cole-Cole exponent is always lower than the local one. As a conclusion, the pore network method seems to be a promising tool for studying the upscaling of the IP response of porous media.
Multiple Scattering in Random Mechanical Systems and Diffusion Approximation
NASA Astrophysics Data System (ADS)
Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun
2013-10-01
This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probability operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated to parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h - I)/h converges for small h to a second-order elliptic differential operator on compactly supported functions and that the Markov chain process associated to P_h converges to a diffusion whose infinitesimal generator is this limit operator. Both P_h and the limit generator are self-adjoint (densely) defined on the space of square-integrable functions over the (lower) half-space, where η is a stationary measure. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with this infinitesimal generator respectively correspond to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.
MacWilliams Identity for M-Spotty Weight Enumerator
NASA Astrophysics Data System (ADS)
Suzuki, Kazuyoshi; Fujiwara, Eiji
M-spotty byte error control codes are very effective for correcting/detecting errors in semiconductor memory systems that employ recent high-density RAM chips with wide I/O data (e.g., 8, 16, or 32 bits). In this case, the width of the I/O data is one byte. A spotty byte error is defined as random t-bit errors within a byte of length b bits, where 1 ≤ t ≤ b. Then, an error is called an m-spotty byte error if at least one spotty byte error is present in a byte. M-spotty byte error control codes are characterized by the m-spotty distance, which includes the Hamming distance as a special case for t = 1 or t = b. The MacWilliams identity provides the relationship between the weight distribution of a code and that of its dual code. The present paper presents the MacWilliams identity for the m-spotty weight enumerator of m-spotty byte error control codes. In addition, the present paper clarifies that the indicated identity includes the MacWilliams identity for the Hamming weight enumerator as a special case.
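For reference, the special case recovered at t = 1 or t = b is the classical MacWilliams identity relating the Hamming weight enumerator of a linear code C of length n over GF(q) to that of its dual code C⊥ (standard coding-theory material, quoted here for orientation):

```latex
W_{C^{\perp}}(x,y) \;=\; \frac{1}{|C|}\, W_{C}\bigl(x+(q-1)\,y,\; x-y\bigr),
\qquad
W_{C}(x,y) \;=\; \sum_{c\in C} x^{\,n-\mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)} .
```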
Characterization of intermittency in renewal processes: Application to earthquakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji
2010-03-15
We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point in the piecewise linear intermittent map. Thus, we provide a framework to understand a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalog. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variables but that the conditional probability distribution functions in the tail obey the Weibull distribution.
An analysis of random projection for changeable and privacy-preserving biometric verification.
Wang, Yongjin; Plataniotis, Konstantinos N
2010-10-01
Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
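A minimal sketch of the transform analysed above: project feature vectors with an i.i.d. Gaussian random matrix and check approximate distance preservation between an enrolled template and a genuine query; the dimensions and the perturbation level are illustrative:

```python
# Sketch: random-projection template generation and distance preservation.
import numpy as np

rng = np.random.default_rng(13)
d_in, d_out = 2048, 256

x = rng.normal(size=d_in)                    # enrolled biometric feature vector
y = x + 0.1 * rng.normal(size=d_in)          # genuine query (small perturbation)

R = rng.normal(size=(d_out, d_in)) / np.sqrt(d_out)   # i.i.d. Gaussian RP matrix
tx, ty = R @ x, R @ y                        # changeable, privacy-protecting templates

print("original distance :", np.linalg.norm(x - y))
print("projected distance:", np.linalg.norm(tx - ty))
# Re-issuing a template after compromise amounts to drawing a fresh matrix R.
```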
Langhout, Regina Day; Kohfeldt, Danielle M; Ellison, Erin Rose
2011-12-01
The current study examines 16 Latina/o fifth grade children's desires for a decision-making structure within a youth participatory action research (yPAR) program. When given the choices of consensus, majority rule, authoritarian rule, delegation, and random choice models, children chose random choice. Procedural, distributive and emotional justice were heavily weighted in their reasoning around fairness and decision making. Many thought random choice offered the best alternative because it flattened power hierarchies so that each child would, at some point, have the power to make a decision. Additionally, children argued that the neutrality of random choice allowed them to sidestep interpersonal tensions. Implications include how social identities inform definitions of fairness and how yPAR programs should work with youth around how they will make decisions.
Stochastic analysis of particle movement over a dune bed
Lee, Baum K.; Jobson, Harvey E.
1977-01-01
Stochastic models are available that can be used to predict the transport and dispersion of bed-material sediment particles in an alluvial channel. These models are based on the proposition that the movement of a single bed-material sediment particle consists of a series of steps of random length separated by rest periods of random duration and, therefore, application of the models requires a knowledge of the probability distributions of the step lengths, the rest periods, the elevation of particle deposition, and the elevation of particle erosion. The procedure was tested by determining distributions from bed profiles formed in a large laboratory flume with a coarse sand as the bed material. The elevation of particle deposition and the elevation of particle erosion can be considered to be identically distributed, and their distribution can be described by either a 'truncated Gaussian' or a 'triangular' density function. The conditional probability distribution of the rest period given the elevation of particle deposition closely followed the two-parameter gamma distribution. The conditional probability distribution of the step length given the elevation of particle erosion and the elevation of particle deposition also closely followed the two-parameter gamma density function. For a given flow, the scale and shape parameters describing the gamma probability distributions can be expressed as functions of bed-elevation. (Woodard-USGS)
Non-Fickian dispersion of groundwater age
Engdahl, Nicholas B.; Ginn, Timothy R.; Fogg, Graham E.
2014-01-01
We expand the governing equation of groundwater age to account for non-Fickian dispersive fluxes using continuous random walks. Groundwater age is included as an additional (fifth) dimension on which the volumetric mass density of water is distributed and we follow the classical random walk derivation now in five dimensions. The general solution of the random walk recovers the previous conventional model of age when the low order moments of the transition density functions remain finite at their limits and describes non-Fickian age distributions when the transition densities diverge. Previously published transition densities are then used to show how the added dimension in age affects the governing differential equations. Depending on which transition densities diverge, the resulting models may be nonlocal in time, space, or age and can describe asymptotic or pre-asymptotic dispersion. A joint distribution function of time and age transitions is developed as a conditional probability and a natural result of this is that time and age must always have identical transition rate functions. This implies that a transition density defined for age can substitute for a density in time and this has implications for transport model parameter estimation. We present examples of simulated age distributions from a geologically based, heterogeneous domain that exhibit non-Fickian behavior and show that the non-Fickian model provides better descriptions of the distributions than the Fickian model. PMID:24976651
Controllability of Deterministic Networks with the Identical Degree Sequence
Ma, Xiujuan; Zhao, Haixing; Wang, Binghong
2015-01-01
Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on the network's topology. Liu, Barabási, and co-workers speculated that the degree distribution was one of the most important factors affecting controllability for arbitrary complex directed networks with random link weights. In this paper, we analysed the effect of the degree distribution on the controllability of deterministic networks that are unweighted and undirected. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analysed the controllability of two deterministic networks ((1, 3)-flower and (2, 2)-flower) by exact controllability theory in detail and give accurate results for the minimum number of driver nodes for the two networks. In simulation, we compare the controllability of (x,y)-flower networks. Our results show that the family of (x,y)-flower networks have the same degree sequence, but their controllability is totally different. So the degree distribution itself is not sufficient to characterize the controllability of deterministic networks that are unweighted and undirected. PMID:26020920
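A hedged sketch of the exact-controllability count used here: for an undirected, unweighted network the minimum number of driver nodes equals the largest eigenvalue multiplicity of the adjacency matrix; the small star graph below is only an illustration, not an (x,y)-flower:

```python
# Sketch: minimum driver nodes from the maximum eigenvalue multiplicity.
import numpy as np

# Adjacency matrix of a small example graph: a 4-node star.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)

eigvals = np.linalg.eigvalsh(A)
# Group numerically equal eigenvalues and take the largest multiplicity.
rounded = np.round(eigvals, decimals=8)
_, counts = np.unique(rounded, return_counts=True)
print("minimum number of driver nodes:", int(counts.max()))
```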
Wilson, William G.; Lundberg, Per
2004-01-01
Theoretical interest in the distributions of species abundances observed in ecological communities has focused recently on the results of models that assume all species are identical in their interactions with one another, and rely upon immigration and speciation to promote coexistence. Here we examine a one-trophic level system with generalized species interactions, including species-specific intraspecific and interspecific interaction strengths, and density-independent immigration from a regional species pool. Comparisons between results from numerical integrations and an approximate analytic calculation for random communities demonstrate good agreement, and both approaches yield abundance distributions of nearly arbitrary shape, including bimodality for intermediate immigration rates. PMID:15347523
The Identity Mapping Project: Demographic differences in patterns of distributed identity.
Gilbert, Richard L; Dionisio, John David N; Forney, Andrew; Dorin, Philip
2015-01-01
The advent of cloud computing and a multi-platform digital environment is giving rise to a new phase of human identity called "The Distributed Self." In this conception, aspects of the self are distributed into a variety of 2D and 3D digital personas with the capacity to reflect any number of combinations of now malleable personality traits. In this way, the source of human identity remains internal and embodied, but the expression or enactment of the self becomes increasingly external, disembodied, and distributed on demand. The Identity Mapping Project (IMP) is an interdisciplinary collaboration between psychology and computer science designed to empirically investigate the development of distributed forms of identity. Methodologically, it collects a large database of "identity maps" - computerized graphical representations of how active someone is online and how their identity is expressed and distributed across 7 core digital domains: email, blogs/personal websites, social networks, online forums, online dating sites, character-based digital games, and virtual worlds. The current paper reports on gender and age differences in online identity based on an initial database of distributed identity profiles.
Learning Bayesian Networks from Correlated Data
NASA Astrophysics Data System (ADS)
Bae, Harold; Monti, Stefano; Montano, Monty; Steinberg, Martin H.; Perls, Thomas T.; Sebastiani, Paola
2016-05-01
Bayesian networks are probabilistic models that represent complex distributions in a modular way and have become very popular in many fields. There are many methods to build Bayesian networks from a random sample of independent and identically distributed observations. However, many observational studies are designed using some form of clustered sampling that introduces correlations between observations within the same cluster; ignoring this correlation typically inflates the rate of false-positive associations. We describe a novel parameterization of Bayesian networks that uses random effects to model the correlation within sample units and can be used for structure and parameter learning from correlated data without inflating the Type I error rate. We compare different learning metrics using simulations and illustrate the method in two real examples: an analysis of genetic and non-genetic factors associated with human longevity from a family-based study, and an example of risk factors for complications of sickle cell anemia from a longitudinal study with repeated measures.
NASA Technical Reports Server (NTRS)
Over, Thomas, M.; Gupta, Vijay K.
1994-01-01
Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
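As a concrete illustration of the construction analyzed above, the sketch below builds a one-dimensional discrete multiplicative random cascade with a single-parameter, mean-one lognormal generator. The generator choice, branching number, and depth are illustrative assumptions, not the cascade fitted to the radar data.

```python
# A minimal sketch of a 1D discrete multiplicative random cascade with a
# single-parameter, mean-one lognormal generator (illustrative choices only).
import numpy as np

rng = np.random.default_rng(0)

def cascade(levels=10, branching=2, sigma=0.4):
    """Spread unit mass over branching**levels cells by repeated i.i.d.
    multiplication with a generator W having E[W] = 1 (mass is conserved
    only on average for this canonical cascade)."""
    mass = np.array([1.0])
    for _ in range(levels):
        w = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma,
                          size=(mass.size, branching))
        mass = (mass[:, None] * w).ravel()
    return mass

field = cascade()
# the scaling of spatial moments across aggregation scales is what gets
# compared with the radar-derived rainfall fields
print(field.sum(), field.max())
```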
NASA Astrophysics Data System (ADS)
Messica, A.
2016-10-01
The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex and complicated to implement. This paper presents a simple, easy-to-implement approximation method via modified moments matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computationally efficient approximation for a finite sum of lognormals of at least ten summands, and it naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and is also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
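For context, the sketch below shows the classical two-moment (Fenton-Wilkinson style) lognormal match that such methods refine; it is not the paper's modified-moments-plus-series procedure, and the weights and parameters in the check are made up.

```python
# Classical two-moment match: approximate S = sum_i w_i * exp(X_i), with
# independent X_i ~ N(mu_i, s_i^2), by a single lognormal having the same
# mean and variance. (The paper's method adds a series correction to this idea.)
import numpy as np

def lognormal_sum_params(w, mu, s):
    w, mu, s = map(np.asarray, (w, mu, s))
    m = np.sum(w * np.exp(mu + 0.5 * s**2))                       # E[S]
    v = np.sum(w**2 * (np.exp(s**2) - 1.0) * np.exp(2*mu + s**2))  # Var[S]
    sigma2 = np.log(1.0 + v / m**2)
    return np.log(m) - 0.5 * sigma2, np.sqrt(sigma2)

# quick Monte Carlo sanity check on ten equally weighted summands
rng = np.random.default_rng(1)
w, mu, s = np.ones(10) / 10, np.zeros(10), 0.5 * np.ones(10)
mu_s, sig_s = lognormal_sum_params(w, mu, s)
samples = (w * rng.lognormal(mu, s, size=(200_000, 10))).sum(axis=1)
print(samples.mean(), np.exp(mu_s + 0.5 * sig_s**2))  # should be close
```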
NASA Astrophysics Data System (ADS)
Massip, Florian; Arndt, Peter F.
2013-04-01
Recently, an enrichment of identical matching sequences has been found in many eukaryotic genomes. Their length distribution exhibits a power law tail raising the question of what evolutionary mechanism or functional constraints would be able to shape this distribution. Here we introduce a simple and evolutionarily neutral model, which involves only point mutations and segmental duplications, and produces the same statistical features as observed for genomic data. Further, we extend a mathematical model for random stick breaking to analytically show that the exponent of the power law tail is -3 and universal as it does not depend on the microscopic details of the model.
A Survey of Mathematical Programming in the Soviet Union (Bibliography),
1982-01-01
ASTAFYEV, N. N., "METHOD OF LINEARIZATION IN CONVEX PROGRAMMING", TR. 4-Y ZIMN. SHKOLY PO MAT. PROGRAMMIR. I SMEZHN. VOPR., DROGOBYCH, 72, VOL. 3, 54-73; GIMELFARB, G., V. MARCHENKO, V. RYBAK, "AUTOMATIC IDENTIFICATION OF IDENTICAL POINTS…", AKADEMIYA KOMMUNAL'NOGO KHOZYAYSTVA (MOSCOW), 72, NO. 93, 70-77; KOLOSOV, G. Y., "ON ANALYTICAL SOLUTION OF DESIGN PROBLEMS FOR DISTRIBUTED OPTIMAL CONTROL SYSTEMS SUBJECTED TO RANDOM…"
Hu, Bin; Yang, Guohua; Zhao, Weixing; Zhang, Yingjiao; Zhao, Jindong
2007-03-01
MreB is a bacterial actin that plays important roles in the determination of cell shape and chromosome partitioning in Escherichia coli and Caulobacter crescentus. In this study, the mreB gene from the filamentous cyanobacterium Anabaena sp. PCC 7120 was inactivated. Although the mreB null mutant showed a drastic change in cell shape, its growth rate, cell division and filament length were unaltered. Thus, MreB in Anabaena maintains cell shape but is not required for chromosome partitioning. The wild type and the mutant had eight and ten copies of chromosomes per cell, respectively. We demonstrated that the DNA content in the two daughter cells after cell division in both strains was not always identical. The ratios of DNA content in the two daughter cells had a Gaussian distribution with a standard deviation much larger than the value expected if the DNA content in the two daughter cells were identical, suggesting that chromosome partitioning is a random process. The multiple copies of chromosomes in cyanobacteria are likely required for random chromosome partitioning in cell division.
Computer simulation of backscattering spectra from paint
NASA Astrophysics Data System (ADS)
Mayer, M.; Silva, T. F.
2017-09-01
To study the role of lateral non-homogeneity in backscattering analysis of paintings, a simplified model of paint consisting of randomly distributed spherical pigment particles embedded in oil/binder has been developed. Backscattering spectra for lead white pigment particles in linseed oil have been calculated for 3 MeV H+ at a scattering angle of 165° for pigment volume concentrations ranging from 30 vol.% to 70 vol.% using the program STRUCTNRA. For identical pigment volume concentrations, the heights and shapes of the backscattering spectra depend on the diameter of the pigment particles: this is a structural ambiguity for identical mean atomic concentrations but different lateral arrangements of the material. Only for very small pigment particles are the resulting spectra close to spectra calculated assuming atomic mixing and identical concentrations of all elements. Generally, a good fit can be achieved when evaluating spectra from structured materials assuming atomic mixing of all elements and laterally homogeneous depth distributions; however, the derived depth profiles are inaccurate by a factor of up to 3. The depth range affected by this structural ambiguity extends from the surface to a depth of roughly 0.5-1 pigment-particle diameters. Accurate quantitative evaluation of backscattering spectra from paintings therefore requires taking the correct microstructure of the paint layer into account.
Feather, N T; Souter, Jacqueline
2002-08-01
This study investigated the responses of 181 participants (87 men, 94 women), from Adelaide, South Australia, to scenarios describing mandatory sentences for perpetrators of a property offense committed in the Northern Territory, Australia. Four scenarios that were randomly distributed varied ethnic identity (White Australian, Aboriginal Australian) and criminal history (first-time offender, third-time offender). Participants completed attitude measures for both mandatory sentencing and capital punishment, a right-wing authoritarianism scale, and a scale concerned with sentencing goals (retribution, deterrence, protection of society, and rehabilitation). Results showed strong effects of attitude toward mandatory sentencing on scenario responses for variables such as perceived responsibility, deservingness, leniency, seriousness, anger and pleasure, and weaker effects of ethnic identity and criminal history. Participants were generally more sympathetic when the offender was an Aboriginal Australian. Results of a multiple regression analysis showed that attitude toward mandatory sentence was predicted by right-wing authoritarianism and by sentencing goals relating to deterrence and the protection of society.
Record statistics of a strongly correlated time series: random walks and Lévy flights
NASA Astrophysics Data System (ADS)
Godrèche, Claude; Majumdar, Satya N.; Schehr, Grégory
2017-08-01
We review recent advances on the record statistics of strongly correlated time series, whose entries denote the positions of a random walk or a Lévy flight on a line. After a brief survey of the theory of records for independent and identically distributed random variables, we focus on random walks. During the last few years, it was indeed realized that random walks are a very useful ‘laboratory’ to test the effects of correlations on the record statistics. We start with the simple one-dimensional random walk with symmetric jumps (both continuous and discrete) and discuss in detail the statistics of the number of records, as well as of the ages of the records, i.e. the lapses of time between two successive record breaking events. Then we review the results that were obtained for a wide variety of random walk models, including random walks with a linear drift, continuous time random walks, constrained random walks (like the random walk bridge) and the case of multiple independent random walkers. Finally, we discuss further observables related to records, like the record increments, as well as some questions raised by physical applications of record statistics, like the effects of measurement error and noise.
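The short Monte Carlo sketch below counts records of a symmetric random walk and compares the result with the large-N asymptotic mean of roughly 2*sqrt(N/pi) for continuous symmetric jump distributions, which we quote here as a known result of the literature the review covers; the walk length, trial count, and Gaussian jumps are illustrative choices.

```python
# A minimal Monte Carlo sketch: number of records of a symmetric random walk,
# compared with the universal large-N asymptotic mean ~ 2*sqrt(N/pi) for
# continuous symmetric jump distributions (quoted as a known result).
import numpy as np

rng = np.random.default_rng(2)

def count_records(steps):
    x = np.concatenate(([0.0], np.cumsum(steps)))   # include the starting point
    prev_max = np.maximum.accumulate(x)[:-1]        # running maximum up to n-1
    # the initial position counts as the first record; step n is a record
    # if the walk strictly exceeds all previous positions
    return 1 + np.count_nonzero(x[1:] > prev_max)

N, trials = 10_000, 2_000
records = [count_records(rng.standard_normal(N)) for _ in range(trials)]
print(np.mean(records), 2 * np.sqrt(N / np.pi))     # should be close
```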
Randomized central limit theorems: A unified theory.
Eliazar, Iddo; Klafter, Joseph
2010-08-01
The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic-scaling all ensemble components by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic-scaling the ensemble components by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)-in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes-and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
Notes on SAW Tag Interrogation Techniques
NASA Technical Reports Server (NTRS)
Barton, Richard J.
2010-01-01
We consider the problem of interrogating a single SAW RFID tag with a known ID and known range in the presence of multiple interfering tags under the following assumptions: (1) The RF propagation environment is well approximated as a simple delay channel with geometric power-decay constant α ≥ 2. (2) The interfering tag IDs are unknown but well approximated as independent, identically distributed random samples from a probability distribution of tag ID waveforms with known second-order properties, and the tag of interest is drawn independently from the same distribution. (3) The ranges of the interfering tags are unknown but well approximated as independent, identically distributed realizations of a random variable ρ with a known probability distribution f_ρ, and the tag ranges are independent of the tag ID waveforms. In particular, we model the tag waveforms as random impulse responses from a wide-sense-stationary, uncorrelated-scattering (WSSUS) fading channel with known bandwidth and scattering function. A brief discussion of the properties of such channels and the notation used to describe them in this document is given in the Appendix. Under these assumptions, we derive the expression for the output signal-to-noise ratio (SNR) for an arbitrary combination of transmitted interrogation signal and linear receiver filter. Based on this expression, we derive the optimal interrogator configuration (i.e., transmitted signal/receiver filter combination) in the two extreme noise/interference regimes, i.e., noise-limited and interference-limited, under the additional assumption that the coherence bandwidth of the tags is much smaller than the total tag bandwidth. Finally, we evaluate the performance of both optimal interrogators over a broad range of operating scenarios using both numerical simulation based on the assumed model and Monte Carlo simulation based on a small sample of measured tag waveforms. The performance evaluation results not only provide guidelines for proper interrogator design, but also provide some insight on the validity of the assumed signal model. It should be noted that the assumption that the impulse response of the tag of interest is known precisely implies that the temperature and range of the tag are also known precisely, which is generally not the case in practice. However, analyzing interrogator performance under this simplifying assumption is much more straightforward and still provides a great deal of insight into the nature of the problem.
Olson, Gordon Lee
2016-12-06
Here, gray and multigroup radiation is transported through 3D media consisting of spheres randomly placed in a uniform background. Comparisons are made between using constant radii spheres and three different distributions of sphere radii. Because of the computational cost of 3D calculations, only the lowest angle order, n=1, is tested. If the mean chord length is held constant, using different radii distributions makes little difference. This is true for both gray and multigroup solutions. 3D transport solutions are compared to 2D and 1D solutions with the same mean chord lengths. 2D disk and 3D sphere media give solutions that are nearly identical while 1D slab solutions are fundamentally different.
NASA Astrophysics Data System (ADS)
Yeung, L.
2015-12-01
I present a mode of isotopic ordering that has purely combinatorial origins. It can be important when identical rare isotopes are paired by coincidence (e.g., they are neighbors on the same molecule), or when extrinsic factors govern the isotopic composition of the two atoms that share a chemical bond. By itself, combinatorial isotope pairing yields products with isotopes either randomly distributed or with a deficit relative to a random distribution of isotopes. These systematics arise because of an unconventional coupling between the formation of singly- and multiply-substituted isotopic moieties. In a random distribution, rare isotopes are symmetrically distributed: Single isotopic substitutions (e.g., H‒D and D‒H in H2) occur with equal probability, and double isotopic substitutions (e.g., D2) occur according to random chance. The absence of symmetry in a bond-making complex can yield unequal numbers of singly-substituted molecules (e.g., more H‒D than D‒H in H2), which is recorded in the product molecule as a deficit in doubly-substituted moieties and an "anticlumped" isotope distribution (i.e., Δn < 0). Enzymatic isotope pairing reactions, which can have site-specific isotopic fractionation factors and atom reservoirs, should express this class of combinatorial isotope effect. Chemical-kinetic isotope effects, which are related to the bond-forming transition state, arise independently and express second-order combinatorial effects. In general, both combinatorial and chemical factors are important for calculating and interpreting clumped-isotope signatures of individual reactions. In many reactions relevant to geochemical oxygen, carbon, and nitrogen cycling, combinatorial isotope pairing likely plays a strong role in the clumped isotope distribution of the products. These isotopic signatures, manifest as either directly bound isotope clumps or as features of a molecule's isotopic anatomy, could be exploited as tracers of biogeochemistry that can relate molecular mechanisms to signals observable at environmentally relevant spatial scales.
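The arithmetic below is an illustrative calculation, not taken from the abstract: it shows how pairing two sites with unequal rare-isotope fractions produces a deficit of doubly substituted molecules relative to a random (stochastic) distribution. The site fractions are hypothetical numbers chosen only to make the effect visible.

```python
# Illustrative only: combinatorial "anticlumping" when two sites carrying
# different rare-isotope fractions are paired. Site fractions are hypothetical.
p_a, p_b = 1.5e-4, 0.5e-4          # rare-isotope fraction at each site
p_bulk = 0.5 * (p_a + p_b)         # bulk (stochastic) fraction

expected_double = p_bulk**2        # doubly substituted, random distribution
actual_double = p_a * p_b          # doubly substituted, independent site pairing

# clumping anomaly in per mil; <= 0 by the arithmetic-geometric mean inequality
delta_permil = (actual_double / expected_double - 1.0) * 1000.0
print(delta_permil)                # negative: a deficit relative to random
```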
Teoh, Andrew B J; Goh, Alwyn; Ngo, David C L
2006-12-01
Biometric analysis for identity verification is becoming a widespread reality. Such implementations necessitate large-scale capture and storage of biometric data, which raises serious issues in terms of data privacy and (if such data is compromised) identity theft. These problems stem from the essential permanence of biometric data, which (unlike secret passwords or physical tokens) cannot be refreshed or reissued if compromised. Our previously presented biometric-hash framework prescribes the integration of external (password or token-derived) randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics (i.e., noninvertibility) comparable to cryptographic ciphers or hashes. The resultant BioHashes are hence cancellable, i.e., straightforwardly revoked and reissued (via refreshed password or reissued token) if compromised. BioHashing furthermore enhances recognition effectiveness, which is explained in this paper as arising from the Random Multispace Quantization (RMQ) of biometric and external random inputs.
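The sketch below is a minimal rendering of the random-projection-and-quantization idea behind BioHashing: project a biometric feature vector onto token-derived pseudo-random orthonormal directions and threshold to bits. The orthonormalization step, zero threshold, and bit length are illustrative simplifications, not the paper's exact Random Multispace Quantization procedure.

```python
# A minimal sketch of the BioHashing idea (simplified, not the exact RMQ method).
import numpy as np

def biohash(features, token_seed, n_bits=64, threshold=0.0):
    rng = np.random.default_rng(token_seed)        # token/password-derived seed
    R = rng.standard_normal((n_bits, features.size))
    Q, _ = np.linalg.qr(R.T)                       # orthonormalize the directions
    projections = Q.T @ features                   # inner products with each direction
    return (projections > threshold).astype(np.uint8)

feat = np.random.default_rng(0).standard_normal(256)  # stand-in biometric vector
code = biohash(feat, token_seed=12345)
# reissuing the token (a new seed) yields a different, revocable code
print(code[:16], biohash(feat, token_seed=99999)[:16])
```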
Sequential Computerized Mastery Tests--Three Simulation Studies
ERIC Educational Resources Information Center
Wiberg, Marie
2006-01-01
A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…
The Statistical Drake Equation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2010-12-01
We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case, where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billions with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
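The Monte Carlo sketch below illustrates the central idea only: the product of seven independent, arbitrarily (here uniformly) distributed positive factors is approximately lognormal, so the log of N is close to Gaussian. The factor ranges are placeholders and are not the values used in the paper or its MathCad file.

```python
# A minimal Monte Carlo sketch: the product of seven independent, uniformly
# distributed positive factors is approximately lognormal (factor ranges are
# placeholders, not the paper's values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_samples = 200_000

lows  = np.array([1e11, 0.2, 0.1, 0.05, 0.05, 0.05, 1e-8])
highs = np.array([5e11, 0.8, 0.9, 0.50, 0.50, 0.50, 1e-6])
factors = rng.uniform(lows, highs, size=(n_samples, 7))
N = factors.prod(axis=1)

# log N should be roughly Gaussian (CLT applied to the sum of the seven logs)
print(stats.skew(np.log(N)), stats.kurtosis(np.log(N)))   # both near zero
# lognormal mean reconstructed from the log-moments vs the sample mean
print(N.mean(), np.exp(np.log(N).mean() + 0.5 * np.log(N).var()))
```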
NASA Technical Reports Server (NTRS)
Hooke, F. H.
1972-01-01
Both the conventional and the reliability analyses for determining safe fatigue life are predicated on a population having a specified (usually log-normal) distribution of life to collapse under a fatigue test load. Under a random service load spectrum, random occurrences of loads larger than the fatigue test load may confront, and cause the collapse of, structures that are weakened, though not yet to the fatigue test load. These collapses are included in the reliability analysis but excluded in the conventional analysis. The theory of risk determination by each method is given, and several reasonably typical examples have been worked out, in which it transpires that if one excludes collapse through exceedance of the uncracked strength, the reliability and conventional analyses gave virtually identical probabilities of failure or survival.
Finite-time scaling at the Anderson transition for vibrations in solids
NASA Astrophysics Data System (ADS)
Beltukov, Y. M.; Skipetrov, S. E.
2017-11-01
A model in which a three-dimensional elastic medium is represented by a network of identical masses connected by springs of random strengths and allowed to vibrate only along a selected axis of the reference frame exhibits an Anderson localization transition. To study this transition, we assume that the dynamical matrix of the network is given by a product of a sparse random matrix with real, independent, Gaussian-distributed nonzero entries and its transpose. A finite-time scaling analysis of the system's response to an initial excitation allows us to estimate the critical parameters of the localization transition. The critical exponent is found to be ν =1.57 ±0.02 , in agreement with previous studies of the Anderson transition belonging to the three-dimensional orthogonal universality class.
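The sketch below builds the random-matrix construction described above, a dynamical matrix of the form D = A A^T with A a sparse random matrix of independent Gaussian nonzero entries, and extracts the vibrational frequencies as square roots of its eigenvalues. The system size and sparsity are illustrative and far smaller than in the paper, and no finite-time scaling analysis is attempted here.

```python
# A minimal sketch of the dynamical-matrix construction D = A A^T with A sparse
# and Gaussian; size and sparsity are illustrative, much smaller than the paper's.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(4)
n, density = 1000, 0.01

A = sp.random(n, n, density=density, random_state=rng,
              data_rvs=rng.standard_normal, format="csr")
D = (A @ A.T).toarray()                       # positive semi-definite by construction
eigvals = np.linalg.eigvalsh(D)
omega = np.sqrt(np.clip(eigvals, 0.0, None))  # vibrational frequencies

print(omega.min(), omega.max())
```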
ERIC Educational Resources Information Center
Balkin, Richard S.; Schlosser, Lewis Z.; Levitt, Dana Heller
2009-01-01
In this article, the authors present the results from a national study investigating the relationships between religious identity, sexism, homophobia, and multicultural competence. Participants were 111 randomly sampled counseling professionals and graduate students. The results indicated a relationship between religious identity and various…
The living Drake equation of the Tau Zero Foundation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2011-03-01
The living Drake equation is our statistical generalization of the Drake equation such that it can take into account any number of factors. This new result opens up the possibility to enrich the equation by inserting more new factors as long as the scientific learning increases. The adjective "Living" refers just to this continuous enrichment of the Drake equation and is the goal of a new research project that the Tau Zero Foundation has entrusted to this author as the discoverer of the statistical Drake equation described hereafter. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the lognormal distribution. Then, the mean value, standard deviation, mode, median and all the moments of this lognormal N can be derived from the means and standard deviations of the seven input random variables. In fact, the seven factors in the ordinary Drake equation now become seven independent positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) distance between any two neighbouring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, this distance now becomes a new random variable. We derive the relevant probability density function, apparently previously unknown (dubbed "Maccone distribution" by Paul Davies). Data Enrichment Principle. It should be noticed that any positive number of random variables in the statistical Drake equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle", and regard as the key to more profound, future results in Astrobiology and SETI.
Quantum-Classical Hybrid for Information Processing
NASA Technical Reports Server (NTRS)
Zak, Michail
2011-01-01
Based upon quantum-inspired entanglement in quantum-classical hybrids, a simple algorithm for instantaneous transmissions of non-intentional messages (chosen at random) to remote distances is proposed. The idea is to implement instantaneous transmission of conditional information on remote distances via a quantum-classical hybrid that preserves superposition of random solutions, while allowing one to measure its state variables using classical methods. Such a hybrid system reinforces the advantages, and minimizes the limitations, of both quantum and classical characteristics. Consider n observers, and assume that each of them gets a copy of the system and runs it separately. Although they run identical systems, the outcomes of even synchronized runs may be different because the solutions of these systems are random. However, the global constrain must be satisfied. Therefore, if the observer #1 (the sender) made a measurement of the acceleration v(sub 1) at t =T, then the receiver, by measuring the corresponding acceleration v(sub 1) at t =T, may get a wrong value because the accelerations are random, and only their ratios are deterministic. Obviously, the transmission of this knowledge is instantaneous as soon as the measurements have been performed. In addition to that, the distance between the observers is irrelevant because the x-coordinate does not enter the governing equations. However, the Shannon information transmitted is zero. None of the senders can control the outcomes of their measurements because they are random. The senders cannot transmit intentional messages. Nevertheless, based on the transmitted knowledge, they can coordinate their actions based on conditional information. If the observer #1 knows his own measurements, the measurements of the others can be fully determined. It is important to emphasize that the origin of entanglement of all the observers is the joint probability density that couples their actions. There is no centralized source, or a sender of the signal, because each receiver can become a sender as well. An observer receives a signal by performing certain measurements synchronized with the measurements of the others. This means that the signal is uniformly and simultaneously distributed over the observers in a decentralized way. The signals transmit no intentional information that would favor one agent over another. All the sequence of signals received by different observers are not only statistically equivalent, but are also point-by-point identical. It is important to assume that each agent knows that the other agent simultaneously receives the identical signals. The sequences of the signals are true random, so that no agent could predict the next step with the probability different from those described by the density. Under these quite general assumptions, the entangled observers-agents can perform non-trivial tasks that include transmission of conditional information from one agent to another, simple paradigm of cooperation, etc. The problem of behavior of intelligent agents correlated by identical random messages in a decentralized way has its own significance: it simulates evolutionary behavior of biological and social systems correlated only via simultaneous sensoring sequences of unexpected events.
Properties of networks with partially structured and partially random connectivity
NASA Astrophysics Data System (ADS)
Ahmadian, Yashar; Fumarola, Francesco; Miller, Kenneth D.
2015-01-01
Networks studied in many disciplines, including neuroscience and mathematical biology, have connectivity that may be stochastic about some underlying mean connectivity represented by a non-normal matrix. Furthermore, the stochasticity may not be independent and identically distributed (iid) across elements of the connectivity matrix. More generally, the problem of understanding the behavior of stochastic matrices with nontrivial mean structure and correlations arises in many settings. We address this by characterizing large random N×N matrices of the form A = M + LJR, where M, L, and R are arbitrary deterministic matrices and J is a random matrix of zero-mean iid elements. M can be non-normal, and L and R allow correlations that have separable dependence on row and column indices. We first provide a general formula for the eigenvalue density of A. For A non-normal, the eigenvalues do not suffice to specify the dynamics induced by A, so we also provide general formulas for the transient evolution of the magnitude of activity and frequency power spectrum in an N-dimensional linear dynamical system with a coupling matrix given by A. These quantities can also be thought of as characterizing the stability and the magnitude of the linear response of a nonlinear network to small perturbations about a fixed point. We derive these formulas and work them out analytically for some examples of M, L, and R motivated by neurobiological models. We also argue that the persistence as N → ∞ of a finite number of randomly distributed outlying eigenvalues outside the support of the eigenvalue density of A, as previously observed, arises in regions of the complex plane Ω where there are nonzero singular values of L^{-1}(z1 − M)R^{-1} (for z ∈ Ω) that vanish as N → ∞. When such singular values do not exist and L and R are equal to the identity, there is a correspondence in the normalized Frobenius norm (but not in the operator norm) between the support of the spectrum of A for J of norm σ and the σ-pseudospectrum of M.
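A purely numerical sketch of the ensemble A = M + LJR is given below: M is a simple non-normal deterministic part, L and R are diagonal matrices providing separable row and column scales, and J has iid zero-mean entries of variance 1/N. The specific choices of M, L, R, and σ are illustrative; the eigenvalues are computed numerically rather than from the paper's formulas.

```python
# A minimal numerical sketch of the ensemble A = M + L J R (parameter choices
# are illustrative, not from the paper).
import numpy as np

rng = np.random.default_rng(5)
N, sigma = 500, 0.5

M = np.diag(np.ones(N - 1), k=1)              # a simple non-normal mean (shift matrix)
L = np.diag(1.0 + 0.5 * rng.random(N))        # separable row scales
R = np.diag(1.0 + 0.5 * rng.random(N))        # separable column scales
J = rng.standard_normal((N, N)) / np.sqrt(N)  # iid part with variance 1/N

A = M + sigma * (L @ J @ R)
eigs = np.linalg.eigvals(A)
print(eigs.real.min(), eigs.real.max())       # extent of the spectrum in the complex plane
```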
Emergence of patterns in random processes
NASA Astrophysics Data System (ADS)
Newman, William I.; Turcotte, Donald L.; Malamud, Bruce D.
2012-08-01
Sixty years ago, it was observed that any independent and identically distributed (i.i.d.) random variable would produce a pattern of peak-to-peak sequences with, on average, three events per sequence. This outcome was employed to show that randomness could yield, as a null hypothesis for animal populations, an explanation for their apparent 3-year cycles. We show how we can explicitly obtain a universal distribution of the lengths of peak-to-peak sequences in time series and that this can be employed for long data sets as a test of their i.i.d. character. We illustrate the validity of our analysis utilizing the peak-to-peak statistics of a Gaussian white noise. We also consider the nearest-neighbor cluster statistics of point processes in time. If the time intervals are random, we show that cluster size statistics are identical to the peak-to-peak sequence statistics of time series. In order to study the influence of correlations in a time series, we determine the peak-to-peak sequence statistics for the Langevin equation of kinetic theory leading to Brownian motion. To test our methodology, we consider a variety of applications. Using a global catalog of earthquakes, we obtain the peak-to-peak statistics of earthquake magnitudes and the nearest neighbor interoccurrence time statistics. In both cases, we find good agreement with the i.i.d. theory. We also consider the interval statistics of the Old Faithful geyser in Yellowstone National Park. In this case, we find a significant deviation from the i.i.d. theory which we attribute to antipersistence. We consider the interval statistics using the AL index of geomagnetic substorms. We again find a significant deviation from i.i.d. behavior that we attribute to mild persistence. Finally, we examine the behavior of Standard and Poor's 500 stock index's daily returns from 1928-2011 and show that, while it is close to being i.i.d., there is, again, significant persistence. We expect that there will be many other applications of our methodology both to interoccurrence statistics and to time series.
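The short Monte Carlo check below verifies the classical i.i.d. result quoted at the start of the abstract: for Gaussian white noise the probability that an interior point is a local peak is 1/3, so the mean number of events per peak-to-peak sequence (the mean spacing between successive local maxima) is 3. The sample size is an arbitrary choice.

```python
# A minimal Monte Carlo check: mean spacing between successive local maxima of
# an i.i.d. sequence is 3 (peak probability 1/3).
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_normal(1_000_000)            # Gaussian white noise

interior = x[1:-1]
is_peak = (interior > x[:-2]) & (interior > x[2:])
peak_idx = np.flatnonzero(is_peak) + 1

print(np.mean(np.diff(peak_idx)))             # should be close to 3
print(is_peak.mean())                         # peak probability, close to 1/3
```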
Moghadam, Fatemeh Velayati; Majidinia, Sara; Chasteen, Joseph; Ghavamnasiri, Marjaneh
2013-01-01
Aim: The purpose of the present randomized clinical trial was to evaluate the color change, rebound effect and sensitivity of at-home bleaching with 15% carbamide peroxide and power bleaching with 38% hydrogen peroxide. Materials and Methods: Twenty subjects were randomized in a split-mouth design (at-home and power bleaching) involving the maxillary and mandibular anterior teeth (n = 20). Color was recorded before bleaching, immediately after bleaching, and at 2-week, 1-, 3- and 6-month intervals. Tooth sensitivity was recorded using a visual analog scale. The Mann-Whitney test was used to compare the two groups regarding bleaching effectiveness (ΔE1), rebound effect (ΔE2) and the color difference between the rebounded tooth color and unbleached teeth (ΔE3), while the Wilcoxon test compared ΔE within each group. The distribution of sensitivity was evaluated using the Chi-square test (α = 0.05). Results: There was no significant difference between groups regarding ΔE1 and ΔE3 (P > 0.05). ΔE2 likewise showed no significant difference between groups immediately after bleaching or at the 2-week, 1-month and 3-month follow-ups (P > 0.05). However, a significant difference in ΔE2 was found between the two methods after 6 months (P < 0.05, Mann-Whitney), with a higher degree of rebound effect for power bleaching. Within each group, there was no significant difference between ΔE1 and ΔE3 (P < 0.05, Wilcoxon). The distribution of sensitivity was identical for both techniques (P > 0.05). Conclusion: Both techniques resulted in identical tooth whitening and post-operative sensitivity, but faster color regression was found with power bleaching, even though the color regression toward baseline was the same in both groups after 6 months. PMID:24932113
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
δ-exceedance records and random adaptive walks
NASA Astrophysics Data System (ADS)
Park, Su-Chan; Krug, Joachim
2016-08-01
We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k-1} − δ_{k-1} with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase where the mean record value increases indefinitely and a stationary phase where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first-order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
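The simulation sketch below implements the δ-exceedance rule for exponential entries with a constant handicap, counting the fraction of entries that become records and tracking late-time record values for a few handicap values; the sequence length and the particular δ values are illustrative, and no claim is made here about where the critical handicap lies.

```python
# A minimal sketch of the delta-exceedance record process with constant handicap:
# an entry is a record if it exceeds the current record value minus delta, and it
# then becomes the new record value. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def record_stats(n=100_000, delta=0.5):
    x = rng.exponential(scale=1.0, size=n)    # i.i.d. exponential entries
    current, record_values = -np.inf, []
    for xi in x:
        if xi > current - delta:              # delta-exceedance condition
            current = xi
            record_values.append(xi)
    # fraction of entries that are records, and a late-time record value
    return len(record_values) / n, float(np.mean(record_values[-100:]))

for d in (0.2, 0.5, 1.0, 2.0):
    print(d, record_stats(delta=d))
```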
Personal homepage construction as an expression of social development.
Schmitt, Kelly L; Dayanim, Shoshana; Matthias, Stacey
2008-03-01
In 2 studies, the authors explored preadolescent and adolescent use of personal homepages in relation to mastery and identity formation. In Study 1, the authors attempted to determine the prevalence of personal homepage and online journal (blog) construction among a random sample (N=500) of preadolescents and adolescents. Adolescents were more likely to create personal homepages or blogs than preadolescents. Creation was related to feelings of mastery, expressions of identity, and a means to socialize. In Study 2, the authors explored the relationship of homepages to mastery and identity formation by content analysis of a random sample of homepages. Results suggest children use personal homepages to express and explore their forming identities.
Exact extreme-value statistics at mixed-order transitions.
Bar, Amir; Majumdar, Satya N; Schehr, Grégory; Mukamel, David
2016-05-01
We study extreme-value statistics for spatially extended models exhibiting mixed-order phase transitions (MOT). These are phase transitions that exhibit features common to both first-order (discontinuity of the order parameter) and second-order (diverging correlation length) transitions. We consider here the truncated inverse distance squared Ising model, which is a prototypical model exhibiting MOT, and study analytically the extreme-value statistics of the domain lengths. The lengths of the domains are identically distributed random variables, except for the global constraint that their sum equals the total system size L. In addition, the number of such domains is also a fluctuating variable, not a fixed one. In the paramagnetic phase, we show that the distribution of the largest domain length l_{max} converges, in the large-L limit, to a Gumbel distribution. However, at the critical point (for a certain range of parameters) and in the ferromagnetic phase, we show that the fluctuations of l_{max} are governed by novel distributions, which we compute exactly. Our main analytical results are verified by numerical simulations.
Authenticated Quantum Key Distribution with Collective Detection using Single Photons
NASA Astrophysics Data System (ADS)
Huang, Wei; Xu, Bing-Jie; Duan, Ji-Tong; Liu, Bin; Su, Qi; He, Yuan-Hang; Jia, Heng-Yue
2016-10-01
We present two authenticated quantum key distribution (AQKD) protocols by utilizing the idea of collective (eavesdropping) detection. One is a two-party AQKD protocol, the other is a multiparty AQKD protocol with star network topology. In these protocols, the classical channels need not be assumed to be authenticated and the single photons are used as the quantum information carriers. To achieve mutual identity authentication and establish a random key in each of the proposed protocols, only one participant should be capable of preparing and measuring single photons, and the main quantum ability that the rest of the participants should have is just performing certain unitary operations. Security analysis shows that these protocols are free from various kinds of attacks, especially the impersonation attack and the man-in-the-middle (MITM) attack.
Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport
NASA Astrophysics Data System (ADS)
Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike
2017-04-01
Models of soil-gas transport generally consider neither the direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study we could provide evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on gas diffusion data measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this, we used a two-dimensional model of soil-gas transport under compacted wheel tracks to simulate the soil-air oxygen distribution in the soil. The model was parametrized with data obtained from soil-ring measurements, using their central tendency and variability, and includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types were tested: (i) averaged values for wheel track and undisturbed soil; (ii) randomly distributed soil cells with normally distributed variability within the strata; and (iii) randomly distributed soil cells with uniformly distributed variability within the strata. Each of the three variability types was tested both with isotropic gas diffusivity and with a horizontally reduced gas diffusivity (constant reduction factor), yielding six models in total. As expected, the different parametrizations had an important influence on the simulated aeration state under the wheel tracks, with the strongest oxygen depletion occurring for uniformly distributed variability combined with anisotropy towards higher vertical diffusivity. The simple simulation approach clearly showed the relevance of anisotropy and spatial variability even for identical central-tendency measures of gas diffusivity. However, it did not yet consider the spatial dependency of the variability, which could aggravate the effects even further. To account for anisotropy and spatial variability in gas transport models, we recommend (a) measuring soil-gas transport parameters spatially explicitly, including different directions, and (b) using random-field stochastic models to assess the possible effects for gas-exchange models.
Rebolo-Ifrán, Natalia; Carrete, Martina; Sanz-Aguilar, Ana; Rodríguez-Martínez, Sol; Cabezas, Sonia; Marchant, Tracy A.; Bortolotti, Gary R.; Tella, José L.
2015-01-01
Urban endocrine ecology aims to understand how organisms cope with new sources of stress and maintain allostatic load to thrive in an increasingly urbanized world. Recent research efforts have yielded controversial results based on short-term measures of stress, without exploring its fitness effects. We measured feather corticosterone (CORTf, reflecting the duration and amplitude of glucocorticoid secretion over several weeks) and subsequent annual survival in urban and rural burrowing owls. This species shows high individual consistency in fear of humans (i.e., flight initiation distance, FID), allowing us to hypothesize that individuals distribute among habitats according to their tolerance to human disturbance. FIDs were shorter in urban than in rural birds, but CORTf levels did not differ, nor were correlated to FIDs. Survival was twice as high in urban as in rural birds and links with CORTf varied between habitats: while a quadratic relationship supports stabilizing selection in urban birds, high predation rates may have masked CORTf-survival relationship in rural ones. These results evidence that urban life does not constitute an additional source of stress for urban individuals, as shown by their near identical CORTf values compared with rural conspecifics supporting the non-random distribution of individuals among habitats according to their behavioural phenotypes. PMID:26348294
Isonymy structure of four Venezuelan states.
Rodríguez-Larralde, A; Barrai, I; Alfonzo, J C
1993-01-01
The isonymy structure of four Venezuelan states (Falcón, Mérida, Nueva Esparta and Yaracuy) was studied using the surnames of the Venezuelan register of electors updated in 1984. The surname distributions of 155 counties were obtained and, for each county, estimates of consanguinity due to random isonymy and Fisher's alpha were calculated. It was shown that for large sample sizes the inverse of Fisher's alpha is identical to the unbiased estimate of within-population random isonymy. A three-dimensional isometric surface plot was obtained for each state, based on the counties' random isonymy estimates. The highest estimates of random consanguinity were found in the states of Nueva Esparta and Mérida, while the lowest were found in Yaracuy. Other microdifferentiation indicators from the same data gave similar results, and an interpretation was attempted, based on the particular economic and geographic characteristics of each state. Four different genetic distances between all possible pairs of counties were calculated within states; geographic distance shows the highest correlations with random isonymy and Euclidean distance, with the exception of the state of Nueva Esparta, where there is no correlation between geographic distance and random isonymy. It was possible to group counties in clusters, from dendrograms based on Euclidean distance. Isonymy clustering was also consistent with socioeconomic and geographic characteristics of the counties.
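For readers unfamiliar with the quantities above, the sketch below computes the unbiased estimate of within-population random isonymy from surname counts and Fisher's alpha as its approximate inverse for large samples, as stated in the abstract; the surname counts are made up for illustration.

```python
# A minimal sketch: unbiased within-population random isonymy from surname
# counts, and Fisher's alpha as (approximately) its inverse for large samples.
# The surname counts below are hypothetical.
import numpy as np

def random_isonymy(counts):
    counts = np.asarray(counts, dtype=float)
    n_total = counts.sum()
    # unbiased estimator: probability that two individuals drawn without
    # replacement share the same surname
    return np.sum(counts * (counts - 1.0)) / (n_total * (n_total - 1.0))

counts = np.array([500, 320, 180, 90, 60, 40, 25, 15, 10, 5])  # hypothetical county
I = random_isonymy(counts)
alpha = 1.0 / I                     # Fisher's alpha in the large-sample approximation
print(I, alpha)
```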
Conditional random matrix ensembles and the stability of dynamical systems
NASA Astrophysics Data System (ADS)
Kirk, Paul; Rolando, Delphine M. Y.; MacLean, Adam L.; Stumpf, Michael P. H.
2015-08-01
Random matrix theory (RMT) has found applications throughout physics and applied mathematics, in subject areas as diverse as communications networks, population dynamics, neuroscience, and models of the banking system. Many of these analyses exploit elegant analytical results, particularly the circular law and its extensions. In order to apply these results, assumptions must be made about the distribution of matrix elements. Here we demonstrate that the choice of matrix distribution is crucial. In particular, adopting an unrealistic matrix distribution for the sake of analytical tractability is liable to lead to misleading conclusions. We focus on the application of RMT to the long-standing, and at times fractious, ‘diversity-stability debate’, which is concerned with establishing whether large complex systems are likely to be stable. Early work (and subsequent elaborations) brought RMT to bear on the debate by modelling the entries of a system’s Jacobian matrix as independent and identically distributed (i.i.d.) random variables. These analyses were successful in yielding general results that were not tied to any specific system, but relied upon a restrictive i.i.d. assumption. Other studies took an opposing approach, seeking to elucidate general principles of stability through the analysis of specific systems. Here we develop a statistical framework that reconciles these two contrasting approaches. We use a range of illustrative dynamical systems examples to demonstrate that: (i) stability probability cannot be summarily deduced from any single property of the system (e.g. its diversity); and (ii) our assessment of stability depends on adequately capturing the details of the systems analysed. Failing to condition on the structure of dynamical systems will skew our analysis and can, even for very small systems, result in an unnecessarily pessimistic diagnosis of their stability.
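To make the kind of i.i.d. community-matrix analysis discussed above concrete, the following sketch draws random Jacobians in the May tradition (sparse i.i.d. Gaussian off-diagonal entries, fixed self-regulation on the diagonal) and estimates the probability that all eigenvalues have negative real part; the ensemble and parameter values are illustrative assumptions, not the conditional ensembles developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def stability_probability(S, C, sigma, trials=200):
    """Fraction of random Jacobians that are linearly stable.
    Off-diagonal entries are nonzero with probability C and drawn
    i.i.d. N(0, sigma^2); diagonal entries are set to -1 (self-regulation)."""
    stable = 0
    for _ in range(trials):
        J = rng.normal(0.0, sigma, size=(S, S))
        J *= rng.random((S, S)) < C          # sparse interactions
        np.fill_diagonal(J, -1.0)
        if np.max(np.linalg.eigvals(J).real) < 0.0:
            stable += 1
    return stable / trials

# May's criterion suggests instability once sigma * sqrt(S * C) exceeds 1.
for sigma in (0.20, 0.30, 0.35, 0.45):
    print(sigma, stability_probability(S=50, C=0.2, sigma=sigma))
```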
Ponzi, Adam; Wickens, Jeff
2010-04-28
The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.
Bober, David B.; Kumar, Mukal; Rupert, Timothy J.; ...
2015-12-28
Nanocrystalline materials are defined by their fine grain size, but details of the grain boundary character distribution should also be important. Grain boundary character distributions are reported for ball-milled, sputter-deposited, and electrodeposited Ni and Ni-based alloys, all with average grain sizes of ~20 nm, to study the influence of processing route. The two deposited materials had nearly identical grain boundary character distributions, both marked by a Σ3 length percentage of 23 to 25 pct. In contrast, the ball-milled material had only 3 pct Σ3-type grain boundaries and a large fraction of low-angle boundaries (16 pct), with the remainder being predominantly random high angle (73 pct). Furthermore, these grain boundary character measurements are connected to the physical events that control their respective processing routes. Consequences for material properties are also discussed with a focus on nanocrystalline corrosion. As a whole, the results presented here show that grain boundary character distribution, which has often been overlooked in nanocrystalline metals, can vary significantly and influence material properties in profound ways.
Ouwens, Mario; Hauch, Ole; Franzén, Stefan
2018-05-01
The rank-preserving structural failure time model (RPSFTM) is used for health technology assessment submissions to adjust for switching patients from reference to investigational treatment in cancer trials. It uses counterfactual survival (survival when only reference treatment would have been used) and assumes that, at randomization, the counterfactual survival distribution for the investigational and reference arms is identical. Previous validation reports have assumed that patients in the investigational treatment arm stay on therapy throughout the study period. To evaluate the validity of the RPSFTM at various levels of crossover in situations in which patients are taken off the investigational drug in the investigational arm. The RPSFTM was applied to simulated datasets differing in percentage of patients switching, time of switching, underlying acceleration factor, and number of patients, using exponential distributions for the time on investigational and reference treatment. There were multiple scenarios in which two solutions were found: one corresponding to identical counterfactual distributions, and the other to two different crossing counterfactual distributions. The same was found for the hazard ratio (HR). Unique solutions were observed only when switching patients were on investigational treatment for <40% of the time that patients in the investigational arm were on treatment. Distributions other than exponential could have been used for time on treatment. An HR equal to 1 is a necessary but not always sufficient condition to indicate acceleration factors associated with equal counterfactual survival. Further assessment to distinguish crossing counterfactual curves from equal counterfactual curves is especially needed when the time that switchers stay on investigational treatment is relatively long compared to the time direct starters stay on investigational treatment.
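The following sketch illustrates the counterfactual construction behind the RPSFTM on simulated exponential data without censoring: observed times are split into time on and off the investigational drug, counterfactual times U = T_off + exp(psi)*T_on are formed, and a rank-based balance statistic is scanned over a grid of psi (the scan is where one or several roots would show up). The simulation set-up, the use of a Mann-Whitney statistic instead of a log-rank test, and all parameter values are simplifying assumptions, not the validation procedure of the paper.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
n = 500
true_accel = 0.7      # exp(psi) for the true psi: a year on the drug uses up 0.7 counterfactual years
switch_frac = 0.6     # fraction of reference-arm patients who switch

# Reference-arm patients: counterfactual lifetime T_ctrl; switchers spend the
# tail of their lifetime on the investigational drug, and that tail appears
# stretched by 1/true_accel on the observed time scale.
T_ctrl = rng.exponential(1.0, n)
switch = rng.random(n) < switch_frac
t_switch = rng.uniform(0.2, 0.8, n) * T_ctrl
T_on = np.where(switch, (T_ctrl - t_switch) / true_accel, 0.0)
T_off = np.where(switch, t_switch, T_ctrl)

# Investigational-arm patients stay on treatment throughout.
obs_trt = rng.exponential(1.0, n) / true_accel

def counterfactual(psi):
    """RPSFTM back-transformation U = T_off + exp(psi) * T_on."""
    u_ref = T_off + np.exp(psi) * T_on
    u_trt = np.exp(psi) * obs_trt
    return u_ref, u_trt

# The RPSFTM estimate is the psi at which the counterfactual distributions of
# the two arms are balanced (statistic near 0); here the true psi is log(0.7).
for psi in np.linspace(-1.0, 0.5, 16):
    u_ref, u_trt = counterfactual(psi)
    stat = mannwhitneyu(u_ref, u_trt).statistic / (len(u_ref) * len(u_trt)) - 0.5
    print(f"psi = {psi:+.2f}   balance statistic = {stat:+.4f}")
```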
Fingerprint recognition with identical twin fingerprints.
Tao, Xunqiang; Chen, Xinjian; Yang, Xin; Tian, Jie
2012-01-01
Fingerprint recognition with identical twins is a challenging task due to the extremely close genetic relationship between identical twins. Several pioneers have analyzed the similarity between twins' fingerprints. In this work we continue to investigate the topic of the similarity of identical twin fingerprints. Our study was based on a large identical twin fingerprint database that contains 83 twin pairs, 4 fingers per individual and six impressions per finger: 3984 (83*2*4*6) images. Compared to the previous work, our contributions are summarized as follows: (1) Two state-of-the-art fingerprint identification methods, P071 and VeriFinger 6.1, were used, rather than the single fingerprint identification method of previous studies. (2) Six impressions per finger were captured, rather than just one impression, which makes the genuine distribution of matching scores more realistic. (3) A larger sample (83 pairs) was collected. (4) A novel statistical analysis, which aims at showing the probability distribution of fingerprint types for the corresponding fingers of identical twins that have the same fingerprint type, has been conducted. (5) A novel analysis, which aims at showing which finger of identical twins has the highest probability of having the same fingerprint type, has been conducted. Our results showed that: (a) A state-of-the-art automatic fingerprint verification system can distinguish identical twins without drastic degradation in performance. (b) The chance that fingerprints from identical twins have the same type is 0.7440, compared to 0.3215 for non-identical twins. (c) For the corresponding fingers of identical twins that have the same fingerprint type, the probability distribution of the five major fingerprint types is similar to the probability distribution of fingerprint types over all fingers. (d) For each of the four fingers of identical twins, the probability of having the same fingerprint type is similar.
NASA Astrophysics Data System (ADS)
Shemer, L.; Sergeeva, A.
2009-12-01
The statistics of random water wave field determines the probability of appearance of extremely high (freak) waves. This probability is strongly related to the spectral wave field characteristics. Laboratory investigation of the spatial variation of the random wave-field statistics for various initial conditions is thus of substantial practical importance. Unidirectional nonlinear random wave groups are investigated experimentally in the 300 m long Large Wave Channel (GWK) in Hannover, Germany, which is the biggest facility of its kind in Europe. Numerous realizations of a wave field with the prescribed frequency power spectrum, yet randomly-distributed initial phases of each harmonic, were generated by a computer-controlled piston-type wavemaker. Several initial spectral shapes with identical dominant wave length but different width were considered. For each spectral shape, the total duration of sampling in all realizations was long enough to yield sufficient sample size for reliable statistics. Through all experiments, an effort had been made to retain the characteristic wave height value and thus the degree of nonlinearity of the wave field. Spatial evolution of numerous statistical wave field parameters (skewness, kurtosis and probability distributions) is studied using about 25 wave gauges distributed along the tank. It is found that, depending on the initial spectral shape, the frequency spectrum of the wave field may undergo significant modification in the course of its evolution along the tank; the values of all statistical wave parameters are strongly related to the local spectral width. A sample of the measured wave height probability functions (scaled by the variance of surface elevation) is plotted in Fig. 1 for the initially narrow rectangular spectrum. The results in Fig. 1 resemble findings obtained in [1] for the initial Gaussian spectral shape. The probability of large waves notably surpasses that predicted by the Rayleigh distribution and is the highest at the distance of about 100 m. Acknowledgement This study is carried out in the framework of the EC supported project "Transnational access to large-scale tests in the Large Wave Channel (GWK) of Forschungszentrum Küste (Contract HYDRALAB III - No. 022441). [1] L. Shemer and A. Sergeeva, J. Geophys. Res. Oceans 114, C01015 (2009). Figure 1. Variation along the tank of the measured wave height distribution for rectangular initial spectral shape, the carrier wave period T0=1.5 s.
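The sketch below reproduces only the signal-generation step described above, namely a realization of surface elevation with a prescribed rectangular amplitude spectrum and randomly distributed initial phases, and then computes its skewness and kurtosis; the spectral width, record length, and characteristic wave height are assumed values, and no nonlinear evolution along the tank is modelled.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(2)

def random_phase_realization(f0, width, n_harmonics, duration, fs, hs):
    """Surface elevation with a rectangular amplitude spectrum centred on
    the dominant frequency f0 and uniform random phases for each harmonic."""
    t = np.arange(0.0, duration, 1.0 / fs)
    freqs = np.linspace(f0 - width / 2, f0 + width / 2, n_harmonics)
    amp = np.full(n_harmonics, 1.0)
    amp *= hs / 4.0 / np.sqrt(0.5 * np.sum(amp**2))   # scale to the target wave height
    phases = rng.uniform(0.0, 2.0 * np.pi, n_harmonics)
    eta = np.sum(amp[:, None] * np.cos(2 * np.pi * freqs[:, None] * t + phases[:, None]),
                 axis=0)
    return t, eta

# Dominant period 1.5 s as in the reported experiment; other values assumed.
t, eta = random_phase_realization(f0=1 / 1.5, width=0.1, n_harmonics=64,
                                  duration=1800.0, fs=20.0, hs=0.3)
print("skewness:", skew(eta))             # ~0 for a linear random sea
print("excess kurtosis:", kurtosis(eta))  # ~0 for a linear random sea
```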
Random deposition of particles of different sizes.
Forgerini, F L; Figueiredo, W
2009-04-01
We study the surface growth generated by the random deposition of particles of different sizes. A model is proposed where the particles are aggregated on an initially flat surface, giving rise to a rough interface and a porous bulk. Using Monte Carlo simulations, a surface is grown by adding particles of different sizes, as well as identical particles, on the substrate in (1+1) dimensions. In the case of deposition of particles of different sizes, the sizes are selected from a Poisson distribution and may vary by one order of magnitude. For the deposition of identical particles, only particles which are larger than one lattice parameter of the substrate are considered. We calculate the usual scaling exponents, namely the roughness, growth, and dynamic exponents alpha, beta, and z, respectively, as well as the porosity in the bulk, determining the porosity as a function of the particle size. The results of our simulations show that the roughness evolves in time following three different behaviors. At initial times the roughness behaves as in the random deposition model. At intermediate times, the surface roughness grows slowly and finally, at long times, it enters the saturation regime. The bulk formed by depositing large particles reveals a porosity that increases very fast at initial times and also reaches a saturation value. Except for the case where particles have the size of one lattice spacing, we always find that the surface roughness and porosity reach limiting values at long times. Surprisingly, we find that the scaling exponents are the same as those predicted by the Villain-Lai-Das Sarma equation.
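A minimal Monte Carlo sketch in the spirit of the model above: square particles with Poisson-distributed sizes are dropped at random columns of a (1+1)-dimensional substrate and the interface width is recorded. The deposition rule (a particle rests on the highest column it covers, leaving pores underneath) and all parameters are simplifying assumptions rather than the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(3)

def deposit(L=512, n_particles=200_000, mean_size=2.0):
    """Random deposition of square particles of Poisson-distributed size
    onto a 1D substrate of L sites with periodic boundaries. A particle of
    size s lands at a random column and rests on top of the highest of the
    s columns it covers, leaving pores underneath (an assumed rule)."""
    h = np.zeros(L, dtype=np.int64)
    widths = []
    for k in range(n_particles):
        s = max(1, rng.poisson(mean_size))
        x = rng.integers(0, L)
        cols = (x + np.arange(s)) % L
        top = h[cols].max()
        h[cols] = top + s                  # particle sits on the highest covered column
        if k % 5000 == 0:
            widths.append(h.std())         # interface roughness W
    return np.array(widths)

W = deposit()
print("roughness at early / late times:", W[1], W[-1])
```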
Azzarà, A; Chimenti, M
2004-01-01
One of the main techniques used to explore neutrophil motility employs micropore filters in chemotactic chambers. Many new models have been proposed in order to perform multiple microassays in a rapid, inexpensive and reproducible way. In this work, LEGO bricks have been used as chemotactic chambers in the evaluation of neutrophil random motility and chemotaxis and compared with conventional Boyden chambers in a "time-response" experiment. Neutrophil motility throughout the filters was evaluated by means of an image-processing workstation, in which a dedicated algorithm recognizes and counts the cells in several fields and focal planes throughout the whole filter; correlates counts and depth values; performs a statistical analysis of data; calculates the true value of neutrophil migration; determines the distribution of cells; and displays the migration pattern. By this method, we found that the distances travelled by the cells in conventional chambers and in LEGO bricks were perfectly identical, both in random migration and under chemotactic conditions. Moreover, no interference with the physiological behaviour of neutrophils was detectable. In fact, the kinetics of migration was identical both in random migration (characterized by a Gaussian pattern) and in chemotaxis (characterized by a typical stimulation peak, previously identified by our workstation). In conclusion, LEGO bricks are extremely precise devices. They are simple to use, require only small amounts of chemoattractant solution and cell suspension, and each brick supplies a triplicate test by itself. LEGO bricks are inexpensive, fast and suitable for current diagnostic activity or for research investigations in every laboratory.
Identity-Based Verifiably Encrypted Signatures without Random Oracles
NASA Astrophysics Data System (ADS)
Zhang, Lei; Wu, Qianhong; Qin, Bo
Fair exchange protocol plays an important role in electronic commerce in the case of exchanging digital contracts. Verifiably encrypted signatures provide an optimistic solution to these scenarios with an off-line trusted third party. In this paper, we propose an identity-based verifiably encrypted signature scheme. The scheme is non-interactive to generate verifiably encrypted signatures and the resulting encrypted signature consists of only four group elements. Based on the computational Diffie-Hellman assumption, our scheme is proven secure without using random oracles. To the best of our knowledge, this is the first identity-based verifiably encrypted signature scheme provably secure in the standard model.
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to develop the statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, making adjustment on the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS), and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs.
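A sketch of the Polya-urn finite population Bayesian bootstrap that the method above builds on, in its simple equal-probability form; the weighted extension that inverts complex design features is not reproduced here, and the sample values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def finite_population_bayesian_bootstrap(sample, N):
    """Generate one synthetic population of size N from a sample via the
    Polya-urn finite population Bayesian bootstrap: starting from the n
    observed units, draw the remaining N - n units one at a time, each time
    uniformly from the urn of units drawn so far (observed + synthetic)."""
    urn = list(sample)
    for _ in range(N - len(sample)):
        urn.append(urn[rng.integers(0, len(urn))])
    return np.array(urn)

# Illustration with a hypothetical small simple random sample.
sample = rng.normal(50.0, 10.0, size=30)
synthetic_populations = [finite_population_bayesian_bootstrap(sample, 1000)
                         for _ in range(20)]
means = [pop.mean() for pop in synthetic_populations]
print("sample mean:", sample.mean(),
      " posterior draws of the population mean (2.5/50/97.5%):",
      np.percentile(means, [2.5, 50, 97.5]))
```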
The coalescent of a sample from a binary branching process.
Lambert, Amaury
2018-04-25
At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.
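The sketch below illustrates the sampling operations discussed above on a simulated coalescent point process: i.i.d. node depths are drawn (here from a truncated exponential, an illustrative choice), tips are Bernoulli-sampled, and the induced node depths of the sampled tree are obtained as maxima over the skipped depths, which is why the sampled tree is again a CPP.

```python
import numpy as np

rng = np.random.default_rng(5)

def cpp_node_depths(n_tips, T):
    """Node depths of a coalescent point process with n_tips tips:
    i.i.d. depths (drawn here from an exponential truncated at T)
    between consecutive tips."""
    depths = []
    while len(depths) < n_tips - 1:
        h = rng.exponential(0.5)
        if h < T:
            depths.append(h)
    return np.array(depths)

def bernoulli_sample(depths, y):
    """Tree induced by keeping each tip independently with probability y.
    The depth separating two consecutive retained tips is the maximum of
    the original depths between them, so the sampled tree is again a CPP."""
    n_tips = len(depths) + 1
    keep = rng.random(n_tips) < y
    kept_idx = np.flatnonzero(keep)
    new_depths = [depths[a:b].max() for a, b in zip(kept_idx[:-1], kept_idx[1:])]
    return np.array(new_depths)

depths = cpp_node_depths(n_tips=200, T=3.0)
sub = bernoulli_sample(depths, y=0.2)
print("tips kept:", len(sub) + 1,
      " mean node depth:", sub.mean() if len(sub) else None)
```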
NASA Astrophysics Data System (ADS)
Maccone, C.
This paper provides the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for "Statistical Equation for Habitables". The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH, the (average) distance between any two nearby habitable planets in the Galaxy is shown to be inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times larger than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations.
A new random variable Tcol, representing the time needed to colonize a new planet, is introduced, which follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.
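A small Monte Carlo sketch of the two basic ingredients above: the product of ten independent positive factors is close to lognormal by the CLT, and the average distance between habitable planets scales as NHab to the power -1/3 for a fixed Galactic volume. The factor means and the volume used below are placeholders, not Dole's or Maccone's numbers.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(6)

# Ten positive random factors, each uniform within +/-10% of an assumed
# mean value (placeholder means, not Dole's estimates).
means = np.array([1e11, 0.6, 0.5, 0.5, 0.4, 0.3, 0.3, 0.2, 0.1, 0.05])
samples = rng.uniform(0.9 * means, 1.1 * means, size=(100_000, len(means)))
N_hab = samples.prod(axis=1)

# log(N_hab) is a sum of independent terms, so by the CLT it is nearly
# Gaussian, i.e. N_hab is approximately lognormal.
shape, loc, scale = lognorm.fit(N_hab, floc=0)
print("fitted lognormal sigma:", shape, " median:", scale)

# Average distance between habitable planets scales as N_hab**(-1/3)
# for a fixed Galactic volume V_gal (placeholder value, cubic light years).
V_gal = 8.5e12
D = (V_gal / N_hab) ** (1.0 / 3.0)
print("mean distance (ly):", D.mean(), "+/-", D.std())
```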
NASA Astrophysics Data System (ADS)
Li, Yuanyuan; Gao, Guanjun; Zhang, Jie; Zhang, Kai; Chen, Sai; Yu, Xiaosong; Gu, Wanyi
2015-06-01
A simplex-method-based optimizing (SMO) strategy is proposed to improve the transmission performance of dispersion-uncompensated (DU) coherent optical systems with non-identical spans. Through an analytical expression of the quality of transmission (QoT), this strategy improves the Q factors effectively while minimizing the number of erbium-doped optical fiber amplifiers (EDFAs) that need to be optimized. Numerical simulations are performed for 100 Gb/s polarization-division multiplexed quadrature phase shift keying (PDM-QPSK) channels over 10 spans of standard single-mode fiber (SSMF) with randomly distributed span lengths. Compared to the EDFA configuration with complete span-loss compensation, the Q factor of the SMO strategy is improved by approximately 1 dB at the optimal transmitter launch power. Moreover, instead of adjusting the gains of all the EDFAs to their optimal values, the number of EDFAs that need to be adjusted for SMO is reduced from 8 to 2, giving much lower tuning costs with almost negligible performance degradation.
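As an illustration of the general idea, the sketch below uses the simplex (Nelder-Mead) method to adjust per-span EDFA gains against a placeholder quality-of-transmission penalty; the penalty function, span lengths, and starting point (full span-loss compensation) are assumptions for demonstration and do not reproduce the paper's analytical QoT expression.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Ten spans with randomly distributed lengths (km); 0.2 dB/km fiber loss.
span_km = rng.uniform(60.0, 100.0, 10)
span_loss_db = 0.2 * span_km

def q_penalty(gains_db, launch_dbm=0.0):
    """Placeholder penalty to be minimized (standing in for an inverse QoT):
    each span adds a toy ASE term growing with EDFA gain and a toy nonlinear
    term growing with the power entering the span; a final term keeps the
    total gain close to the total loss."""
    power = launch_dbm
    ase, nli = 0.0, 0.0
    for loss, gain in zip(span_loss_db, gains_db):
        nli += 10.0 ** (0.3 * power / 10.0)
        power = power - loss + gain
        ase += 10.0 ** (gain / 10.0)
    return ase + nli + 10.0 * (power - launch_dbm) ** 2

# Start from full span-loss compensation and let the simplex method adjust gains.
x0 = span_loss_db.copy()
res = minimize(q_penalty, x0, method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-6, "maxiter": 20000})
print("initial penalty:", q_penalty(x0))
print("optimized penalty:", res.fun)
```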
Reisinger, Florian; Krishna, Ritesh; Ghali, Fawaz; Ríos, Daniel; Hermjakob, Henning; Vizcaíno, Juan Antonio; Jones, Andrew R
2012-03-01
We present a Java application programming interface (API), jmzIdentML, for the Human Proteome Organisation (HUPO) Proteomics Standards Initiative (PSI) mzIdentML standard for peptide and protein identification data. The API combines the power of Java Architecture for XML Binding (JAXB) and an XPath-based random-access indexer to allow a fast and efficient mapping of extensible markup language (XML) elements to Java objects. The internal references in the mzIdentML files are resolved in an on-demand manner, where the whole file is accessed as a random-access swap file, and only the relevant piece of XML is selected for mapping to its corresponding Java object. The API is highly efficient in its memory usage and can handle files of arbitrary sizes. The API follows the official release of the mzIdentML (version 1.1) specifications and is available in the public domain under a permissive licence at http://www.code.google.com/p/jmzidentml/. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Liang, Zhongwei; Zhou, Liang; Liu, Xiaochu; Wang, Xiaogang
2014-01-01
Tablet image tracking exerts a notable influence on the efficiency and reliability of high-speed drug mass production, yet in recent years it has also emerged as a difficult problem and a target of production monitoring, owing to the highly similar shapes and random position distribution of the objects to be searched for. To track randomly distributed tablets accurately, a surface-fitting approach and transitional vector determination are used to establish a calibrated surface of light-intensity reflective energy that describes the shape topology and topography details of the target tablet. On this basis, the mathematical properties of these surfaces are derived, and an artificial neural network (ANN) is then employed to classify the moving target tablets by recognizing their different surface properties, so that the instantaneous coordinate positions of the drug tablets in one image frame can be determined. By repeating the same pattern recognition on the next image frame, the real-time movements of the target tablet templates are tracked in sequence. This paper provides reliable references and new research ideas for real-time object tracking in drug production practice. PMID:25143781
How selection structures species abundance distributions
Magurran, Anne E.; Henderson, Peter A.
2012-01-01
How do species divide resources to produce the characteristic species abundance distributions seen in nature? One way to resolve this problem is to examine how the biomass (or capacity) of the spatial guilds that combine to produce an abundance distribution is allocated among species. Here we argue that selection on body size varies across guilds occupying spatially distinct habitats. Using an exceptionally well-characterized estuarine fish community, we show that biomass is concentrated in large bodied species in guilds where habitat structure provides protection from predators, but not in those guilds associated with open habitats and where safety in numbers is a mechanism for reducing predation risk. We further demonstrate that while there is temporal turnover in the abundances and identities of species that comprise these guilds, guild rank order is conserved across our 30-year time series. These results demonstrate that ecological communities are not randomly assembled but can be decomposed into guilds where capacity is predictably allocated among species. PMID:22787020
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
2004-01-01
Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref.1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
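The sketch below shows one common way of pinning down a beta distribution from a minimum, most likely, and maximum value, namely the PERT construction; it is a standard recipe given for illustration and is not necessarily the conventional or the new NASA Glenn method described above.

```python
from scipy.stats import beta

def beta_from_three_points(lo, mode, hi, lam=4.0):
    """Beta shape parameters from minimum, most likely and maximum values
    using the PERT construction, whose mean is (lo + lam*mode + hi)/(lam + 2)."""
    a = 1.0 + lam * (mode - lo) / (hi - lo)
    b = 1.0 + lam * (hi - mode) / (hi - lo)
    return a, b

lo, mode, hi = 10.0, 14.0, 22.0          # hypothetical design parameter
a, b = beta_from_three_points(lo, mode, hi)
dist = beta(a, b, loc=lo, scale=hi - lo)
print("alpha, beta:", a, b)
print("mean:", dist.mean(), " std:", dist.std())
```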
NASA Technical Reports Server (NTRS)
Jahshan, S. N.; Singleterry, R. C.
2001-01-01
The effect of random fuel redistribution on the eigenvalue of a one-speed reactor is investigated. An ensemble of such reactors that are identical to a homogeneous reference critical reactor except for the fissile isotope density distribution is constructed such that it meets a set of well-posed redistribution requirements. The average eigenvalue,
Hierarchical Solution of the Traveling Salesman Problem with Random Dyadic Tilings
NASA Astrophysics Data System (ADS)
Kalmár-Nagy, Tamás; Bak, Bendegúz Dezső
We propose a hierarchical heuristic approach for solving the Traveling Salesman Problem (TSP) in the unit square. The points are partitioned with a random dyadic tiling and clusters are formed by the points located in the same tile. Each cluster is represented by its geometrical barycenter and a “coarse” TSP solution is calculated for these barycenters. Midpoints are placed at the middle of each edge in the coarse solution. Near-optimal (or optimal) minimum tours are computed for each cluster. The tours are concatenated using the midpoints yielding a solution for the original TSP. The method is tested on random TSPs (independent, identically distributed points in the unit square) up to 10,000 points as well as on a popular benchmark problem (att532 — coordinates of 532 American cities). Our solutions are 8-13% longer than the optimal ones. We also present an optimization algorithm for the partitioning to improve our solutions. This algorithm further reduces the solution errors (by several percent using 1000 iteration steps). The numerical experiments demonstrate the viability of the approach.
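A simplified sketch of the hierarchical idea described above: points are clustered by a tiling of the unit square (a regular grid here instead of a random dyadic tiling), a coarse tour is built over the tile barycenters, each tile is toured separately, and the pieces are concatenated; the nearest-neighbour heuristic and the omission of the midpoint construction are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def nn_tour(points):
    """Nearest-neighbour tour order over a set of points (greedy heuristic)."""
    n = len(points)
    unvisited = list(range(1, n))
    order = [0]
    while unvisited:
        last = points[order[-1]]
        j = min(unvisited, key=lambda i: np.sum((points[i] - last) ** 2))
        order.append(j)
        unvisited.remove(j)
    return order

def hierarchical_tsp(points, tiles_per_side=8):
    """Cluster points with a regular square tiling, order the tile barycentres
    with a coarse tour, tour each tile separately, and concatenate the pieces."""
    idx = np.minimum((points * tiles_per_side).astype(int), tiles_per_side - 1)
    keys = idx[:, 0] * tiles_per_side + idx[:, 1]
    clusters = {k: np.flatnonzero(keys == k) for k in np.unique(keys)}
    barycenters = np.array([points[v].mean(axis=0) for v in clusters.values()])
    cluster_keys = list(clusters.keys())
    tour = []
    for c in nn_tour(barycenters):                   # coarse tour over tiles
        members = clusters[cluster_keys[c]]
        local = nn_tour(points[members])             # tour inside the tile
        tour.extend(members[local])
    return tour

def tour_length(points, tour):
    p = points[tour]
    return float(np.sum(np.linalg.norm(np.roll(p, -1, axis=0) - p, axis=1)))

pts = rng.random((1000, 2))
t = hierarchical_tsp(pts)
print("hierarchical tour length:", tour_length(pts, t))
print("plain nearest-neighbour length:", tour_length(pts, nn_tour(pts)))
```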
Dynamical Localization for Discrete Anderson Dirac Operators
NASA Astrophysics Data System (ADS)
Prado, Roberto A.; de Oliveira, César R.; Carvalho, Silas L.
2017-04-01
We establish dynamical localization for random Dirac operators on the d-dimensional lattice, with d ∈ {1, 2, 3}, in the three usual regimes: large disorder, band edge and 1D. These operators are discrete versions of the continuous Dirac operators and consist of the sum of a discrete free Dirac operator with a random potential. The potential is a diagonal matrix formed by different scalar potentials, which are sequences of independent and identically distributed random variables according to an absolutely continuous probability measure with bounded density and of compact support. We prove the exponential decay of fractional moments of the Green function for such models in each of the above regimes, i.e., (j) throughout the spectrum at larger disorder, (jj) for energies near the band edges at arbitrary disorder and (jjj) in dimension one, for all energies in the spectrum and arbitrary disorder. Dynamical localization in these regimes follows from the fractional moments method. The result in the one-dimensional regime contrasts with one that was previously obtained for the 1D Dirac model with a Bernoulli potential.
ERIC Educational Resources Information Center
Gutenko, Gregory
Corporate television suffers from at least two "identity crises": departmental isolation, and the lack of a legitimate identity for the corporate video product itself. Video departments are not usually viewed and accepted by the organizational whole as natural evolutions of a historically defined and behaviorally integrated system. The…
Identity Styles and Academic Achievement: Mediating Role of Academic Self-Efficacy
ERIC Educational Resources Information Center
Hejazi, Elaheh; Shahraray, Mehrnaz; Farsinejad, Masomeh; Asgary, Ali
2009-01-01
The purpose of this study was to assess the mediating effect of self-efficacy on the relationship between identity styles and academic achievement. Four-hundred high school students (200 male, 200 female) who were selected through cluster random sampling, completed the Revised Identity Styles Inventory (ISI, 6G) and Morgan-Jink Student Efficacy…
Youth and the Ethics of Identity Play in Virtual Spaces
ERIC Educational Resources Information Center
Siyahhan, Sinem; Barab, Sasha; James, Carrie
2011-01-01
In this study, we explored a new experimental methodology for investigating children's (ages 10 to 14) stances with respect to the ethics of online identity play. We used a scenario about peer identity misrepresentation embedded in a 3D virtual game environment and randomly assigned 265 elementary students (162 female, 103 male) to three…
Restoration of dimensional reduction in the random-field Ising model at five dimensions.
Fytas, Nikolaos G; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas
2017-04-01
The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D-2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible to those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D=5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3≤D<6 to their values in the pure Ising model at D-2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.
A Test of Web and Mail Mode Effects in a Financially Sensitive Survey of Older Americans
Hsu, Joanne W.
2018-01-01
This study leverages a randomized experimental design of a mixed-mode mail- and web-based survey to examine mode effects separately from sample selectivity issues. Using data from the Cognitive Economics Study, which contains some sensitive financial questions, we analyze two sets of questions: fixed-choice questions posed nearly identically across mode, and dollar-value questions that exploit features available only on web mode. Focusing on differences in item nonresponse and response distributions, our results indicate that, in contrast to mail mode, web mode surveys display lower item nonresponse for all questions. While respondents appear to prefer providing financial information in ranges, use of reminder screens on the web version yields greater use of exact values without large sacrifices in item response. Still, response distributions for all questions are similar across mode, suggesting that data on sensitive financial questions collected from the two modes can be pooled.
NASA Technical Reports Server (NTRS)
Fouladi, B.; Waldren, C. A.; Rydberg, B.; Cooper, P. K.; Chatterjee, A. (Principal Investigator)
2000-01-01
We have optimized a pulsed-field gel electrophoresis assay that measures induction and repair of double-strand breaks (DSBs) in specific regions of the genome (Lobrich et al., Proc. Natl. Acad. Sci. USA 92, 12050-12054, 1995). The increased sensitivity resulting from these improvements makes it possible to analyze the size distribution of broken DNA molecules immediately after the introduction of DSBs and after repair incubation. This analysis shows that the distribution of broken DNA pieces after exposure to sparsely ionizing radiation is consistent with the distribution expected from randomly induced DSBs. It is apparent from the distribution of rejoined DNA pieces after repair incubation that DNA ends continue to rejoin between 3 and 24 h postirradiation and that some of these rejoining events are in fact misrejoining events, since novel restriction fragments both larger and smaller than the original fragment are generated after repair. This improved assay was also used to study the kinetics of DSB rejoining and the extent of misrejoining in identical DNA sequences in human GM38 cells and human-hamster hybrid A(L) cells containing a single human chromosome 11. Despite the numerous differences between these cells, which include species and tissue of origin, levels of TP53, expression of telomerase, and the presence or absence of a homologous chromosome for the restriction fragments examined, the kinetics of rejoining of radiation-induced DSBs and the extent of misrejoining were similar in the two cell lines when studied in the G(1) phase of the cell cycle. Furthermore, DSBs were removed from the single-copy human chromosome in the hamster A(L) cells with similar kinetics and misrejoining frequency as at a locus on this hybrid's CHO chromosomes.
NASA Astrophysics Data System (ADS)
Sato, Haruo; Hayakawa, Toshihiko
2014-10-01
Short-period seismograms of earthquakes are complex, especially beneath volcanoes, where the S wave mean free path is short and low-velocity bodies composed of melt or fluid are expected, in addition to random velocity inhomogeneities, as scattering sources. Resonant scattering inherent in a low-velocity body traps and releases waves with a delay time. Focusing on this delay-time phenomenon, we have to consider multiple resonant scattering processes seriously. Since wave phases are complex in such a scattering medium, the radiative transfer theory has often been used to synthesize the variation of the mean square (MS) amplitude of waves; however, resonant scattering has not been well incorporated into the conventional radiative transfer theory. Here, as a simple mathematical model, we study the sequence of isotropic resonant scattering of a scalar wavelet by low-velocity spheres at low frequencies, where the inside velocity is supposed to be low enough. We first derive the total scattering cross-section per time for each order of scattering as the convolution kernel representing the decaying scattering response. Then, for a random and uniform distribution of such identical resonant isotropic scatterers, we build the propagator of the MS amplitude by using causality, a geometrical spreading factor and the scattering loss. Using those propagators and convolution kernels, we formulate the radiative transfer equation for spherically impulsive radiation from a point source. The synthesized MS amplitude time trace shows a dip just after the direct arrival, a delayed swelling, and then a decaying tail at large lapse times. The delayed swelling is a prominent effect of resonant scattering. The space distribution of the synthesized MS amplitude shows a swelling near the source region, and it approaches a bell shape like a diffusion solution at large lapse times.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novikov, V.
1991-05-01
The U.S. Army's detailed equipment decontamination process is a stochastic flow shop that has N independent, non-identical jobs (vehicles) with overlapping processing times. This flow shop consists of up to six non-identical machines (stations). With the exception of one station, the processing times of the jobs are random variables. Based on an analysis of the processing times, the jobs for the 56 Army heavy division companies were scheduled according to the best shortest expected processing time - longest expected processing time (SEPT-LEPT) sequence. To assist in this scheduling, the Gap Comparison Heuristic was developed to select the best SEPT-LEPT schedule. This schedule was then used in balancing the detailed equipment decon line in order to find the best possible site configuration subject to several constraints. The detailed troop decon line, in which all jobs are independent and identically distributed, was then balanced. Lastly, an NBC decon optimization computer program was developed using the scheduling and line balancing results. This program serves as a prototype module for the ANBACIS automated NBC decision support system. Keywords: Decontamination, stochastic flow shop, scheduling, stochastic scheduling, minimization of the makespan, SEPT-LEPT sequences, flow shop line balancing, ANBACIS.
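To illustrate the scheduling question above, the sketch below simulates a small permutation flow shop with exponentially distributed processing times and compares the makespan obtained under pure SEPT, pure LEPT, and random job orders; the number of jobs and stations, the processing-time means, and the simple makespan recursion are illustrative assumptions, not the Army decontamination model or the combined SEPT-LEPT sequences selected by the Gap Comparison Heuristic.

```python
import numpy as np

rng = np.random.default_rng(9)

def makespan(proc_times, order):
    """Makespan of a permutation flow shop via the completion-time recursion
    C[j, m] = max(C[j-1, m], C[j, m-1]) + p[j, m]."""
    p = proc_times[order]
    n_jobs, n_machines = p.shape
    C = np.zeros((n_jobs, n_machines))
    for j in range(n_jobs):
        for m in range(n_machines):
            prev_job = C[j - 1, m] if j > 0 else 0.0
            prev_machine = C[j, m - 1] if m > 0 else 0.0
            C[j, m] = max(prev_job, prev_machine) + p[j, m]
    return C[-1, -1]

# Hypothetical decon line: 12 non-identical jobs over 6 stations, with
# exponentially distributed processing times whose means differ by job.
mean_times = rng.uniform(5.0, 20.0, size=(12, 1))
proc = rng.exponential(mean_times, size=(12, 6))

expected = mean_times.ravel()
sept = np.argsort(expected)            # shortest expected processing time first
lept = sept[::-1]                      # longest expected processing time first
rand = rng.permutation(12)

for name, order in [("SEPT", sept), ("LEPT", lept), ("random", rand)]:
    print(name, "makespan:", round(makespan(proc, order), 1))
```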
Jakopanec, Irena; Borgen, Katrine; Vold, Line; Lund, Helge; Forseth, Tore; Hannula, Raisa; Nygård, Karin
2008-09-24
On 7 May 2007 the medical officer in Røros (population 5600) reported 15 patients with gastroenteritis. Three days later he estimated hundreds being ill. Untreated tap water from a groundwater source was suspected as the vehicle and chlorination was started 11 May. Campylobacter was isolated from patients' stool samples. We conducted an investigation to identify the source and describe the extent of the outbreak. We undertook a retrospective cohort study among a random sample of customers of Røros and neighbouring Holtålen waterworks. Holtålen, which has a different water source, was used as a control city. We conducted telephone interviews to gather data on illness from all household members. One randomly selected household member was asked about detailed exposure history. The regional hospital laboratory tested patients' stools for enteropathogens. Campylobacter isolates were typed by AFLP for genetic similarity at the Norwegian Institute of Public Health. Local authorities conducted the environmental investigation. We identified 105 cases among 340 individuals from Røros and Holtålen (Attack Rate = 31%). Tap water consumption was the only exposure associated with illness. Among randomly selected household members from Røros, a dose-response relationship was observed in daily consumed glasses of tap water (chi2 for trend = 8.1, p = 0.004). Campylobacter with identical AFLP was isolated from 25 out of 26 submitted stool samples. No pathogens were detected in water samples. We identified several events that might have caused pressure fall and influx of contaminated water into the water distribution system. On two occasions, pressure fall was noticed and parts of the distribution system were outdated. The investigation confirmed a waterborne outbreak of campylobacteriosis in Røros. Although no single event was identified as the cause of contamination, this outbreak illustrates the vulnerability of water distribution systems. Good quality source water alone is not enough to ensure water safety. For a better risk management, more focus should be put on the distribution system security. Waterworks personnel should monitor the pressure regularly; reduce the leakage by upgrading the distribution network and use chlorination when conducting maintenance work.
ERIC Educational Resources Information Center
Haslam, S. Alexander; Reicher, Stephen
2007-01-01
The BBC Prison Study was an experimental case study in which participants were randomly assigned to groups as prisoners or guards. This paper examines the impact of interventions designed to increase prisoners' sense of shared social identity on processes of leadership. It presents psychometric, behavioral, and observational data which support the…
Ground States of Random Spanning Trees on a D-Wave 2X
NASA Astrophysics Data System (ADS)
Hall, J. S.; Hobl, L.; Novotny, M. A.; Michielsen, Kristel
The performances of two D-Wave 2 machines (476 and 496 qubits) and of a 1097-qubit D-Wave 2X were investigated. Each chip has a Chimera interaction graph 𝒢. Problem input consists of values for the fields hj and for the two-qubit interactions Ji,j of an Ising spin-glass problem formulated on 𝒢. Output is returned in terms of a spin configuration {sj}, with sj = ±1. We generated random spanning trees (RSTs) uniformly distributed over all spanning trees of 𝒢. On the 476-qubit D-Wave 2, RSTs were generated on the full chip with Ji,j = -1 and hj = 0 and solved one thousand times. The distribution of solution energies and the average magnetization of each qubit were determined. On both the 476- and 1097-qubit machines, four identical spanning trees were generated on each quadrant of the chip. The statistical independence of these regions was investigated. In another study, on the D-Wave 2X, one hundred RSTs with random Ji,j ∈ {-1, 1} and hj = 0 were generated on the full chip. Each RST problem was solved one hundred times and the number of times the ground state energy was found was recorded. This procedure was repeated for square subgraphs, with dimensions ranging from 7×7 to 11×11. Supported in part by NSF Grants DGE-0947419 and DMR-1206233. D-Wave time provided by D-Wave Systems and by the USRA Quantum Artificial Intelligence Laboratory Research Opportunity.
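Random spanning trees distributed uniformly over all spanning trees of a graph, as used above, can be sampled with Wilson's loop-erased random walk algorithm; the sketch below does this on a small grid graph standing in for the Chimera graph.

```python
import numpy as np

rng = np.random.default_rng(10)

def grid_neighbors(n):
    """Adjacency lists of an n x n grid graph (a stand-in for the Chimera graph)."""
    nbrs = {}
    for i in range(n):
        for j in range(n):
            v = i * n + j
            nbrs[v] = [(i + di) * n + (j + dj)
                       for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if 0 <= i + di < n and 0 <= j + dj < n]
    return nbrs

def wilson_uniform_spanning_tree(nbrs):
    """Sample a spanning tree uniformly at random with Wilson's algorithm:
    run loop-erased random walks from unvisited vertices until they hit the
    current tree, then add the erased path to the tree."""
    vertices = list(nbrs)
    in_tree = {vertices[0]}
    parent = {}
    for v in vertices[1:]:
        if v in in_tree:
            continue
        path = [v]
        while path[-1] not in in_tree:
            nxt = nbrs[path[-1]][rng.integers(len(nbrs[path[-1]]))]
            if nxt in path:                       # erase the loop
                path = path[: path.index(nxt) + 1]
            else:
                path.append(nxt)
        for a, b in zip(path[:-1], path[1:]):
            parent[a] = b
            in_tree.add(a)
    return parent                                  # edges as child -> parent

tree = wilson_uniform_spanning_tree(grid_neighbors(8))
print("edges in spanning tree:", len(tree))        # n*n - 1 = 63
```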
Robustness of power systems under a democratic-fiber-bundle-like model
NASA Astrophysics Data System (ADS)
Yaǧan, Osman
2015-06-01
We consider a power system with N transmission lines whose initial loads (i.e., power flows) L1, ..., LN are independent and identically distributed with PL(x) = P[L ≤ x]. The capacity Ci defines the maximum flow allowed on line i and is assumed to be given by Ci = (1 + α)Li, with α > 0. We study the robustness of this power system against random attacks (or failures) that target a p fraction of the lines, under a democratic fiber-bundle-like model. Namely, when a line fails, the load it was carrying is redistributed equally among the remaining lines. Our contributions are as follows. (i) We show analytically that the final breakdown of the system always takes place through a first-order transition at the critical attack size p = 1 - E[L]/max_x(P[L > x](αx + E[L | L > x])), where E[·] is the expectation operator; (ii) we derive conditions on the distribution PL(x) for which the first-order breakdown of the system occurs abruptly without any preceding diverging rate of failure; (iii) we provide a detailed analysis of the robustness of the system under three specific load distributions—uniform, Pareto, and Weibull—showing that with the minimum load Lmin and mean load E[L] fixed, Pareto distribution is the worst (in terms of robustness) among the three, whereas Weibull distribution is the best with shape parameter selected relatively large; (iv) we provide numerical results that confirm our mean-field analysis; and (v) we show that p is maximized when the load distribution is a Dirac delta function centered at E[L], i.e., when all lines carry the same load. This last finding is particularly surprising given that heterogeneity is known to lead to high robustness against random failures in many other systems.
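The critical attack size formula quoted above can be evaluated numerically from samples of the load distribution; the sketch below does so for a uniform load distribution and for the degenerate case in which all lines carry the identical mean load, where the closed form α/(1+α) is available as a check. The load values, sample size, and search grid are illustrative choices.

```python
import numpy as np

def critical_attack_size(samples, alpha, n_grid=1000):
    """Evaluate p_crit = 1 - E[L] / max over x of P[L > x] * (alpha*x + E[L | L > x])
    from Monte Carlo samples of the initial load L."""
    L = np.asarray(samples, dtype=float)
    grid = np.linspace(0.0, L.max(), n_grid)
    best = 0.0
    for x in grid:
        tail = L[L > x]
        if tail.size == 0:
            continue
        val = (tail.size / L.size) * (alpha * x + tail.mean())
        best = max(best, val)
    return 1.0 - L.mean() / best

rng = np.random.default_rng(11)
alpha = 0.5

# Uniform loads on [10, 30] versus all lines carrying the identical mean load 20.
uniform_loads = rng.uniform(10.0, 30.0, 50_000)
equal_loads = np.full(50_000, 20.0)
print("p_crit (uniform loads):", round(critical_attack_size(uniform_loads, alpha), 3))
print("p_crit (equal loads)  :", round(critical_attack_size(equal_loads, alpha), 3),
      "  analytic alpha/(1+alpha) =", round(alpha / (1 + alpha), 3))
```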
Clustering, randomness, and regularity in cloud fields: 2. Cumulus cloud fields
NASA Astrophysics Data System (ADS)
Zhu, T.; Lee, J.; Weger, R. C.; Welch, R. M.
1992-12-01
During the last decade a major controversy has been brewing concerning the proper characterization of cumulus convection. The prevailing view has been that cumulus clouds form in clusters, in which cloud spacing is closer than that found for the overall cloud field and which maintains its identity over many cloud lifetimes. This "mutual protection hypothesis" of Randall and Huffman (1980) has been challenged by the "inhibition hypothesis" of Ramirez et al. (1990) which strongly suggests that the spatial distribution of cumuli must tend toward a regular distribution. A dilemma has resulted because observations have been reported to support both hypotheses. The present work reports a detailed analysis of cumulus cloud field spatial distributions based upon Landsat, Advanced Very High Resolution Radiometer, and Skylab data. Both nearest-neighbor and point-to-cloud cumulative distribution function statistics are investigated. The results show unequivocally that when both large and small clouds are included in the cloud field distribution, the cloud field always has a strong clustering signal. The strength of clustering is largest at cloud diameters of about 200-300 m, diminishing with increasing cloud diameter. In many cases, clusters of small clouds are found which are not closely associated with large clouds. As the small clouds are eliminated from consideration, the cloud field typically tends towards regularity. Thus it would appear that the "inhibition hypothesis" of Ramirez and Bras (1990) has been verified for the large clouds. However, these results are based upon the analysis of point processes. A more exact analysis also is made which takes into account the cloud size distributions. Since distinct clouds are by definition nonoverlapping, cloud size effects place a restriction upon the possible locations of clouds in the cloud field. The net effect of this analysis is that the large clouds appear to be randomly distributed, with only weak tendencies towards regularity. For clouds less than 1 km in diameter, the average nearest-neighbor distance is equal to 3-7 cloud diameters. For larger clouds, the ratio of cloud nearest-neighbor distance to cloud diameter increases sharply with increasing cloud diameter. This demonstrates that large clouds inhibit the growth of other large clouds in their vicinity. Nevertheless, this leads to random distributions of large clouds, not regularity.
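Nearest-neighbor statistics of the kind used in this analysis can be summarized, for a simple point pattern, by the Clark-Evans ratio of the mean nearest-neighbor distance to its expectation under complete spatial randomness (R ~ 1 for randomness, R < 1 for clustering, R > 1 for regularity). The Python sketch below is only a toy illustration of that separation; the synthetic point patterns and the plain Clark-Evans statistic are assumptions, whereas the study itself uses nearest-neighbor and point-to-cloud cumulative distribution functions on satellite data and accounts for cloud sizes.

```python
import numpy as np
from scipy.spatial import cKDTree

def clark_evans(points, area):
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)        # k=2: nearest neighbour other than self
    mean_nn = d[:, 1].mean()
    lam = len(points) / area              # intensity (points per unit area)
    return mean_nn * 2.0 * np.sqrt(lam)   # expectation under CSR is 1/(2*sqrt(lam))

rng = np.random.default_rng(1)
n, side = 500, 100.0
poisson = rng.uniform(0, side, (n, 2))                       # random "cloud centres"
clustered = (rng.uniform(0, side, (50, 2)).repeat(10, 0)
             + rng.normal(0, 1.0, (n, 2)))                   # 50 tight clusters

for name, pts in [("Poisson", poisson), ("clustered", clustered)]:
    print(name, "Clark-Evans R =", round(clark_evans(pts, side**2), 3))
```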
Peer effects on risk behaviour: the importance of group identity.
Gioia, Francesca
2017-01-01
This paper investigates whether and to what extent group identity plays a role in peer effects on risk behaviour. We run a laboratory experiment in which different levels of group identity are induced through different matching protocols (random or based on individual painting preferences) and the possibility to interact with group members via an online chat in a group task. Risk behaviour is measured by using the Bomb Risk Elicitation Task and peer influence is introduced by giving subjects feedback regarding group members' previous decisions. We find that subjects are affected by their peers when taking decisions and that group identity influences the magnitude of peer effects: painting preferences matching significantly reduces the heterogeneity in risk behaviour compared with random matching. On the other hand, introducing a group task has no significant effect on behaviour, possibly because interaction does not always contribute to enhancing group identity. Finally, relative riskiness within the group matters and individuals whose peers are riskier than they are take on average riskier decisions, even when controlling for regression to the mean.
Super-stable Poissonian structures
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2012-10-01
In this paper we characterize classes of Poisson processes whose statistical structures are super-stable. We consider a flow generated by a one-dimensional ordinary differential equation, and an ensemble of particles ‘surfing’ the flow. The particles start from random initial positions, and are propagated along the flow by stochastic ‘wave processes’ with general statistics and general cross correlations. Setting the initial positions to be Poisson processes, we characterize the classes of Poisson processes that render the particles’ positions—at all times, and invariantly with respect to the wave processes—statistically identical to their initial positions. These Poisson processes are termed ‘super-stable’ and facilitate the generalization of the notion of stationary distributions far beyond the realm of Markov dynamics.
Characteristics of corona impulses from insulated wires subjected to high ac voltages
NASA Technical Reports Server (NTRS)
Doreswamy, C. V.; Crowell, C. S.
1976-01-01
Corona discharges arise due to ionization of air or gas subject to high electric fields. The free electrons and ions contained in these discharges interact with molecules of insulating materials, resulting in chemical changes and destroying the electrical insulating properties. The paper describes some results of measurements aimed at determining corona pulse waveforms, their repetition rate, and amplitude distribution during various randomly-sampled identical time periods of a 60-Hz high-voltage wave. Described are properties of positive and negative corona impulses generated from typical conductors at various test high voltages. A possible method for calculating the energies, densities, and electromagnetic interferences by making use of these results is suggested.
Zipf law: an extreme perspective
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2016-04-01
Extreme value theory (EVT) asserts that the Fréchet law emerges universally from linearly scaled maxima of collections of independent and identically distributed random variables that are positive-valued. Observations of many real-world sizes, e.g. city-sizes, give rise to the Zipf law: if we rank the sizes decreasingly, and plot the log-sizes versus the log-ranks, then an affine line emerges. In this paper we present an EVT approach to the Zipf law. Specifically, we establish that whenever the Fréchet law emerges from the EVT setting, then the Zipf law follows. The EVT generation of the Zipf law, its universality, and its associated phase transition, are analyzed and described in detail.
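A minimal numerical illustration of the rank-size construction described above: draw heavy-tailed sizes from a distribution in the Fréchet max-domain of attraction, rank them decreasingly, and fit the log-size versus log-rank line. The Pareto exponent and sample size in this Python sketch are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.2
sizes = (1.0 / rng.uniform(size=20_000)) ** (1.0 / alpha)   # Pareto(alpha) via inverse CDF
sizes = np.sort(sizes)[::-1]                                  # decreasing rank order
ranks = np.arange(1, sizes.size + 1)

# The Zipf law: log-size vs log-rank is approximately affine with slope ~ -1/alpha.
slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), 1)
print(f"fitted log-log slope {slope:.3f}, expected ~ {-1 / alpha:.3f}")
```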
Magnetocaloric effect in epitaxial La0.56Sr0.44MnO3 alloy and digital heterostructures
NASA Astrophysics Data System (ADS)
Belyea, Dustin D.; Santos, Tiffany S.; Miller, Casey W.
2012-04-01
This work investigates the magnetocaloric effect of two epitaxial manganite heterostructures, one being a single layer La0.56Sr0.44MnO3 alloy with randomly distributed La and Sr cations, the other a digitally synthesized superlattice of LaMnO3 and SrMnO3 fabricated to be compositionally identical to the alloy. The magnetic entropy change and relative cooling power were larger for the alloy than the superlattice, though both are suppressed relative to bulk materials. These results indicate that disorder of the A-site cation species in the perovskite structure may play a crucial role in defining the magnetocaloric effect in complex oxide materials.
NASA Astrophysics Data System (ADS)
Danila, B.; McGurn, A. R.
2005-03-01
A theoretical discussion is given of the diffuse scattering of p-polarized electromagnetic waves from a vacuum-dielectric interface characterized by a one-dimensional disorder in the form of parallel, Gaussian shaped, dielectric ridges positioned at random on a planar semi-infinite dielectric substrate. The parameters of the surface roughness are chosen so that the surface is characterized as weakly rough with a low ridge concentration. The emphasis is on phase coherent features in the speckle pattern of light scattered from the surface. These features are determined from the intensity-intensity correlation function of the speckle pattern and are studied as functions of the frequency of light for frequencies near the dielectric frequency resonances of the ridge material. In the first part of the study, the ridges on the substrate are taken to be identical, made from either GaAs, NaF, or ZnS. The substrate for all cases is CdS. In a second set of studies, the heights and widths of the ridges are statistically distributed. The effects of these different types of randomness on the scattering from the random array of dielectric ridges are determined near the dielectric resonance frequency of the ridge material. The work presented is an extension of studies [A. B. McGurn and R. M. Fitzgerald, Phys. Rev. B 65, 155414 (2002)] that originally treated only the differential reflection coefficient of the diffuse scattering of light (not speckle correlation functions) from a system of identical ridges. The object of the present work is to demonstrate the effects of the dielectric frequency resonances of the ridge materials on the phase coherent features found in the speckle patterns of the diffusely scattered light. The dielectric frequency resonances are shown to enhance the observation of the weak localization of electromagnetic surface waves at the random interface. The frequencies treated in this work are in the infrared. Previous weak localization studies have concentrated mainly on the visible and ultraviolet.
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
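To make the setup concrete, here is a small one-dimensional sketch in Python (assuming scikit-learn is available): sample the input from its natural standard-normal distribution, build an orthonormal Hermite design matrix, and recover a sparse coefficient vector from fewer samples than basis functions. The polynomial order, sample count, sparsity pattern, and the Lasso stand-in for ℓ1-minimization are all illustrative assumptions, not the paper's setup.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lasso   # any l1-type solver would do here

rng = np.random.default_rng(0)
order, n_samples = 20, 15                 # 21 basis functions, fewer samples than unknowns

x = rng.standard_normal(n_samples)        # natural sampling distribution for Hermite PC
Phi = np.column_stack([
    hermeval(x, np.eye(order + 1)[k]) / np.sqrt(factorial(k))   # orthonormal He_k(x)/sqrt(k!)
    for k in range(order + 1)
])

c_true = np.zeros(order + 1)
c_true[[0, 3, 7]] = [1.0, 0.5, -0.25]     # sparse "true" PC coefficients
y = Phi @ c_true + 1e-3 * rng.standard_normal(n_samples)

c_hat = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000).fit(Phi, y).coef_
print("indices of largest recovered coefficients:", np.argsort(np.abs(c_hat))[-3:])
```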
Statistics of partially-polarized fields: beyond the Stokes vector and coherence matrix
NASA Astrophysics Data System (ADS)
Charnotskii, Mikhail
2017-08-01
Traditionally, partially polarized light is characterized by the four Stokes parameters. An equivalent description is also provided by the correlation tensor of the optical field. These statistics specify only the second moments of the complex amplitudes of the narrow-band two-dimensional electric field of the optical wave. The electric field vector of a random quasi-monochromatic wave is a nonstationary, oscillating, two-dimensional real random variable. We introduce a novel statistical description of these partially polarized waves: the Period-Averaged Probability Density Function (PA-PDF) of the field. The PA-PDF contains more information on the polarization state of the field than the Stokes vector. In particular, in addition to the conventional distinction between the polarized and depolarized components of the field, the PA-PDF allows the coherent and fluctuating components of the field to be separated. We present several model examples of fields with identical Stokes vectors and very distinct shapes of the PA-PDF. In the simplest case of a nonstationary, oscillating normal 2-D probability distribution of the real electric field and a stationary 4-D probability distribution of the complex amplitudes, the newly introduced PA-PDF is determined by 13 parameters that include the first moments and covariance matrix of the quadrature components of the oscillating vector field.
Approximate Genealogies Under Genetic Hitchhiking
Pfaffelhuber, P.; Haubold, B.; Wakolbinger, A.
2006-01-01
The rapid fixation of an advantageous allele leads to a reduction in linked neutral variation around the target of selection. The genealogy at a neutral locus in such a selective sweep can be simulated by first generating a random path of the advantageous allele's frequency and then a structured coalescent in this background. Usually the frequency path is approximated by a logistic growth curve. We discuss an alternative method that approximates the genealogy by a random binary splitting tree, a so-called Yule tree that does not require first constructing a frequency path. Compared to the coalescent in a logistic background, this method gives a slightly better approximation for identity by descent during the selective phase and a much better approximation for the number of lineages that stem from the founder of the selective sweep. In applications such as the approximation of the distribution of Tajima's D, the two approximation methods perform equally well. For relevant parameter ranges, the Yule approximation is faster. PMID:17182733
Scaling of flow distance in random self-similar channel networks
Troutman, B.M.
2005-01-01
Natural river channel networks have been shown in empirical studies to exhibit power-law scaling behavior characteristic of self-similar and self-affine structures. Of particular interest is to describe how the distribution of distance to the outlet changes as a function of network size. In this paper, networks are modeled as random self-similar rooted tree graphs and scaling of distance to the root is studied using methods in stochastic branching theory. In particular, the asymptotic expectation of the width function (number of nodes as a function of distance to the outlet) is derived under conditions on the replacement generators. It is demonstrated further that the branching number describing rate of growth of node distance to the outlet is identical to the length ratio under a Horton-Strahler ordering scheme as order gets large, again under certain restrictions on the generators. These results are discussed in relation to drainage basin allometry and an application to an actual drainage network is presented. © World Scientific Publishing Company.
[Can the local energy minimization refine the PDB structures of different resolution universally?].
Godzi, M G; Gromova, A P; Oferkin, I V; Mironov, P V
2009-01-01
The local energy minimization was statistically validated as the refinement strategy for PDB structure pairs of different resolution. Thirteen pairs of structures with the only difference in resolution were extracted from PDB, and the structures of 11 identical proteins obtained by different X-ray diffraction techniques were represented. The distribution of RMSD values was calculated for these pairs before and after the local energy minimization of each structure. The MMFF94 force field was used for energy calculations, and the quasi-Newton method was used for local energy minimization. By comparison of these two RMSD distributions, the local energy minimization was shown to statistically increase the structural differences in pairs, so that it cannot be used for refinement purposes. To explore the prospects of complex refinement strategies based on energy minimization, randomized structures were obtained by moving the initial PDB structures as far as the minimized structures had been moved in a multidimensional space of atomic coordinates. For these randomized structures, the RMSD distribution was calculated and compared with that for minimized structures. The significant differences in their mean values indicate that the energy surface of the protein has only a few minima near the conformations of different resolution obtained by X-ray diffraction for PDB. Some other results obtained by exploring the energy surface near these conformations are also presented. These results are expected to be very useful for the development of new protein refinement strategies based on energy minimization.
Bounds on the conductivity of a suspension of random impenetrable spheres
NASA Astrophysics Data System (ADS)
Beasley, J. D.; Torquato, S.
1986-11-01
We compare the general Beran bounds on the effective electrical conductivity of a two-phase composite to the bounds derived by Torquato for the specific model of spheres distributed throughout a matrix phase. For the case of impenetrable spheres, these bounds are shown to be identical and to depend on the microstructure through the sphere volume fraction φ2 and a three-point parameter ζ2, which is an integral over a three-point correlation function. We evaluate ζ2 exactly through third order in φ2 for distributions of impenetrable spheres. This expansion is compared to the analogous results of Felderhof and of Torquato and Lado, all of whom employed the superposition approximation for the three-particle distribution function involved in ζ2. The results indicate that the exact ζ2 will be greater than the value calculated under the superposition approximation. For reasons of mathematical analogy, the results obtained here apply as well to the determination of the thermal conductivity, dielectric constant, and magnetic permeability of composite media and the diffusion coefficient of porous media.
Coloring geographical threshold graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Percus, Allon; Muller, Tobias
We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ = (ln n / ln ln n)(1 + o(1)). Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)², and specify the constant C.
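For a quick empirical look at this behavior, the Python sketch below (assuming networkx is available) generates a geographical threshold graph, colors it greedily, and compares the number of colors used with the clique number. The graph size, threshold θ, and coloring strategy are illustrative choices, and greedy coloring only upper-bounds the chromatic number; this is not the algorithm analyzed in the paper.

```python
import networkx as nx

n, theta = 500, 100.0                      # illustrative parameters
G = nx.geographical_threshold_graph(n, theta, seed=42)

coloring = nx.coloring.greedy_color(G, strategy="largest_first")
n_colors = 1 + max(coloring.values())                       # colours used by the heuristic
clique = max(len(c) for c in nx.find_cliques(G))            # clique number (lower bound on chi)

print(f"{G.number_of_edges()} edges, greedy colours = {n_colors}, clique number = {clique}")
```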
On Measuring and Reducing Selection Bias with a Quasi-Doubly Randomized Preference Trial
ERIC Educational Resources Information Center
Joyce, Ted; Remler, Dahlia K.; Jaeger, David A.; Altindag, Onur; O'Connell, Stephen D.; Crockett, Sean
2017-01-01
Randomized experiments provide unbiased estimates of treatment effects, but are costly and time consuming. We demonstrate how a randomized experiment can be leveraged to measure selection bias by conducting a subsequent observational study that is identical in every way except that subjects choose their treatment--a quasi-doubly randomized…
Montgomery, Rhonda J V; Kwak, Jung; Kosloski, Karl; O'Connell Valuch, Katharine
2011-09-01
We examined the effects of a manualized care management protocol specifically designed for care managers working with caregivers, the Tailored Caregiver Assessment and Referral® (TCARE®) protocol, on caregiver identity discrepancy, burden, and depressive symptoms. Preliminary data from a longitudinal, randomized, controlled intervention study with 266 family caregivers served by 52 care managers in 4 states were analyzed using repeated measures random effects regression procedures. Caregivers in the intervention and control groups were repeatedly assessed for up to 9 months on caregiver identity discrepancy; 3 areas of caregiving burden (objective, relationship, and stress burdens); depression; and intention for nursing home placement. We found significant group by time interaction effects for caregiver identity discrepancy, relationship burden, stress burden, depression, and intention for nursing home placement. Caregivers in the intervention group experienced significant improvement on these measures, whereas caregivers in the control group worsened on these measures over time. The preliminary findings provide strong support for effectiveness of the TCARE® protocol on improving caregiver well-being and mental health outcomes.
Kwak, Jung; Kosloski, Karl; O’Connell Valuch, Katharine
2011-01-01
Objectives. We examined the effects of a manualized care management protocol specifically designed for care managers working with caregivers, the Tailored Caregiver Assessment and Referral® (TCARE®) protocol, on caregiver identity discrepancy, burden, and depressive symptoms. Methods. Preliminary data from a longitudinal, randomized, controlled intervention study with 266 family caregivers served by 52 care managers in 4 states were analyzed using repeated measures random effects regression procedures. Caregivers in the intervention and control groups were repeatedly assessed for up to 9 months on caregiver identity discrepancy, 3 areas of caregiving burden—objective, relationship, and stress burdens; depression; and intention for nursing home placement. Results. We found significant group by time interaction effects for caregiver identity discrepancy, relationship burden, stress burden, depression, and intention for nursing home placement. Caregivers in the intervention group experienced significant improvement on these measures, whereas caregivers in the control group worsened on these measures over time. Discussion. The preliminary findings provide strong support for effectiveness of the TCARE® protocol on improving caregiver well-being and mental health outcomes. PMID:21840840
SETI and SEH (Statistical Equation for Habitables)
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2011-01-01
The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book "Habitable planets for man" (1964). In this paper, we first provide the statistical generalization of the original and by now too simplistic Dole equation. In other words, a product of ten positive numbers is now turned into the product of ten positive random variables. This we call the SEH, an acronym standing for "Statistical Equation for Habitables". The mathematical structure of the SEH is then derived. The proof is based on the central limit theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. But now we also derive the standard deviation, the mode, the median and all the moments of this new lognormal NHab random variable. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. An application of our SEH then follows. The (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cube root of NHab. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies in 2008. Data Enrichment Principle. It should be noticed that ANY positive number of random variables in the SEH is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor becomes known to scientists. This capability to make room for more future factors in the SEH we call the "Data Enrichment Principle", and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. A practical example is then given of how our SEH works numerically. We work out in detail the case where each of the ten random variables is uniformly distributed around its own mean value as given by Dole back in 1964 and has an assumed standard deviation of 10%. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ± 200 million, and the average distance between any couple of nearby habitable planets should be about 88 light years ± 40 light years. Finally, we match our SEH results against the results of the Statistical Drake Equation that we introduced in our 2008 IAC presentation.
As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). And the average distance between any two nearby habitable planets turns out to be much smaller than the average distance between any two neighboring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times higher than the average distance between any couple of adjacent habitable planets.
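The lognormal claim above is easy to reproduce numerically: the product of several independent positive factors has an approximately lognormal distribution, because the CLT applies to the sum of the log-factors. In the Python sketch below, the factor means, the 10% spread, and the overall scale are placeholder assumptions, not Dole's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_factors, n_draws = 10, 200_000
means = rng.uniform(0.1, 0.9, size=n_factors)            # placeholder mean of each factor
factors = rng.uniform(means * 0.9, means * 1.1,           # ~10% spread around each mean
                      size=(n_draws, n_factors))
n_hab = 1e11 * factors.prod(axis=1)                       # product of factors, scaled arbitrarily

log_n = np.log(n_hab)                                     # approximately Gaussian by the CLT
mu, sigma = log_n.mean(), log_n.std()
print(f"lognormal fit: median = {np.exp(mu):.3e}, "
      f"mean = {np.exp(mu + sigma**2 / 2):.3e}")
```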
Shades of Threat: Racial Identity as a Moderator of Stereotype Threat
ERIC Educational Resources Information Center
Davis, Claytie, III; Aronson, Joshua; Salinas, Moises
2006-01-01
This study investigated Black racial identity attitudes as a moderator of intellectual performance in potentially stereotype threatening situations. Ninety-eight African American students were randomly assigned to one of three stereotype threatening conditions: low threat, medium threat, or high threat. Analyses confirmed a stereotype threat…
On the apparent positions of T Tauri stars in the H-R diagram
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenyon, S.J.; Hartmann, L.W.
1990-01-01
The spread in apparent luminosities of T Tauri stars caused by occultation and emission from protostellar disks is investigated. A random distribution of disk inclination angles, coupled with a plausible range of accretion rates, introduces a significant scatter in apparent luminosities for intrinsically identical stars. The observed dispersion of luminosities for K7-M1 Hayashi track stars thought to have disks in Taurus-Auriga is similar to predictions of the simple accretion disk model, which suggests that age determinations for many pre-main-sequence stars are uncertain. The results also suggest that Stahler's birthline for convective track pre-main-sequence stars may be located at slightly lower luminosities than previously thought. 38 refs.
Impact basins in Southern Daedalia, Mars: Evidence for clustered impactors?
NASA Technical Reports Server (NTRS)
Frey, Herbert; Roark, James H.
1994-01-01
The distribution of ancient massifs and old cratered terrain in the southern Daedalia region indicate the presence of at least two and probably three impact basins of large size. One of these is located near where Craddock et al. placed their center for the Daedalia Basin, but it has very different ring diameters. These basins have rings exceeding 1000 km diameter and overlap significantly with centers separated by 500 to 600 km at nearly identical latitudes of -26 to -29 deg. The smaller westernmost basin appears slightly better preserved, but there is little evidence for obvious superposition that might imply a temporal sequence. Recognizing the improbability of random impacts producing aligned, nearly contemporaneous features, we suggest these basins may have resulted from clustered impactors.
27 CFR 555.105 - Distributions to nonlicensees, nonpermittees, and limited permittees.
Code of Federal Regulations, 2011 CFR
2011-04-01
... otherwise distributing explosive materials to a business entity must verify the identity of the... distributor's premises, the distributor must in all instances verify the identity of the person accepting... sporting, recreational, or cultural purposes in antique firearms as defined in 18 U.S.C. 921(a)(16), or in...
Sexual Identity Mobility and Depressive Symptoms: A Longitudinal Analysis of Sexual Minority Women
Everett, Bethany; Talley, Amelia; Hughes, Tonda; Wilsnack, Sharon; Johnson, Timothy P.
2016-01-01
Sexual minority identity (bisexual, lesbian) is a known risk factor for depression in women. This study examines a facet of minority stress prevalent among women—sexual identity mobility—as an identity-related contributor to higher levels of depressive symptoms. We used three waves of data from the Chicago Health and Life Experiences of Women (CHLEW) study, a longitudinal study of sexual minority women (N = 306). Random effects OLS regression models were constructed to examine the effect of sexual-identity changes on depressive symptoms. We found that 25.6% of the sample reported a sexual-identity change between Wave I and Wave II, and 24.91% reported a sexual identity change between Waves II and III. Women who reported a change in sexual identity also reported more depressive symptoms subsequent to identity change. This effect was moderated by the number of years participants had reported their baseline identity and by whether the participant had initiated a romantic relationship with a male partner. PMID:27255306
Recovering Galaxy Properties Using Gaussian Process SED Fitting
NASA Astrophysics Data System (ADS)
Iyer, Kartheik; Awan, Humna
2018-01-01
Information about physical quantities like the stellar mass, star formation rates, and ages of distant galaxies is contained in their spectral energy distributions (SEDs), obtained through photometric surveys like SDSS, CANDELS, and LSST. However, noise in the photometric observations is often a problem, and using naive machine learning methods to estimate physical quantities can result in overfitting the noise, or converging on solutions that lie outside the physical regime of parameter space. We use Gaussian Process regression trained on a sample of SEDs corresponding to galaxies from a Semi-Analytic model (Somerville+15a) to estimate their stellar masses, and compare its performance to a variety of different methods, including simple linear regression, Random Forests, and k-Nearest Neighbours. We find that the Gaussian Process method is robust to noise and predicts not only stellar masses but also their uncertainties. The method is also robust in the cases where the distribution of the training data is not identical to the target data, which can be extremely useful when generalized to more subtle galaxy properties.
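A minimal Gaussian Process regression sketch in the same spirit (assuming scikit-learn is available) shows how predictions come with per-object uncertainties. The synthetic photometry-to-mass mapping, kernel, and noise level below are placeholders rather than the semi-analytic model catalogue used in the study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n_train, n_bands = 500, 6
X = rng.normal(size=(n_train, n_bands))                     # mock photometric features
true_logmass = 10 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 2] ** 2
y = true_logmass + rng.normal(0, 0.05, n_train)             # noisy "catalogue" masses

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = rng.normal(size=(5, n_bands))
mean, std = gp.predict(X_new, return_std=True)              # predictions with uncertainties
for m, s in zip(mean, std):
    print(f"log M* = {m:.2f} +/- {s:.2f}")
```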
Identity adjustment among Afghanistan and Iraq war veterans with reintegration difficulty.
Orazem, Robert J; Frazier, Patricia A; Schnurr, Paula P; Oleson, Heather E; Carlson, Kathleen F; Litz, Brett T; Sayer, Nina A
2017-08-01
To examine perceptions of identity adjustment in a diverse, national sample of U.S. veterans of the wars in Afghanistan and Iraq. The authors conducted a planned thematic analysis of text written by Afghanistan and Iraq war veterans when they were asked to describe their reintegration difficulties as part of a randomized controlled trial (RCT) of online expressive writing (Sayer et al., 2015). Participants were 100 randomly selected veterans from the larger study (42 women and 58 men, 60 active duty and 38 reserves or National Guard). Nearly two-thirds of participants wrote about their identity adjustment. The 5 interrelated areas of identity adjustment difficulty were (a) feeling like one does not belong in civilian society, (b) missing the military's culture and structured lifestyle, (c) holding negative views of civilian society, (d) feeling left behind compared to civilian counterparts due to military service, and (e) having difficulty finding meaning in the civilian world. The authors did not observe differences by gender. However, those deployed from active duty were particularly likely to feel as if they did not belong in civilian society and that they had not acquired needed skills, whereas those deployed from the reserves or National Guard experienced difficulty in reestablishing former civilian identities. Identity adjustment is a critical yet understudied aspect of veteran reintegration into community life following combat deployment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Hoeffding Type Inequalities and their Applications in Statistics and Operations Research
NASA Astrophysics Data System (ADS)
Daras, Tryfon
2007-09-01
Large Deviation theory is the branch of Probability theory that deals with rare events. Sometimes, these events can be described by the sum of random variables that deviates from its mean by more than a "normal" amount. A precise calculation of the probabilities of such events turns out to be crucial in a variety of different contexts (e.g., in Probability Theory, Statistics, Operations Research, Statistical Physics, Financial Mathematics, etc.). Recent applications of the theory deal with random walks in random environments, interacting diffusions, heat conduction, and polymer chains [1]. In this paper we prove an inequality of exponential type, namely theorem 2.1, which gives a large deviation upper bound for a specific sequence of r.v.s. Inequalities of this type have many applications in Combinatorics [2]. The inequality generalizes already proven results of this type in the case of symmetric probability measures. As consequences of the inequality we obtain: (a) large deviation upper bounds for exchangeable Bernoulli sequences of random variables, generalizing results proven for independent and identically distributed Bernoulli sequences of r.v.s., and (b) a general form of Bernstein's inequality. We compare the inequality with large deviation results already proven by the author and discuss its advantages. Finally, using the inequality, we solve one of the basic problems of Operations Research (the bin packing problem) in the case of exchangeable r.v.s.
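For orientation, the classical Hoeffding bound that such results generalize can be checked empirically in a few lines of Python: for i.i.d. Bernoulli(1/2) variables, P(S_n/n - 1/2 ≥ t) ≤ exp(-2nt²). The sample size and threshold below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, reps = 200, 0.1, 100_000
means = rng.binomial(1, 0.5, size=(reps, n)).mean(axis=1)   # sample means of n Bernoullis

empirical = (means - 0.5 >= t).mean()        # empirical tail probability
hoeffding = np.exp(-2 * n * t**2)            # Hoeffding's exponential upper bound
print(f"empirical tail {empirical:.4f} <= Hoeffding bound {hoeffding:.4f}")
```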
Soto-Centeno, J Angel; Barrow, Lisa N; Allen, Julie M; Reed, David L
2013-01-01
We evaluated the mtDNA divergence and relationships within Geomys pinetis to assess the status of formerly recognized Geomys taxa. Additionally, we integrated new hypothesis-based tests in ecological niche models (ENM) to provide greater insight into causes for divergence and potential barriers to gene flow in Southeastern United States (Alabama, Florida, and Georgia). Our DNA sequence dataset confirmed and strongly supported two distinct lineages within G. pinetis occurring east and west of the ARD. Divergence date estimates showed that eastern and western lineages diverged about 1.37 Ma (1.9 Ma–830 ka). Predicted distributions from ENMs were consistent with molecular data and defined each population east and west of the ARD with little overlap. Niche identity and background similarity tests were statistically significant suggesting that ENMs from eastern and western lineages are not identical or more similar than expected based on random localities drawn from the environmental background. ENMs also support the hypothesis that the ARD represents a ribbon of unsuitable climate between more suitable areas where these populations are distributed. The estimated age of divergence between eastern and western lineages of G. pinetis suggests that the divergence was driven by climatic conditions during Pleistocene glacial–interglacial cycles. The ARD at the contact zone of eastern and western lineages of G. pinetis forms a significant barrier promoting microgeographic isolation that helps maintain ecological and genetic divergence. PMID:23789071
ERIC Educational Resources Information Center
Kpaduwa, Fidelis Iheanyi
2010-01-01
This current quantitative correlational research study evaluated the residential consumers' knowledge of wireless network security and its relationship with identity theft. Data analysis was based on a sample of 254 randomly selected students. All the study participants completed a survey questionnaire designed to measure their knowledge of…
ADAPTIVE MATCHING IN RANDOMIZED TRIALS AND OBSERVATIONAL STUDIES
van der Laan, Mark J.; Balzer, Laura B.; Petersen, Maya L.
2014-01-01
SUMMARY In many randomized and observational studies the allocation of treatment among a sample of n independent and identically distributed units is a function of the covariates of all sampled units. As a result, the treatment labels among the units are possibly dependent, complicating estimation and posing challenges for statistical inference. For example, cluster randomized trials frequently sample communities from some target population, construct matched pairs of communities from those included in the sample based on some metric of similarity in baseline community characteristics, and then randomly allocate a treatment and a control intervention within each matched pair. In this case, the observed data can neither be represented as the realization of n independent random variables, nor, contrary to current practice, as the realization of n/2 independent random variables (treating the matched pair as the independent sampling unit). In this paper we study estimation of the average causal effect of a treatment under experimental designs in which treatment allocation potentially depends on the pre-intervention covariates of all units included in the sample. We define efficient targeted minimum loss based estimators for this general design, present a theorem that establishes the desired asymptotic normality of these estimators and allows for asymptotically valid statistical inference, and discuss implementation of these estimators. We further investigate the relative asymptotic efficiency of this design compared with a design in which unit-specific treatment assignment depends only on the units’ covariates. Our findings have practical implications for the optimal design and analysis of pair matched cluster randomized trials, as well as for observational studies in which treatment decisions may depend on characteristics of the entire sample. PMID:25097298
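As a concrete toy version of the pair-matched design described above, the Python sketch below matches communities on a baseline characteristic and randomizes treatment within each pair, so that every unit's assignment depends on the covariates of the whole sample. The single baseline covariate, the adjacent-pair matching rule, and the sample size are illustrative assumptions, not the estimators studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_communities = 20
baseline = rng.normal(size=n_communities)            # baseline community characteristic

order = np.argsort(baseline)                         # greedy matching on similarity
pairs = order.reshape(-1, 2)                         # adjacent communities form a pair

treatment = np.zeros(n_communities, dtype=int)
for a, b in pairs:
    if rng.random() < 0.5:                           # randomise within the matched pair
        treatment[a] = 1
    else:
        treatment[b] = 1

print(treatment.sum(), "treated of", n_communities)
```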
Random matrix approach to cross correlations in financial data
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene
2002-06-01
We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices
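The basic RMT comparison can be sketched as follows in Python: eigenvalues of the empirical correlation matrix are compared against the Marchenko-Pastur band λ± = (1 ± sqrt(N/T))² that describes purely random correlations. Simulated i.i.d. Gaussian returns stand in for the stock data, and the dimensions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500                                   # N stocks, T time points
returns = rng.standard_normal((T, N))             # placeholder for real return series

C = np.corrcoef(returns, rowvar=False)            # N x N cross-correlation matrix
eigvals = np.linalg.eigvalsh(C)

q = N / T
lam_minus, lam_plus = (1 - np.sqrt(q))**2, (1 + np.sqrt(q))**2
print(f"empirical eigenvalues in [{eigvals.min():.3f}, {eigvals.max():.3f}]")
print(f"Marchenko-Pastur band    [{lam_minus:.3f}, {lam_plus:.3f}]")
```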
Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W
1988-04-22
Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, ξ) on the properties of observable events, with emphasis on the observed open-time (ξ-open-time). The cumulants and Laplace transform for a ξ-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the ξ-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
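The effect of a detection limit can be illustrated with a direct simulation: closed sojourns shorter than ξ go unresolved, so consecutive open periods merge into longer observed ξ-open-times. In the Python sketch below, the exponential sojourn times (the Markov special case), the rate parameters, and the value of ξ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_open, mean_closed, xi = 1.0, 0.3, 0.2     # xi is the detection limit
n_sojourns = 100_000

opens = rng.exponential(mean_open, n_sojourns)
closeds = rng.exponential(mean_closed, n_sojourns)

observed = []
current = 0.0
for o, c in zip(opens, closeds):
    current += o
    if c < xi:          # closed gap too short to resolve: open periods merge
        continue
    observed.append(current)
    current = 0.0

observed = np.array(observed)
print(f"true mean open time {mean_open:.2f}, "
      f"observed xi-open-time mean {observed.mean():.2f}")
```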
Patient privacy protection using anonymous access control techniques.
Weerasinghe, D; Rajarajan, M; Elmufti, K; Rakocevic, V
2008-01-01
The objective of this study is to develop a solution to preserve security and privacy in a healthcare environment where health-sensitive information will be accessed by many parties and stored in various distributed databases. The solution should maintain anonymous medical records and it should be able to link anonymous medical information in distributed databases into a single patient medical record with the patient identity. In this paper we present a protocol that can be used to authenticate and authorize patients to healthcare services without revealing the patient's identity. The healthcare service can identify the patient using separate temporary identities in each identification session, and medical records are linked to these temporary identities. Temporary identities can be used to enable record linkage and to reverse track the real patient identity in critical medical situations. The proposed protocol provides the main security and privacy services, such as user anonymity, message privacy, message confidentiality, user authentication, user authorization, and protection against message replay attacks. The medical environment validates the patient at the healthcare service as a real and registered patient for the medical services. Using the proposed protocol, the patient's anonymous medical records at different healthcare services can be linked into one single report, and it is possible to securely reverse track an anonymous patient to the real identity. The protocol protects patient privacy with a secure anonymous authentication to healthcare services and medical record registries according to European and UK legislation, where the patient's real identity is not disclosed with the distributed patient medical records.
Owen, R L; Bhalla, D K
1983-10-01
M cells in Peyer's patch follicle epithelium endocytose and transport luminal materials to intraepithelial lymphocytes. We examined (1) enzymatic characteristics of the epithelium covering mouse and rat Peyer's patches by using cytochemical techniques, (2) distribution of lectin-binding sites by peroxidase-labeled lectins, and (3) anionic site distribution by using cationized ferritin to develop a profile of M cell surface properties. Alkaline phosphatase activity resulted in deposits of dense reaction product over follicle surfaces but was markedly reduced over M cells, unlike esterase which formed equivalent or greater product over M cells. Concanavalin A, ricinus communis agglutinin, wheat germ agglutinin and peanut agglutinin reacted equally with M cells and with surrounding enterocytes over follicle surfaces. Cationized ferritin distributed in a random fashion along microvillus membranes of both M cells and enterocytes, indicating equivalent anionic site distribution. Staining for alkaline phosphatase activity provides a new approach for distinguishing M cells from enterocytes at the light microscopic level. Identical binding of lectins indicates that M cells and enterocytes share common glycoconjugates even though molecular groupings may differ. Lectin binding and anionic charge similarities of M cells and enterocytes may facilitate antigen sampling by M cells of particles and compounds that adhere to intestinal surfaces in non-Peyer's patch areas.
Ethnic Identity and Personal Well-Being of People of Color: A Meta-Analysis
ERIC Educational Resources Information Center
Smith, Timothy B.; Silva, Lynda
2011-01-01
This article summarizes research examining the relationship between the constructs of ethnic identity and personal well-being among people of color in North America. Data from 184 studies analyzed with random effects models yielded an omnibus effect size of r = 0.17, suggesting a modest relationship between the 2 constructs. The relationship was…
Analysis of Ego Identity Status of School of Physical Education and Sports
ERIC Educational Resources Information Center
Turan, Mehmet Behzat; Koç, Kenan; Karaoglu, Baris
2017-01-01
This study aimed to analyze ego identity status of the candidates who studied in school of physical education and sports. For this purpose, randomly selected 651 individuals, who attended to Kayseri Erciyes University, school of physical education and sports, were included to this study. In this research, Extended Objective Measure of Ego Identity…
Heck, Nicholas C; Mirabito, Lucas A; LeMaire, Kelly; Livingston, Nicholas A; Flentje, Annesa
2017-01-01
The current study examined the frequency with which randomized controlled trials (RCTs) of behavioral and psychological interventions for anxiety and depression include data pertaining to participant sexual orientation and nonbinary gender identities. Using systematic review methodology, the databases PubMed and PsycINFO were searched to identify RCTs published in 2004, 2009, and 2014. Random selections of 400 articles per database per year (2,400 articles in total) were considered for inclusion in the review. Articles meeting inclusion criteria were read and coded by the research team to identify whether the trial reported data pertaining to participant sexual orientation and nonbinary gender identities. Additional trial characteristics were also identified and indexed in our database (e.g., sample size, funding source). Of the 232 articles meeting inclusion criteria, only 1 reported participants' sexual orientation, and zero articles included nonbinary gender identities. A total of 52,769 participants were represented in the trials, 93 of which were conducted in the United States, and 43 acknowledged the National Institutes of Health as a source of funding. Despite known mental health disparities on the basis of sexual orientation and nonbinary gender identification, researchers evaluating interventions for anxiety and depression are not reporting on these important demographic characteristics. Reporting practices must change to ensure that our interventions generalize to lesbian, gay, bisexual, and transgender persons. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Probabilistic pathway construction.
Yousofshahi, Mona; Lee, Kyongbum; Hassoun, Soha
2011-07-01
Expression of novel synthesis pathways in host organisms amenable to genetic manipulations has emerged as an attractive metabolic engineering strategy to overproduce natural products, biofuels, biopolymers and other commercially useful metabolites. We present a pathway construction algorithm for identifying viable synthesis pathways compatible with balanced cell growth. Rather than exhaustive exploration, we investigate probabilistic selection of reactions to construct the pathways. Three different selection schemes are investigated for the selection of reactions: high metabolite connectivity, low connectivity and uniformly random. For all case studies, which involved a diverse set of target metabolites, the uniformly random selection scheme resulted in the highest average maximum yield. When compared to an exhaustive search enumerating all possible reaction routes, our probabilistic algorithm returned nearly identical distributions of yields, while requiring far less computing time (minutes vs. years). The pathways identified by our algorithm have previously been confirmed in the literature as viable, high-yield synthesis routes. Prospectively, our algorithm could facilitate the design of novel, non-native synthesis routes by efficiently exploring the diversity of biochemical transformations in nature. Copyright © 2011 Elsevier Inc. All rights reserved.
Stochastic characterization of phase detection algorithms in phase-shifting interferometry
Munteanu, Florin
2016-11-01
Phase-shifting interferometry (PSI) is the preferred non-contact method for profiling sub-nanometer surfaces. Based on monochromatic light interference, the method computes the surface profile from a set of interferograms collected at separate stepping positions. Errors in the estimated profile are introduced when these positions are not located correctly. In order to cope with this problem, various algorithms that minimize the effects of certain types of stepping errors (linear, sinusoidal, etc.) have been developed. Despite the relatively large number of algorithms suggested in the literature, there is no unified way of characterizing their performance when additional unaccounted random errors are present. Here, we suggest a procedure for quantifying the expected behavior of each algorithm in the presence of independent and identically distributed (i.i.d.) random stepping errors, which can occur in addition to the systematic errors for which the algorithm has been designed. As a result, the usefulness of this method derives from the fact that it can guide the selection of the best algorithm for specific measurement situations.
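A minimal Monte Carlo of this kind of characterization is sketched below in Python: the nominal π/2 step positions of the standard four-step algorithm are perturbed with i.i.d. errors, and the bias and spread of the recovered phase are reported. The choice of algorithm, error magnitude, and signal parameters are illustrative assumptions, not the algorithms analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
true_phase = 0.7                 # radians
a, b = 1.0, 0.5                  # background and modulation
sigma_step = 0.05                # i.i.d. step-position error (radians)
n_trials = 20_000

nominal = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
steps = nominal + rng.normal(0, sigma_step, size=(n_trials, 4))
I = a + b * np.cos(true_phase + steps)                    # four interferograms per trial

est = np.arctan2(I[:, 3] - I[:, 1], I[:, 0] - I[:, 2])    # four-step phase estimator
err = est - true_phase
print(f"bias {err.mean():.5f} rad, std {err.std():.5f} rad")
```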
Percolation of disordered jammed sphere packings
NASA Astrophysics Data System (ADS)
Ziff, Robert M.; Torquato, Salvatore
2017-02-01
We determine the site and bond percolation thresholds for a system of disordered jammed sphere packings in the maximally random jammed state, generated by the Torquato-Jiao algorithm. For the site threshold, which gives the fraction of conducting versus non-conducting spheres necessary for percolation, we find p_c = 0.3116(3), consistent with the 1979 value of Powell 0.310(5) and identical within errors to the threshold for the simple-cubic lattice, 0.311 608, which shares the same average coordination number of 6. In terms of the volume fraction ϕ, the threshold corresponds to a critical value φ_c = 0.199. For the bond threshold, which apparently was not measured before, we find p_c = 0.2424(3). To find these thresholds, we considered two shape-dependent universal ratios involving the size of the largest cluster, fluctuations in that size, and the second moment of the size distribution; we confirmed the ratios' universality by also studying the simple-cubic lattice with a similar cubic boundary. The results are applicable to many problems including conductivity in random mixtures, glass formation, and drug loading in pharmaceutical tablets.
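For intuition about the quoted site threshold, here is a rough Python sketch of site percolation on a simple-cubic lattice (rather than the sphere packings themselves): occupy sites with probability p and check how often the occupied sites form a cluster spanning the box as p crosses ~0.3116. The lattice size, trial count, and spanning criterion are illustrative choices.

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(0)
Lside = 64

def spans(p):
    occ = rng.random((Lside, Lside, Lside)) < p
    labels, _ = label(occ)                       # 6-connected (face) clusters by default
    top, bottom = np.unique(labels[0]), np.unique(labels[-1])
    common = np.intersect1d(top, bottom)
    return np.any(common > 0)                    # some cluster touches both opposite faces

for p in (0.29, 0.31, 0.33):
    hits = sum(spans(p) for _ in range(20))
    print(f"p = {p:.2f}: spanning in {hits}/20 realisations")
```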
An Improved Internal Consistency Reliability Estimate.
ERIC Educational Resources Information Center
Cliff, Norman
1984-01-01
The proposed coefficient is derived by assuming that the average Goodman-Kruskal gamma between items of identical difficulty would be the same for items of different difficulty. An estimate of covariance between items of identical difficulty leads to an estimate of the correlation between two tests with identical distributions of difficulty.…
The Dutch Identity: A New Tool for the Study of Item Response Models.
ERIC Educational Resources Information Center
Holland, Paul W.
1990-01-01
The Dutch Identity is presented as a useful tool for expressing the basic equations of item response models that relate the manifest probabilities to the item response functions and the latent trait distribution. Ways in which the identity may be exploited are suggested and illustrated. (SLD)
Strengthening quitter self-identity: An experimental study.
Meijer, Eline; Gebhardt, Winifred A; van Laar, Colette; van den Putte, Bas; Evers, Andrea W M
2018-06-10
Smoking-related self-identity processes are important for smoking cessation. We examined whether quitter self-identity (i.e. identification with quitting smoking) could be strengthened through a writing exercise, and whether expected social support for quitting, manipulated through vignettes, could facilitate identification with quitting. Participants (N = 339 daily smokers) were randomly assigned to a 2 (identity: strengthened quitter self-identity vs. control) × 3 (social support: present vs. absent vs. neutral control) between-participants design. The main outcome was post-test quitter self-identity. Post-test quitter self-identity was not strengthened successfully. Only a small and marginally significant intervention effect was found on quitter self-identity, which did not generalise to positively influence quit-intention or behaviour. The social support manipulation did not facilitate quitter self-identity. Secondary content analyses showed that quitter self-identity was strengthened among participants who linked quitting smoking to their lifestyle, wanted to become quitters for health reasons, and whose reasons for becoming quitters included approach of positive aspects of quitting, but not among participants who linked quitter self-identity to their self-perceptions. Results provide insight into the content of smokers' self-conceptualizations as quitters. Writing exercises should be improved and tested to eventually successfully strengthen quitter identities.
Topology of large-scale structure in seeded hot dark matter models
NASA Technical Reports Server (NTRS)
Beaky, Matthew M.; Scherrer, Robert J.; Villumsen, Jens V.
1992-01-01
The topology of the isodensity surfaces in seeded hot dark matter models, in which static seed masses provide the density perturbations in a universe dominated by massive neutrinos is examined. When smoothed with a Gaussian window, the linear initial conditions in these models show no trace of non-Gaussian behavior for r0 equal to or greater than 5 Mpc (h = 1/2), except for very low seed densities, which show a shift toward isolated peaks. An approximate analytic expression is given for the genus curve expected in linear density fields from randomly distributed seed masses. The evolved models have a Gaussian topology for r0 = 10 Mpc, but show a shift toward a cellular topology with r0 = 5 Mpc; Gaussian models with an identical power spectrum show the same behavior.
Ultrafast energy relaxation in single light-harvesting complexes
Maly, Pavel; Gruber, J. Michael; Cogdell, Richard J.; ...
2016-02-22
Energy relaxation in light-harvesting complexes has been extensively studied by various ultrafast spectroscopic techniques, the fastest processes being in the sub–100-fs range. At the same time, much slower dynamics have been observed in individual complexes by single-molecule fluorescence spectroscopy (SMS). In this work, we use a pump–probe-type SMS technique to observe the ultrafast energy relaxation in single light-harvesting complexes LH2 of purple bacteria. After excitation at 800 nm, the measured relaxation time distribution of multiple complexes has a peak at 95 fs and is asymmetric, with a tail at slower relaxation times. When tuning the excitation wavelength, the distribution changes in both its shape and position. The observed behavior agrees with what is to be expected from the LH2 excited states structure. As we show by a Redfield theory calculation of the relaxation times, the distribution shape corresponds to the expected effect of Gaussian disorder of the pigment transition energies. By repeatedly measuring few individual complexes for minutes, we find that complexes sample the relaxation time distribution on a timescale of seconds. Furthermore, by comparing the distribution from a single long-lived complex with the whole ensemble, we demonstrate that, regarding the relaxation times, the ensemble can be considered ergodic. Lastly, our findings thus agree with the commonly used notion of an ensemble of identical LH2 complexes experiencing slow random fluctuations.
Ultrafast energy relaxation in single light-harvesting complexes.
Malý, Pavel; Gruber, J Michael; Cogdell, Richard J; Mančal, Tomáš; van Grondelle, Rienk
2016-03-15
Energy relaxation in light-harvesting complexes has been extensively studied by various ultrafast spectroscopic techniques, the fastest processes being in the sub-100-fs range. At the same time, much slower dynamics have been observed in individual complexes by single-molecule fluorescence spectroscopy (SMS). In this work, we use a pump-probe-type SMS technique to observe the ultrafast energy relaxation in single light-harvesting complexes LH2 of purple bacteria. After excitation at 800 nm, the measured relaxation time distribution of multiple complexes has a peak at 95 fs and is asymmetric, with a tail at slower relaxation times. When tuning the excitation wavelength, the distribution changes in both its shape and position. The observed behavior agrees with what is to be expected from the LH2 excited states structure. As we show by a Redfield theory calculation of the relaxation times, the distribution shape corresponds to the expected effect of Gaussian disorder of the pigment transition energies. By repeatedly measuring few individual complexes for minutes, we find that complexes sample the relaxation time distribution on a timescale of seconds. Furthermore, by comparing the distribution from a single long-lived complex with the whole ensemble, we demonstrate that, regarding the relaxation times, the ensemble can be considered ergodic. Our findings thus agree with the commonly used notion of an ensemble of identical LH2 complexes experiencing slow random fluctuations.
Umaña-Taylor, Adriana J; Kornienko, Olga; Douglass Bayless, Sara; Updegraff, Kimberly A
2018-01-01
Ethnic-racial identity formation represents a key developmental task that is especially salient during adolescence and has been associated with many indices of positive adjustment. The Identity Project intervention, which targeted ethnic-racial identity exploration and resolution, was designed based on the theory that program-induced changes in ethnic-racial identity would lead to better psychosocial adjustment (e.g., global identity cohesion, self-esteem, mental health, academic achievement). Adolescents (N = 215; Mage = 15.02, SD = 0.68; 50% female) participated in a small-scale randomized controlled trial with an attention control group. A cascading mediation model was tested using pre-test and three follow-up assessments (12, 18, and 67 weeks after baseline). The program led to increases in exploration, subsequent increases in resolution and, in turn, higher global identity cohesion, higher self-esteem, lower depressive symptoms, and better grades. Results support the notion that increasing adolescents' ethnic-racial identity can promote positive psychosocial functioning among youth.
NASA Astrophysics Data System (ADS)
Fagents, S. A.; Hamilton, C. W.
2009-12-01
Nearest neighbor (NN) analysis enables the identification of landforms using non-morphological parameters and can be useful for constraining the geological processes contributing to observed patterns of spatial distribution. Explosive interactions between lava and water can generate volcanic rootless cone (VRC) groups that are well suited to geospatial analyses because they consist of a large number of landforms that share a common formation mechanism. We have applied NN analysis tools to quantitatively compare the spatial distribution of VRCs in the Laki lava flow in Iceland to analogous landforms in the Tartarus Colles Region of eastern Elysium Planitia, Mars. Our results show that rootless eruption sites on both Earth and Mars exhibit systematic variations in spatial organization that are related to variations in the distribution of resources (lava and water) at different scales. Field observations in Iceland reveal that VRC groups are composite structures formed by the emplacement of chronologically and spatially distinct domains. Regionally, rootless cones cluster into groups and domains, but within domains NN distances exhibit random to repelled distributions. This suggests that on regional scales VRCs cluster in locations that contain sufficient resources, whereas on local scales rootless eruption sites tend to self-organize into distributions that maximize the utilization of limited resources (typically groundwater). Within the Laki lava flow, near-surface water is abundant and pre-eruption topography appears to exert the greatest control on both lava inundation regions and clustering of rootless eruption sites. In contrast, lava thickness appears to be the controlling factor in the formation of rootless eruption sites in the Tartarus Colles Region. A critical lava thickness may be required to initiate rootless eruptions on Mars because the lava flows must contain sufficient heat for transferred thermal energy to reach the underlying cryosphere and volatilize buried ground ice. In both environments, the spatial distribution of rootless eruption sites on local scales may either be random, which indicates that rootless eruption sites form independently of one another, or repelled, which implies resource limitation. Where competition for limited groundwater causes rootless eruption sites to develop greater than random NN separation, rootless eruption sites can be modeled as a system of pumping wells that extract water from a shared aquifer, thereby generating repelled distributions due to non-initiation or early cessation of rootless explosive activity at sites with insufficient access to groundwater. Thus statistical NN analyses can be combined with field observations and remote sensing to obtain information about self-organization processes within geological systems and the effects of environmental resource limitation on the spatial distribution of volcanic landforms. NN analyses may also be used to quantitatively compare the spatial distribution of landforms in different planetary environments and for supplying non-morphological evidence to discriminate between feature identities and geological formation mechanisms.
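The nearest-neighbor statistics discussed above can be illustrated with a very small computation. The sketch below is not the authors' code; the point pattern and study area are made up. It computes the Clark-Evans ratio, a standard NN statistic that compares the observed mean nearest-neighbor distance with its expectation under complete spatial randomness: values below 1 indicate clustering, values above 1 indicate the kind of repelled, self-organized spacing described for resource-limited rootless eruption sites.

```python
# Minimal sketch (assumed data): Clark-Evans nearest-neighbor ratio R.
# R < 1 suggests clustering, R > 1 suggests a repelled (dispersed) pattern.
import numpy as np
from scipy.spatial import cKDTree

def clark_evans_ratio(points, area):
    """points: (n, 2) coordinates; area: area of the study region."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    tree = cKDTree(points)
    # k=2 because the nearest neighbor of each point is the point itself.
    dists, _ = tree.query(points, k=2)
    observed_mean = dists[:, 1].mean()
    expected_mean = 0.5 / np.sqrt(n / area)   # expectation under complete spatial randomness
    return observed_mean / expected_mean

# Hypothetical example: 200 cone centres scattered over a 10 km x 10 km flow.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10_000.0, size=(200, 2))
print(clark_evans_ratio(pts, area=10_000.0 ** 2))  # close to 1 for a random pattern
```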
Emoto, Akira; Fukuda, Takashi
2013-02-20
For Fourier transform holography, an effective random phase distribution with randomly displaced phase segments is proposed for obtaining a smooth finite optical intensity distribution in the Fourier transform plane. Since unitary phase segments are randomly distributed in-plane, the blanks give various spatial frequency components to an image, and thus smooth the spectrum. Moreover, by randomly changing the phase segment size, spike generation from the unitary phase segment size in the spectrum can be reduced significantly. As a result, a smooth spectrum including sidebands can be formed at a relatively narrow extent. The proposed phase distribution sustains the primary functions of a random phase mask for holographic-data recording and reconstruction. Therefore, this distribution is expected to find applications in high-density holographic memory systems, replacing conventional random phase mask patterns.
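To make the idea concrete, here is a simplified sketch showing how a segmented random phase mask spreads the Fourier spectrum of an image away from the DC peak. It assumes fixed-size square segments and a flat input page; the randomly displaced and randomly sized segments proposed above (which additionally suppress the segment-size spikes) are not reproduced.

```python
# Simplified sketch: fixed-size random phase segments smoothing a Fourier spectrum.
import numpy as np

rng = np.random.default_rng(1)
N, seg = 256, 8                       # image size and phase-segment size (assumed values)
image = np.ones((N, N))               # flat "data page" stand-in

# One random phase per segment, replicated over the segment's pixels.
phases = rng.uniform(0, 2 * np.pi, size=(N // seg, N // seg))
mask = np.kron(np.exp(1j * phases), np.ones((seg, seg)))

spectrum_plain = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
spectrum_masked = np.abs(np.fft.fftshift(np.fft.fft2(image * mask))) ** 2

# The masked spectrum carries a much smaller fraction of its energy in the DC peak.
print(spectrum_plain.max() / spectrum_plain.sum())
print(spectrum_masked.max() / spectrum_masked.sum())
```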
Use of Biometrics within Sub-Saharan Refugee Communities
2013-12-01
Biometrics typically comprises fingerprint patterns, iris pattern recognition, and facial recognition as a means of establishing an individual's identity. Iris pattern recognition is used for authentication because it identifies an individual based on mathematical analysis of the random pattern visible within the iris.
The Effect of Ethnic Identity and Bilingual Confidence on Chinese Youth's Self-Esteem
ERIC Educational Resources Information Center
Lee, Jennifer Wen-shya
2008-01-01
This study examines the interrelated issues of private and public domains of self-esteem, ethnic identity formation, and bilingual confidence among youth of a minority group in a city in western Canada. One hundred and ten Chinese students aged 11-18 from a Chinese-language school were randomly surveyed. Most items of the instrument are derived from…
Moran, Meghan Bridgid; Sussman, Steve
2014-01-01
Social identity is a construct that has been linked to health behavior. Yet, limited research has attempted to translate this relationship into health communication strategies. The current study addresses this gap by examining the efficacy of social identity targeting (constructing ads so that they target a specific group with which an individual identifies) to increase anti-cigarette smoking beliefs among adolescents. Two hundred and fifty-one adolescents aged 12-15, randomly selected from a nationally representative sample, completed an online survey. Participants indicated which of 11 peer groups (determined in pre-testing) they most identified with. Each participant was then randomly assigned to view an ad that either did or did not target that group. One week later participants reported their level of agreement with two key antismoking beliefs presented in the ad. Multiple regression analyses indicated that if an individual identified with the group targeted by the ad, antismoking beliefs were more strongly endorsed. Based on these findings, we conclude that social identity targeting has the potential to increase the effectiveness of antismoking messages and should be considered when designing antismoking campaigns.
Weighted Distances in Scale-Free Configuration Models
NASA Astrophysics Data System (ADS)
Adriaans, Erwin; Komjáthy, Júlia
2018-01-01
In this paper we study first-passage percolation in the configuration model with empirical degree distribution that follows a power-law with exponent τ ∈ (2,3). We assign independent and identically distributed (i.i.d.) weights to the edges of the graph. We investigate the weighted distance (the length of the shortest weighted path) between two uniformly chosen vertices, called typical distances. When the underlying age-dependent branching process approximating the local neighborhoods of vertices is found to produce infinitely many individuals in finite time, a case called an explosive branching process, Baroni, Hofstad and the second author showed in Baroni et al. (J Appl Probab 54(1):146-164, 2017) that typical distances converge in distribution to a bounded random variable. The order of magnitude of typical distances remained open for the τ ∈ (2,3) case when the underlying branching process is not explosive. We close this gap by determining the first order of magnitude of typical distances in this regime for arbitrary, not necessarily continuous edge-weight distributions that produce a non-explosive age-dependent branching process with infinite mean power-law offspring distributions. This sequence tends to infinity with the number of vertices and, by choosing an appropriate weight distribution, can be tuned to be any growing function that is O(log log n), where n is the number of vertices in the graph. We show that the result remains valid for the erased configuration model as well, where we delete loops and any second and further edges between two vertices.
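A small simulation sketch of this setting, under stated assumptions (Pareto-type degrees with τ = 2.5, exponential edge weights, and the erased variant obtained by dropping self-loops and multi-edges), using networkx:

```python
# Sketch: weighted distance between two uniform vertices in an erased configuration model.
import networkx as nx
import numpy as np

rng = np.random.default_rng(2)
n, tau = 10_000, 2.5
# Heavy-tailed degrees with tail exponent tau - 1 (an illustrative choice).
degrees = np.floor(rng.pareto(tau - 1, size=n) + 2).astype(int)
if degrees.sum() % 2:                      # the configuration model needs an even degree sum
    degrees[0] += 1

G = nx.configuration_model(degrees.tolist(), seed=3)
G = nx.Graph(G)                            # "erased" version: collapse multi-edges ...
G.remove_edges_from(nx.selfloop_edges(G))  # ... and drop self-loops

for u, v in G.edges:                       # i.i.d. exponential edge weights (one possible choice)
    G[u][v]["weight"] = rng.exponential(1.0)

u, v = map(int, rng.choice(n, size=2, replace=False))
if nx.has_path(G, u, v):
    print(nx.shortest_path_length(G, u, v, weight="weight"))  # a typical weighted distance
```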
Summing Feynman graphs by Monte Carlo: Planar ϕ3-theory and dynamically triangulated random surfaces
NASA Astrophysics Data System (ADS)
Boulatov, D. V.; Kazakov, V. A.
1988-12-01
New combinatorial identities are suggested relating the ratio of (n - 1)th and nth orders of (planar) perturbation expansion for any quantity to some average over the ensemble of all planar graphs of the nth order. These identities are used for Monte Carlo calculation of critical exponents γstr (string susceptibility) in planar ϕ3-theory and in the dynamically triangulated random surface (DTRS) model near the convergence circle for various dimensions. In the solvable case D = 1 the exact critical properties of the theory are reproduced numerically.
Inferring microhabitat preferences of Lilium catesbaei (Liliaceae).
Sommers, Kristen Penney; Elswick, Michael; Herrick, Gabriel I; Fox, Gordon A
2011-05-01
Microhabitat studies use varied statistical methods, some treating site occupancy as a dependent and others as an independent variable. Using the rare Lilium catesbaei as an example, we show why approaches to testing hypotheses of differences between occupied and unoccupied sites can lead to erroneous conclusions about habitat preferences. Predictive approaches like logistic regression can better lead to understanding of habitat requirements. Using 32 lily locations and 30 random locations >2 m from a lily (complete data: 31 lily and 28 random spots), we measured physical conditions--photosynthetically active radiation (PAR), canopy cover, litter depth, distance to and height of nearest shrub, and soil moisture--and number and identity of neighboring plants. Twelve lilies were used to estimate a photosynthetic assimilation curve. Analyses used logistic regression, discriminant function analysis (DFA), (multivariate) analysis of variance, and resampled Wilcoxon tests. Logistic regression and DFA found identical predictors of presence (PAR, canopy cover, distance to shrub, litter), but hypothesis tests pointed to a different set (PAR, litter, canopy cover, height of nearest shrub). Lilies are mainly in high-PAR spots, often close to light saturation. By contrast, PAR in random spots was often near the lily light compensation point. Lilies were near Serenoa repens less than at random; otherwise, neighbor identity had no significant effect. Predictive methods are more useful in this context than the hypothesis tests. Light availability plays a big role in lily presence, which may help to explain increases in flowering and emergence after fire and roller-chopping.
Odor identity coding by distributed ensembles of neurons in the mouse olfactory cortex
Roland, Benjamin; Deneux, Thomas; Franks, Kevin M; Bathellier, Brice; Fleischmann, Alexander
2017-01-01
Olfactory perception and behaviors critically depend on the ability to identify an odor across a wide range of concentrations. Here, we use calcium imaging to determine how odor identity is encoded in olfactory cortex. We find that, despite considerable trial-to-trial variability, odor identity can accurately be decoded from ensembles of co-active neurons that are distributed across piriform cortex without any apparent spatial organization. However, piriform response patterns change substantially over a 100-fold change in odor concentration, apparently degrading the population representation of odor identity. We show that this problem can be resolved by decoding odor identity from a subpopulation of concentration-invariant piriform neurons. These concentration-invariant neurons are overrepresented in piriform cortex but not in olfactory bulb mitral and tufted cells. We therefore propose that distinct perceptual features of odors are encoded in independent subnetworks of neurons in the olfactory cortex. DOI: http://dx.doi.org/10.7554/eLife.26337.001 PMID:28489003
National Emphysema Treatment Trial redux: accentuating the positive.
Sanchez, Pablo Gerardo; Kucharczuk, John Charles; Su, Stacey; Kaiser, Larry Robert; Cooper, Joel David
2010-09-01
Under the Freedom of Information Act, we obtained the follow-up data of the National Emphysema Treatment Trial (NETT) to determine the long-term outcome for "a heterogeneous distribution of emphysema with upper lobe predominance," postulated by the NETT hypothesis to be optimal candidates for lung volume reduction surgery. Using the NETT database, we identified patients with heterogeneous distribution of emphysema with upper lobe predominance and analyzed for the first time follow-up data for those receiving lung volume reduction surgery and those receiving medical management. Furthermore, we compared the results of the NETT reduction surgery group with a previously reported consecutive case series of 250 patients undergoing bilateral lung volume reduction surgery using similar selection criteria. Of the 1218 patients enrolled, 511 (42%) conformed to the NETT hypothesis selection criteria and received the randomly assigned surgical or medical treatment (surgical = 261; medical = 250). Lung volume reduction surgery resulted in a 5-year survival benefit (70% vs 60%; P = .02). Results at 3 years compared with baseline data favored surgical reduction in terms of residual volume reduction (25% vs 2%; P < .001), University of California San Diego dyspnea score (16 vs 0 points; P < .001), and improved St George Respiratory Questionnaire quality of life score (12 points vs 0 points; P < .001). For the 513 patients with a homogeneous pattern of emphysema randomized to surgical or medical treatment, lung volume reduction surgery produced no survival advantage and very limited functional benefit. Patients most likely to benefit from lung volume reduction surgery have heterogeneously distributed emphysema involving the upper lung zones predominantly. Such patients in the NETT trial had results nearly identical to those previously reported in a nonrandomized series of similar patients undergoing lung volume reduction surgery.
Effects of learning with explicit elaboration on implicit transfer of visuomotor sequence learning.
Tanaka, Kanji; Watanabe, Katsumi
2013-08-01
Intervals between stimuli and/or responses have significant influences on sequential learning. In the present study, we investigated whether transfer would occur even when the intervals and the visual configurations in a sequence were drastically changed so that participants did not notice that the required sequences of responses were identical. In the experiment, two (or three) sequential button presses comprised a "set," and nine (or six) consecutive sets comprised a "hyperset." In the first session, participants learned either a 2 × 9 or 3 × 6 hyperset by trial and error until they completed it 20 times without error. In the second block, the 2 × 9 (3 × 6) hyperset was changed into the 3 × 6 (2 × 9) hyperset, resulting in different visual configurations and intervals between stimuli and responses. Participants were assigned into two groups: the Identical and Random groups. In the Identical group, the sequence (i.e., the buttons to be pressed) in the second block was identical to that in the first block. In the Random group, a new hyperset was learned. Even in the Identical group, no participants noticed that the sequences were identical. Nevertheless, a significant transfer of performance occurred. However, in the subsequent experiment that did not require explicit trial-and-error learning in the first session, implicit transfer in the second session did not occur. These results indicate that learning with explicit elaboration strengthens the implicit representation of the sequence order as a whole; this might occur independently of the intervals between elements and enable implicit transfer.
On Pfaffian Random Point Fields
NASA Astrophysics Data System (ADS)
Kargin, V.
2014-02-01
We study Pfaffian random point fields by using the Moore-Dyson quaternion determinants. First, we give sufficient conditions that ensure that a self-dual quaternion kernel defines a valid random point field, and then we prove a CLT for Pfaffian point fields. The proofs are based on a new quaternion extension of the Cauchy-Binet determinantal identity. In addition, we derive the Fredholm determinantal formulas for the Pfaffian point fields which use the quaternion determinant.
Aita, Takuyo; Husimi, Yuzuru
2003-11-21
We have theoretically studied the statistical properties of adaptive walks (or hill-climbing) on a Mt. Fuji-type fitness landscape in the multi-dimensional sequence space through mathematical analysis and computer simulation. The adaptive walk is characterized by the "mutation distance" d as the step-width of the walker and the "population size" N as the number of randomly generated d-fold point mutants to be screened. In addition to the fitness W, we introduced the following quantities analogous to thermodynamical concepts: the "free fitness" G(W) ≡ W + T × S(W), where T is the "evolutionary temperature", T ∝ √d/ln N, and S(W) is the entropy as a function of W, and the "evolutionary force" X ≡ d(G(W)/T)/dW, which is caused by the mutation and selection pressure. It is known that a single adaptive walker rapidly climbs on the fitness landscape up to the stationary state where a "mutation-selection-random drift balance" is kept. In our interpretation, the walker tends to the maximal free fitness state, driven by the evolutionary force X. Our major findings are as follows. First, near the stationary point W*, the "climbing rate" J, defined as the expected fitness change per generation, is described by J ≈ L × X with L ≈ V/2, where V is the variance of the fitness distribution on a local landscape. This simple relationship is analogous to the well-known Einstein relation in Brownian motion. Second, the "biological information gain" (ΔG/T) through the adaptive walk can be described by combining Shannon's information gain (ΔS) and the "fitness information gain" (ΔW/T).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
GASPRNG: GPU accelerated scalable parallel random number generator library
NASA Astrophysics Data System (ADS)
Gao, Shuang; Peterson, Gregory D.
2013-04-01
Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or workstation with NVIDIA GPU (Tested on Fermi GTX480, Tesla C1060, Tesla M2070). Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX. Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives. RAM: 512 MB to 732 MB (main memory on host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory) Classification: 4.13, 6.5. Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs). Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generators library to allow a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
Running time: The tests provided take a few minutes to run.
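GASPRNG's own C/CUDA interface is not reproduced here. As a language-agnostic illustration of the usage model it supports (many reproducible, statistically independent pseudorandom streams, each owned by a different worker, process, or GPU block), here is a short NumPy sketch based on SeedSequence.spawn:

```python
# Sketch of independent, reproducible parallel RNG streams (not GASPRNG's API).
import numpy as np

def make_streams(master_seed, n_streams):
    """Return n_streams statistically independent generators."""
    children = np.random.SeedSequence(master_seed).spawn(n_streams)
    return [np.random.default_rng(s) for s in children]

streams = make_streams(master_seed=2013, n_streams=4)
# Each worker would own one stream and draw from it without coordinating with the others.
for i, rng in enumerate(streams):
    print(i, rng.random(3))
```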
Topics in global convergence of density estimates
NASA Technical Reports Server (NTRS)
Devroye, L.
1982-01-01
The problem of estimating a density f on ℝ^d from a sample X(1), ..., X(n) of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) for any sequence of density estimates f_n, any arbitrarily slow rate of convergence to 0 is possible for E(∫|f_n - f|); (2) in theoretical comparisons of density estimates, ∫|f_n - f| should be used and not ∫|f_n - f|^p, p > 1; and (3) for most reasonable nonparametric density estimates, either there is convergence of ∫|f_n - f| (and then the convergence is in the strongest possible sense for all f), or there is no convergence (even in the weakest possible sense for a single f). There is no intermediate situation.
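As a concrete numerical illustration of the L1 criterion ∫|f_n - f| advocated above (not taken from the paper), the following sketch measures the L1 error of a Gaussian kernel density estimate of a standard normal density at a few sample sizes:

```python
# Sketch: L1 error of a kernel density estimate against a known target density.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(4)
grid = np.linspace(-6, 6, 2001)

for n in (50, 500, 5000):
    sample = rng.standard_normal(n)
    f_n = gaussian_kde(sample)(grid)
    l1_error = np.trapz(np.abs(f_n - norm.pdf(grid)), grid)
    print(n, round(l1_error, 3))   # decreases with n, but with no universal rate
```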
Yiu, Sean; Tom, Brian Dm
2017-01-01
Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
A randomization approach to handling data scaling in nuclear medicine.
Bai, Chuanyong; Conwell, Richard; Kindem, Joel
2010-06-01
In medical imaging, data scaling is sometimes desired to handle the system complexity, such as uniformity calibration. Since the data are usually saved in short integer, conventional data scaling will first scale the data in floating point format and then truncate or round the floating point data to short integer data. For example, when using truncation, scaling of 9 by 1.1 results in 9 and scaling of 10 by 1.1 results in 11. When the count level is low, such scaling may change the local data distribution and affect the intended application of the data. In this work, the authors use an example gated cardiac SPECT study to illustrate the effect of conventional scaling by factors of 1.1 and 1.2. The authors then scaled the data with the same scaling factors using a randomization approach, in which a random number evenly distributed between 0 and 1 is generated to determine how the floating point data will be saved as short integer data. If the random number is between 0 and 0.9, then 9.9 will be saved as 10, otherwise 9. In other words, the floating point value 9.9 will be saved in short integer value as 10 with 90% probability or 9 with 10% probability. For statistical analysis of the performance, the authors applied the conventional approach with rounding and the randomization approach to 50 consecutive gated studies from a clinical site. For the example study, the image reconstructed from the original data showed an apparent perfusion defect at the apex of the myocardium. The defect size was noticeably changed by scaling with 1.1 and 1.2 using the conventional approaches with truncation and rounding. Using the randomization approach, in contrast, the images from the scaled data appeared identical to the original image. Line profile analysis of the scaled data showed that the randomization approach introduced the least change to the data as compared to the conventional approaches. For the 50 gated data sets, significantly more studies showed quantitative differences between the original images and the images from the data scaled by 1.2 using the rounding approach than the randomization approach [46/50 (92%) versus 3/50 (6%), p < 0.05]. Likewise, significantly more studies showed visually noticeable differences between the original images and the images from the data scaled by 1.2 using the rounding approach than randomization [29/50 (58%) versus 1/50 (2%), p < 0.05]. In conclusion, the proposed randomization approach minimizes the scaling-introduced local data change as compared to the conventional approaches. It is preferred for nuclear medicine data scaling.
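A minimal sketch of the randomization (stochastic rounding) idea described above; the counts and scaling factor are illustrative:

```python
# Sketch: scale integer counts and round up or down at random, with probability
# given by the fractional part, so the stored integer is unbiased on average.
import numpy as np

def scale_randomized(counts, factor, rng):
    scaled = counts.astype(float) * factor
    low = np.floor(scaled)
    frac = scaled - low
    return (low + (rng.random(scaled.shape) < frac)).astype(np.int16)

rng = np.random.default_rng(5)
counts = np.array([9, 10, 3, 0, 7], dtype=np.int16)
print(scale_randomized(counts, 1.1, rng))
# E.g. 9 * 1.1 = 9.9 is stored as 10 with probability 0.9 and as 9 otherwise,
# so low-count data keep their local statistics on average.
```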
Moral identity and emotion in athletes.
Kavussanu, Maria; Willoughby, Adrian; Ring, Christopher
2012-12-01
The purpose of this study was to investigate the effects of moral identity on physiological responses to affective pictures, namely, the startle blink reflex and pain-related evoked potential. Male (n = 48) and female (n = 46) athletes participating in contact team sports were randomly assigned to either a moral identity group or a non-moral identity group and viewed a series of unpleasant, neutral, and pleasant sport-specific pictures. During picture viewing, a noxious electrocutaneous stimulus was delivered as the startle probe and the startle blink and pain-related evoked potential were measured. Upon completion of physiological measures, participants reviewed the pictures and rated them for valence and arousal. ANOVAs revealed that participants in the moral identity group displayed larger startle blinks and smaller pain-related potentials than did those in the non-moral identity group across all picture valence categories. However, the difference in the magnitude of startle blinks between the moral and non-moral identity groups was larger in response to unpleasant than pleasant and neutral pictures. Our findings suggest that moral identity affects physiological responses to sport-specific affective pictures, thereby providing objective evidence for the link between moral identity and emotion in athletes.
NASA Astrophysics Data System (ADS)
Wu, Huaping; Wu, Linzhi; Du, Shanyi
2008-04-01
The effective biaxial modulus (Meff) of fiber-textured hexagonal, tetragonal, and orthorhombic films is estimated by using the Voigt-Reuss-Hill and Vook-Witt grain-interaction models. The orientation distribution function with Gaussian distributions of the two Euler angles θ and ϕ is adopted to analyze the effect of the texture dispersion degree on Meff. Numerical results based on ZnO, BaTiO3, and yttrium barium copper oxide (YBCO) materials show that the Vook-Witt average of Meff is identical to the Voigt-Reuss-Hill average of Meff for the (001) plane of ideally fiber-textured hexagonal and tetragonal films. The ϕ distribution has no influence on Meff of the (hkl)-fiber-textured hexagonal film at any θ distribution because of the isotropy in the plane perpendicular to the [001] direction. By contrast, tetragonal and orthorhombic films show a considerable effect of ϕ dispersion on Meff, and the effect of ϕ dispersion on Meff of a (001)-fiber-textured YBCO film is smaller than that for a (001)-fiber-textured BaTiO3 film since the shear anisotropic factor in the (001) shear plane of a YBCO film more closely approaches 1. Enhanced θ and ϕ distributions destroy the perfect fiber textures, and as a result, the films exhibit an evolution from ideal (hkl) fiber textures to random textures with varying full widths at half maximum of θ and ϕ.
How preview space/time translates into preview cost/benefit for fixation durations during reading.
Kliegl, Reinhold; Hohenstein, Sven; Yan, Ming; McDonald, Scott A
2013-01-01
Eye-movement control during reading depends on foveal and parafoveal information. If the parafoveal preview of the next word is suppressed, reading is less efficient. A linear mixed model (LMM) reanalysis of McDonald (2006) confirmed his observation that preview benefit may be limited to parafoveal words that have been selected as the saccade target. Going beyond the original analyses, in the same LMM, we examined how the preview effect (i.e., the difference in single-fixation duration, SFD, between random-letter and identical preview) depends on the gaze duration on the pretarget word and on the amplitude of the saccade moving the eye onto the target word. There were two key results: (a) the shorter the saccade amplitude (i.e., the larger the preview space), the shorter a subsequent SFD with an identical preview; this association was not observed with a random-letter preview. (b) However, the longer the gaze duration on the pretarget word, the longer the subsequent SFD on the target, with the difference between random-letter string and identical previews increasing with preview time. A third pattern, an increasing cost of a random-letter string in the parafovea associated with shorter saccade amplitudes, was observed for target gaze durations. Thus, LMMs revealed that preview effects, which are typically summarized under "preview benefit", are a complex mixture of preview cost and preview benefit and vary with preview space and preview time. The consequence for reading is that parafoveal preview may not only facilitate, but also interfere with, lexical access.
Distributed acoustic cues for caller identity in macaque vocalization.
Fukushima, Makoto; Doyle, Alex M; Mullarkey, Matthew P; Mishkin, Mortimer; Averbeck, Bruno B
2015-12-01
Individual primates can be identified by the sound of their voice. Macaques have demonstrated an ability to discern conspecific identity from a harmonically structured 'coo' call. Voice recognition presumably requires the integrated perception of multiple acoustic features. However, it is unclear how this is achieved, given considerable variability across utterances. Specifically, the extent to which information about caller identity is distributed across multiple features remains elusive. We examined these issues by recording and analysing a large sample of calls from eight macaques. Single acoustic features, including fundamental frequency, duration and Wiener entropy, were informative but unreliable for the statistical classification of caller identity. A combination of multiple features, however, allowed for highly accurate caller identification. A regularized classifier that learned to identify callers from the modulation power spectrum of calls found that specific regions of spectral-temporal modulation were informative for caller identification. These ranges are related to acoustic features such as the call's fundamental frequency and FM sweep direction. We further found that the low-frequency spectrotemporal modulation component contained an indexical cue of the caller body size. Thus, cues for caller identity are distributed across identifiable spectrotemporal components corresponding to laryngeal and supralaryngeal components of vocalizations, and the integration of those cues can enable highly reliable caller identification. Our results demonstrate a clear acoustic basis by which individual macaque vocalizations can be recognized.
Distributed acoustic cues for caller identity in macaque vocalization
Doyle, Alex M.; Mullarkey, Matthew P.; Mishkin, Mortimer; Averbeck, Bruno B.
2015-01-01
Individual primates can be identified by the sound of their voice. Macaques have demonstrated an ability to discern conspecific identity from a harmonically structured ‘coo’ call. Voice recognition presumably requires the integrated perception of multiple acoustic features. However, it is unclear how this is achieved, given considerable variability across utterances. Specifically, the extent to which information about caller identity is distributed across multiple features remains elusive. We examined these issues by recording and analysing a large sample of calls from eight macaques. Single acoustic features, including fundamental frequency, duration and Wiener entropy, were informative but unreliable for the statistical classification of caller identity. A combination of multiple features, however, allowed for highly accurate caller identification. A regularized classifier that learned to identify callers from the modulation power spectrum of calls found that specific regions of spectral–temporal modulation were informative for caller identification. These ranges are related to acoustic features such as the call’s fundamental frequency and FM sweep direction. We further found that the low-frequency spectrotemporal modulation component contained an indexical cue of the caller body size. Thus, cues for caller identity are distributed across identifiable spectrotemporal components corresponding to laryngeal and supralaryngeal components of vocalizations, and the integration of those cues can enable highly reliable caller identification. Our results demonstrate a clear acoustic basis by which individual macaque vocalizations can be recognized. PMID:27019727
An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions
ERIC Educational Resources Information Center
Radhakrishnan, R.; Choudhury, Askar
2009-01-01
Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
On efficient randomized algorithms for finding the PageRank vector
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Dmitriev, D. Yu.
2015-03-01
Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ɛ: ɛ ≫ n^(-1). Thus, the possibility of brute-force multiplication of P by a column vector is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is the unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
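For the first, Markov chain Monte Carlo style approach, a toy sketch follows. The damping factor, walk length, and graph are illustrative assumptions rather than the paper's setup, and the second (matrix game) method is not shown.

```python
# Toy sketch: approximate the PageRank vector by visit frequencies of a
# random walk with teleportation (damping).
import numpy as np

def mc_pagerank(adj_lists, n_steps=200_000, damping=0.85, seed=6):
    n = len(adj_lists)
    visits = np.zeros(n)
    rng = np.random.default_rng(seed)
    state = rng.integers(n)
    for _ in range(n_steps):
        if rng.random() < damping and adj_lists[state]:
            state = adj_lists[state][rng.integers(len(adj_lists[state]))]
        else:
            state = rng.integers(n)      # teleport (also handles dangling nodes)
        visits[state] += 1
    return visits / visits.sum()

# Tiny 4-page web graph given as out-link lists.
adj = [[1, 2], [2], [0], [0, 2]]
print(mc_pagerank(adj).round(3))
```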
Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty
Lu, Yang; Loizou, Philipos C.
2011-01-01
Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it was binary and assumed the value of 1 if the local SNR exceeded 0 dB, and assumed the value of 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods were derived incorporating SNR uncertainty. The soft masking method, in particular, which weighted the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than the conventional MMSE spectral power estimators, in terms of yielding lower residual noise and lower speech distortion. PMID:21886543
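For orientation, here is a schematic sketch of the two gain shapes contrasted above, in assumed notation: the binary (ideal-binary-mask style) gain, which keeps a frequency bin only when its local SNR exceeds 0 dB, and a Wiener-type soft gain ξ/(1+ξ) in the a priori SNR ξ. This is not the authors' MAP/MMSE derivation, only the resulting gain functions.

```python
# Sketch: binary mask gain vs. Wiener-type soft gain as functions of local SNR.
import numpy as np

def binary_mask_gain(snr_linear):
    return (snr_linear > 1.0).astype(float)     # SNR > 0 dB  <=>  SNR > 1 (linear)

def wiener_soft_gain(snr_linear):
    return snr_linear / (1.0 + snr_linear)

snr_db = np.array([-10.0, -3.0, 0.0, 3.0, 10.0])
snr = 10.0 ** (snr_db / 10.0)
print(binary_mask_gain(snr))
print(wiener_soft_gain(snr).round(2))
```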
Robustness of chimera states in complex dynamical systems
Yao, Nan; Huang, Zi-Gang; Lai, Ying-Cheng; Zheng, Zhi-Gang
2013-01-01
The remarkable phenomenon of chimera state in systems of non-locally coupled, identical oscillators has attracted a great deal of recent theoretical and experimental interests. In such a state, different groups of oscillators can exhibit characteristically distinct types of dynamical behaviors, in spite of identity of the oscillators. But how robust are chimera states against random perturbations to the structure of the underlying network? We address this fundamental issue by studying the effects of random removal of links on the probability for chimera states. Using direct numerical calculations and two independent theoretical approaches, we find that the likelihood of chimera state decreases with the probability of random-link removal. A striking finding is that, even when a large number of links are removed so that chimera states are deemed not possible, in the state space there are generally both coherent and incoherent regions. The regime of chimera state is a particular case in which the oscillators in the coherent region happen to be synchronized or phase-locked. PMID:24343533
A model study of aggregates composed of spherical soot monomers with an acentric carbon shell
NASA Astrophysics Data System (ADS)
Luo, Jie; Zhang, Yongming; Zhang, Qixing
2018-01-01
Influences of morphology on the optical properties of soot particles have gained increasing attention. However, studies of how the way primary particles are coated affects the optical properties are few. To understand how the coating of primary particles affects the optical properties of soot particles, the coated soot particle was simulated using the acentric core-shell monomers (ACM) model, which was generated by randomly moving the cores of the concentric core-shell monomers (CCM) model. Single scattering properties of the CCM model with identical fractal parameters were first calculated 50 times to evaluate the optical diversities of different realizations of fractal aggregates with identical parameters. The results show that the optical diversities of different realizations of fractal aggregates with identical parameters cannot be eliminated by averaging over ten random realizations. To preserve the fractal characteristics, 10 realizations of each model were generated based on the same 10 parent fractal aggregates, and the results were then averaged over each set of 10 realizations. The single scattering properties of all models were calculated using the numerically exact multiple-sphere T-matrix (MSTM) method. It is found that the single scattering properties of randomly coated soot particles calculated using the ACM model are extremely close to those obtained with the CCM model and with the homogeneous aggregate (HA) model using Maxwell-Garnett effective medium theory. Our results differ from previous studies; the reason may be that the differences reported in previous studies were caused by fractal characteristics rather than by the coating models. Our findings indicate that how the individual primary particles are coated has little effect on the single scattering properties of soot particles with acentric core-shell monomers. This work provides a suggestion for scattering model simplification and model selection.
Buck, Patrick M.; Kumar, Sandeep; Singh, Satish K.
2013-01-01
The various roles that aggregation prone regions (APRs) are capable of playing in proteins are investigated here via comprehensive analyses of multiple non-redundant datasets containing randomly generated amino acid sequences, monomeric proteins, intrinsically disordered proteins (IDPs) and catalytic residues. Results from this study indicate that the aggregation propensities of monomeric protein sequences have been minimized compared to random sequences with uniform and natural amino acid compositions, as observed by a lower average aggregation propensity and fewer APRs that are shorter in length and more often punctuated by gate-keeper residues. However, evidence for evolutionary selective pressure to disrupt these sequence regions among homologous proteins is inconsistent. APRs are less conserved than average sequence identity among closely related homologues (≥80% sequence identity with a parent) but APRs are more conserved than average sequence identity among homologues that have at least 50% sequence identity with a parent. Structural analyses of APRs indicate that APRs are three times more likely to contain ordered versus disordered residues and that APRs frequently contribute more towards stabilizing proteins than equal length segments from the same protein. Catalytic residues and APRs were also found to be in structural contact significantly more often than expected by random chance. Our findings suggest that proteins have evolved by optimizing their risk of aggregation for cellular environments by both minimizing aggregation prone regions and by conserving those that are important for folding and function. In many cases, these sequence optimizations are insufficient to develop recombinant proteins into commercial products. Rational design strategies aimed at improving protein solubility for biotechnological purposes should carefully evaluate the contributions made by candidate APRs, targeted for disruption, towards protein structure and activity. PMID:24146608
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences on summary results, heterogeneity measures and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
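A rough sketch of the transformation step follows, under simple assumptions: toy data, a shift to make the estimates positive, within-study variances reused unchanged on the transformed scale, and a frequentist DerSimonian-Laird pooling in place of the authors' Bayesian fit and back-transformation to a median and interquartile range.

```python
# Sketch: Box-Cox-transform observed effect estimates, then pool with DerSimonian-Laird.
import numpy as np
from scipy import stats

y = np.array([0.1, 0.3, 0.2, 1.4, 2.1, 0.4])   # skewed effect estimates (toy data)
v = np.full_like(y, 0.04)                       # within-study variances (toy data)

shift = 1.0 - y.min()                           # Box-Cox requires positive values (assumption)
z, lam = stats.boxcox(y + shift)

# DerSimonian-Laird between-study variance on the transformed scale.
# (Within-study variances are reused unchanged here for simplicity.)
w = 1.0 / v
mu_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - mu_fixed) ** 2)
k = len(z)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_star = 1.0 / (v + tau2)
mu_re = np.sum(w_star * z) / np.sum(w_star)
print("lambda:", round(lam, 2), "pooled mean (transformed scale):", round(mu_re, 2))
```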
NASA Astrophysics Data System (ADS)
Julie, Hongki; Pasaribu, Udjianna S.; Pancoro, Adi
2015-12-01
This paper applies a Markov chain to the genome shared identical by descent (IBD) by two individuals in a full-sibs model. The full-sibs model is a continuous-time Markov chain with three states. In the full-sibs model, we seek the cumulative distribution function of the number of sub-segments having 2 IBD haplotypes within a chromosome segment of length t Morgan, and the cumulative distribution function of the number of sub-segments having at least 1 IBD haplotype within a chromosome segment of length t Morgan. These cumulative distribution functions are developed via the moment generating function.
Stochastic summation of empirical Green's functions
Wennerberg, Leif
1990-01-01
Two simple strategies are presented that use random delay times for repeatedly summing the record of a relatively small earthquake to simulate the effects of a larger earthquake. The simulations do not assume any fault plane geometry or rupture dynamics, but rely only on the ω−2 spectral model of an earthquake source and elementary notions of source complexity. The strategies simulate ground motions for all frequencies within the bandwidth of the record of the event used as a summand. The first strategy, which introduces the basic ideas, is a single-stage procedure that consists of simply adding many small events with random time delays. The probability distribution for delays has the property that its amplitude spectrum is determined by the ratio of ω−2 spectra, and its phase spectrum is identically zero. A simple expression is given for the computation of this zero-phase scaling distribution. The moment rate function resulting from the single-stage simulation is quite simple and hence is probably not realistic for high-frequency (>1 Hz) ground motion of events larger than ML∼ 4.5 to 5. The second strategy is a two-stage summation that simulates source complexity with a few random subevent delays determined using the zero-phase scaling distribution, and then clusters energy around these delays to get an ω−2 spectrum for the sum. Thus, the two-stage strategy allows simulations of complex events of any size for which the ω−2 spectral model applies. Interestingly, a single-stage simulation with too few ω−2 records to get a good fit to an ω−2 large-event target spectrum yields a record whose spectral asymptotes are consistent with the ω−2 model, but that includes a region in its spectrum between the corner frequencies of the larger and smaller events reasonably approximated by a power law trend. This spectral feature has also been discussed as reflecting the process of partial stress release (Brune, 1970), an asperity failure (Boatwright, 1984), or the breakdown of ω−2 scaling due to rupture significantly longer than the width of the seismogenic zone (Joyner, 1984).
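A schematic single-stage summation in the spirit described above (not the author's code): delays are drawn from a density whose amplitude spectrum is the ratio of a large-event to a small-event ω−2 spectrum and whose phase is identically zero. The corner frequencies, moments, and "small event" record below are toy assumptions.

```python
# Sketch: single-stage stochastic summation with a zero-phase scaling delay distribution.
import numpy as np

def omega2_spectrum(f, moment, fc):
    # Simple omega^-2 source spectrum with moment `moment` and corner frequency fc.
    return moment / (1.0 + (f / fc) ** 2)

dt, n = 0.01, 4096
f = np.fft.rfftfreq(n, dt)

# Amplitude spectrum = ratio of target to subevent spectra, phase = 0; transform
# back to the time domain to obtain the delay density.
ratio = omega2_spectrum(f, moment=100.0, fc=0.5) / omega2_spectrum(f, moment=1.0, fc=4.0)
density = np.fft.fftshift(np.fft.irfft(ratio, n))
density = np.maximum(density, 0.0)
density /= density.sum()
times = (np.arange(n) - n // 2) * dt

rng = np.random.default_rng(7)
small_event = rng.standard_normal(n) * np.exp(-np.arange(n) * dt / 2.0)  # toy subevent record

large_event = np.zeros(n)
for _ in range(100):                              # sum 100 randomly delayed copies
    shift = int(round(rng.choice(times, p=density) / dt))
    large_event += np.roll(small_event, shift)
print(large_event.std() / small_event.std())      # crude measure of amplification
```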
NASA Technical Reports Server (NTRS)
Leybold, H. A.
1971-01-01
Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
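In the same spirit (illustrative parameters only, and with no comparison to flight data), the following sketch draws discrete random load histories from the listed non-Gaussian distributions and counts their peaks:

```python
# Sketch: random load histories from several non-Gaussian distributions and their peak counts.
import numpy as np

rng = np.random.default_rng(8)
n = 100_000
histories = {
    "poisson":     rng.poisson(5.0, n).astype(float),
    "binomial":    rng.binomial(20, 0.3, n).astype(float),
    "log-normal":  rng.lognormal(0.0, 0.5, n),
    "weibull":     rng.weibull(1.5, n),
    "exponential": rng.exponential(1.0, n),
}
for name, x in histories.items():
    peaks = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])   # strict local maxima
    print(f"{name:12s} peaks: {peaks.sum():6d}  mean peak value: {x[1:-1][peaks].mean():.2f}")
```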
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.; Zeiler, Thomas A.; Perry, Boyd, III
1989-01-01
This paper describes and illustrates two ways of performing time-correlated gust-load calculations. The first is based on Matched Filter Theory; the second on Random Process Theory. Both approaches yield theoretically identical results and represent novel applications of the theories, are computationally fast, and may be applied to other dynamic-response problems. A theoretical development and example calculations using both Matched Filter Theory and Random Process Theory approaches are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, J .Y. Peter; Sham, Tsun-Kong; Chakrabarti, Subrata
2009-12-01
Hemochromatosis is a genetic disorder that causes the body to store excess iron in organs such as the heart or liver. The distribution of iron, as well as of copper, zinc and calcium, and the chemical identity of iron in hemochromatosis liver and intestine were investigated by X-ray microprobe experiments, which consist of X-ray microscopy and micro-X-ray absorption fine structure. Our results show that the iron concentration in hemochromatosis liver tissue is high, while much less Fe is found in intestinal tissue. Moreover, the chemical identity of Fe in hemochromatosis liver can be identified. X-ray microprobe experiments allow elemental distributions to be examined at excellent spatial resolution, and the chemical identity of the element of interest can be obtained.
Asymptotic Normality Through Factorial Cumulants and Partition Identities
Bobecka, Konstancja; Hitczenko, Paweł; López-Blázquez, Fernando; Rempała, Grzegorz; Wesołowski, Jacek
2013-01-01
In the paper we develop an approach to asymptotic normality through factorial cumulants. Factorial cumulants arise in the same manner from factorial moments as do (ordinary) cumulants from (ordinary) moments. Another tool we exploit is a new identity for ‘moments’ of partitions of numbers. The general limiting result is then used to (re-)derive asymptotic normality for several models including classical discrete distributions, occupancy problems in some generalized allocation schemes and two models related to negative multinomial distribution. PMID:24591773
Evolution of vortex configurations with the same global invariants (Evolution de configurations de tourbillons avec les mêmes invariants globaux)
NASA Astrophysics Data System (ADS)
Bécu, Emilie; Pavlov, Vadim
2004-10-01
In this Note, we address the question of the evolution of a distribution of N identical localized vortices. Using direct numerical simulation (here a fourth-order Runge-Kutta scheme) together with the localized-vortices model, we show that different initial distributions of vorticity with identical integral invariants may exist. We show that initial configurations with the same invariants may evolve to totally different quasi-final states. To cite this article: E. Bécu, V. Pavlov, C. R. Mecanique 332 (2004).
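A minimal sketch of the underlying point-vortex dynamics integrated with a fourth-order Runge-Kutta scheme is given below; the number of vortices, the circulation and the time step are arbitrary assumptions, and the comparison of two different configurations sharing the same invariants is left out.

```python
import numpy as np

rng = np.random.default_rng(3)
N, gamma = 8, 1.0                                  # N identical vortices, unit circulation

def velocity(z):
    """Complex velocities dz/dt of point vortices at complex positions z."""
    dz = z[:, None] - z[None, :]                   # pairwise separations
    inv = np.zeros_like(dz)
    off = ~np.eye(len(z), dtype=bool)
    inv[off] = 1.0 / dz[off]                       # exclude self-interaction
    return np.conj(gamma / (2j * np.pi) * inv.sum(axis=1))

def rk4_step(z, dt):
    k1 = velocity(z)
    k2 = velocity(z + 0.5 * dt * k1)
    k3 = velocity(z + 0.5 * dt * k2)
    k4 = velocity(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

z = rng.normal(size=N) + 1j * rng.normal(size=N)   # one random initial configuration
impulse0 = z.sum()                                 # linear impulse, a conserved invariant
for _ in range(2000):
    z = rk4_step(z, dt=1e-3)
print("numerical drift of the linear impulse:", abs(z.sum() - impulse0))
```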
Random and non-random monoallelic expression.
Chess, Andrew
2013-01-01
Monoallelic expression poses an intriguing problem in epigenetics because it requires the unequal treatment of two segments of DNA that are present in the same nucleus and which can have absolutely identical sequences. This review will consider different known types of monoallelic expression. For all monoallelically expressed genes, their respective allele-specific patterns of expression have the potential to affect brain function and dysfunction.
Know your neighbor: The impact of social context on fairness behavior
2018-01-01
Laboratory experiments offer an opportunity to isolate human behaviors with a level of precision that is often difficult to obtain using other (survey-based) methods. Yet, experimental tasks are often stripped of any social context, implying that inferences may not directly map to real-world contexts. We randomly allocate 632 individuals (grouped randomly into 316 dyads) from small villages in Sierra Leone to four versions of the ultimatum game. In addition to the classic ultimatum game, where both the sender and receiver are anonymous, we reveal the identity of the sender, the receiver or both. This design allows us to explore how fairness behavior is affected by social context in a natural setting where players are drawn from populations that are well-acquainted. We find that average offers increase when the receiver’s identity is revealed, suggesting that anonymous ultimatum games underestimate expected fair offers. This study suggests that researchers wishing to relate laboratory behavior to contexts in which the participants are well-acquainted should consider revealing the identities of the players during game play. PMID:29641584
Facilitated sequence counting and assembly by template mutagenesis
Levy, Dan; Wigler, Michael
2014-01-01
Presently, inferring the long-range structure of the DNA templates is limited by short read lengths. Accurate template counts suffer from distortions occurring during PCR amplification. We explore the utility of introducing random mutations in identical or nearly identical templates to create distinguishable patterns that are inherited during subsequent copying. We simulate the applications of this process under assumptions of error-free sequencing and perfect mapping, using cytosine deamination as a model for mutation. The simulations demonstrate that within readily achievable conditions of nucleotide conversion and sequence coverage, we can accurately count the number of otherwise identical molecules as well as connect variants separated by long spans of identical sequence. We discuss many potential applications, such as transcript profiling, isoform assembly, haplotype phasing, and de novo genome assembly. PMID:25313059
Achieving Privacy in a Federated Identity Management System
NASA Astrophysics Data System (ADS)
Landau, Susan; Le van Gong, Hubert; Wilton, Robin
Federated identity management allows a user to efficiently authenticate and use identity information from data distributed across multiple domains. The sharing of data across domains blurs security boundaries and potentially creates privacy risks. We examine privacy risks and fundamental privacy protections of federated identity-management systems. The protections include minimal disclosure and providing PII only on a “need-to-know” basis. We then look at the Liberty Alliance system and analyze previous privacy critiques of that system. We show how law and policy provide privacy protections in federated identity-management systems, and that privacy threats are best handled using a combination of technology and law/policy tools.
2009-03-01
Source Localization in Wireless Sensor Networks with Randomly Distributed Elements under Multipath Propagation Conditions. Engineer's Thesis by Georgios Tsivgoulis, March 2009, 111 pages. Subject terms: Wireless Sensor Network, Direction of Arrival, DOA, Random ...
Clinician judgment in the diagnosis of gender identity disorder in children.
Ehrbar, Randall D; Witty, Marjorie C; Ehrbar, Hans G; Bockting, Walter O
2008-01-01
Clinician judgment methodology was used to explore the influence of gender nonconformity and gender dysphoria on the diagnosis of children with Gender Identity Disorder (GID). A convenience sample of 73 licensed psychologists randomly received a vignette to diagnose. Vignettes varied across sex of child, gender-conforming behavior, and gender dysphoria (including all possible permutations). Eight percent of respondents given a vignette involving a child who met purely behavioral criteria for GID diagnosed the child with GID. When additional information indicated that, in addition to gender-nonconforming behavior, the child also self-reported a cross-gender identity, this proportion increased to 27% (significant at the 5% level).
Spread of English and Westernization in Saudi Arabia.
ERIC Educational Resources Information Center
Al-Abed, Fawwaz; And Others
1996-01-01
A questionnaire was distributed to Saudi Arabian undergraduates in order to investigate their attitudes toward Westernization, national identity, and religious commitment. Results revealed that learning English did not "Westernize" students nor weaken national identity. Implications and recommendations for establishing a rigid language…
Entropy Inequalities for Stable Densities and Strengthened Central Limit Theorems
NASA Astrophysics Data System (ADS)
Toscani, Giuseppe
2016-10-01
We consider the central limit theorem for stable laws in the case of the standardized sum of independent and identically distributed random variables with a regular probability density function. By showing decay of different entropy functionals along the sequence we prove convergence with explicit rate in various norms to a centered Lévy density of parameter λ > 1. This introduces a new information-theoretic approach to the central limit theorem for stable laws, in which the main argument is shown to be the relative fractional Fisher information, recently introduced in Toscani (Ricerche Mat 65(1):71-91, 2016). In particular, it is proven that, with respect to the relative fractional Fisher information, the Lévy density satisfies an analogue of the logarithmic Sobolev inequality, which allows one to pass from the monotonicity and decay to zero of the relative fractional Fisher information in the standardized sum to the decay to zero in relative entropy with an explicit decay rate.
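The generalized central limit theorem behind this result is easy to probe numerically. The sketch below standardizes sums of heavy-tailed variables by n^(1/α) and shows their quantiles stabilizing as n grows, consistent with convergence to a non-Gaussian stable law; the tail index α = 1.5 and the Pareto summands are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha = 1.5                                   # tail index in (1, 2)
mu = alpha / (alpha - 1.0)                    # mean of the Pareto(alpha) summands

def standardized_sum(n, size):
    """Centred sums of n iid Pareto(alpha) variables, scaled by n**(1/alpha)."""
    x = rng.random((size, n)) ** (-1.0 / alpha)        # Pr(X > x) = x**(-alpha), x >= 1
    return (x - mu).sum(axis=1) / n ** (1.0 / alpha)

# The upper quantiles roughly stabilize as n grows; the limit keeps heavy
# (power-law) tails, so it cannot be Gaussian.
for n in (10, 100, 1000):
    s = standardized_sum(n, size=10_000)
    print(f"n={n:5d}  quantiles (50/90/99%): {np.round(np.quantile(s, [0.5, 0.9, 0.99]), 2)}")
```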
Modelling students' knowledge organisation: Genealogical conceptual networks
NASA Astrophysics Data System (ADS)
Koponen, Ismo T.; Nousiainen, Maija
2018-04-01
Learning scientific knowledge is largely based on understanding what its key concepts are and how they are related. The relational structure of concepts also affects how concepts are introduced in teaching scientific knowledge. We model here how students organise their knowledge when they represent their understanding of how physics concepts are related. The model is based on the assumptions that students use simple basic linking motifs in introducing new concepts and mostly relate them to concepts that were introduced a few steps earlier, i.e. following a genealogical ordering. The resulting genealogical networks have relatively high local clustering coefficients of nodes but otherwise resemble networks obtained with an identical degree distribution of nodes but with random linking between them (i.e. the configuration model). However, a few key nodes having a special structural role emerge, and these nodes have higher-than-average communicability betweenness centrality. These features agree with the empirically found properties of students' concept networks.
Score Estimating Equations from Embedded Likelihood Functions under Accelerated Failure Time Model
NING, JING; QIN, JING; SHEN, YU
2014-01-01
SUMMARY The semiparametric accelerated failure time (AFT) model is one of the most popular models for analyzing time-to-event outcomes. One appealing feature of the AFT model is that the observed failure time data can be transformed to independent and identically distributed random variables without covariate effects. We describe a class of estimating equations based on the score functions for the transformed data, which are derived from the full likelihood function under commonly used semiparametric models such as the proportional hazards or proportional odds model. The methods of estimating regression parameters under the AFT model can be applied to traditional right-censored survival data as well as more complex time-to-event data subject to length-biased sampling. We establish the asymptotic properties and evaluate the small sample performance of the proposed estimators. We illustrate the proposed methods through applications in two examples. PMID:25663727
Independent functions and the geometry of Banach spaces
NASA Astrophysics Data System (ADS)
Astashkin, Sergey V.; Sukochev, Fedor A.
2010-12-01
The main objective of this survey is to present the `state of the art' of those parts of the theory of independent functions which are related to the geometry of function spaces. The `size' of a sum of independent functions is estimated in terms of classical moments and also in terms of general symmetric function norms. The exposition is centred on the Rosenthal inequalities and their various generalizations and sharp conditions under which the latter hold. The crucial tool here is the recently developed construction of the Kruglov operator. The survey also provides a number of applications to the geometry of Banach spaces. In particular, variants of the classical Khintchine-Maurey inequalities, isomorphisms between symmetric spaces on a finite interval and on the semi-axis, and a description of the class of symmetric spaces with any sequence of symmetrically and identically distributed independent random variables spanning a Hilbert subspace are considered. Bibliography: 87 titles.
Physical Watermarking for Securing Cyber-Physical Systems via Packet Drop Injections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozel, Omur; Weekrakkody, Sean; Sinopoli, Bruno
Physical watermarking is a well known solution for detecting integrity attacks on Cyber-Physical Systems (CPSs) such as the smart grid. Here, a random control input is injected into the system in order to authenticate physical dynamics and sensors which may have been corrupted by adversaries. Packet drops may naturally occur in a CPS due to network imperfections. To our knowledge, previous work has not considered the role of packet drops in detecting integrity attacks. In this paper, we investigate the merit of injecting Bernoulli packet drops into the control inputs sent to actuators as a new physical watermarking scheme. With the classical linear quadratic objective function and an independent and identically distributed packet drop injection sequence, we study the effect of packet drops on meeting security and control objectives. Our results indicate that the packet drops could act as a potential physical watermark for attack detection in CPSs.
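A toy, heavily simplified illustration of why injected packet drops can act as a watermark is sketched below: a scalar plant, a known Bernoulli drop sequence, and a correlation detector. The plant model, noise levels and the replay-style attack are all assumptions made for illustration; the paper's LQG formulation and detector are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
a, T, p_drop, u = 0.9, 50_000, 0.3, 1.0        # plant pole, horizon, drop rate, probing input

def simulate(drops):
    """Scalar plant x_{k+1} = a x_k + g_k u + w_k observed as y_k = x_k + v_k."""
    x, ys = 0.0, np.empty(T)
    for k in range(T):
        ys[k] = x + 0.1 * rng.normal()
        x = a * x + drops[k] * u + 0.2 * rng.normal()
    return ys

gamma_true = (rng.random(T) < 1 - p_drop).astype(float)           # drops injected by the defender
y_honest = simulate(gamma_true)                                    # sensors see the real plant
y_replay = simulate((rng.random(T) < 1 - p_drop).astype(float))    # replayed record with other drops

def detector_score(y, gamma):
    """Correlate the known drop-modulated input with the one-step innovation."""
    innov = y[1:] - a * y[:-1]
    probe = gamma[:-1] * u
    return np.corrcoef(innov, probe)[0, 1]

print("honest sensors :", round(detector_score(y_honest, gamma_true), 3))   # clearly positive
print("replay attack  :", round(detector_score(y_replay, gamma_true), 3))   # near zero
```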
On the Wigner law in dilute random matrices
NASA Astrophysics Data System (ADS)
Khorunzhy, A.; Rodgers, G. J.
1998-12-01
We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
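A quick numerical check of this statement (matrix size, dilution probability and ±1 entries are illustrative choices, not the paper's weakly dependent ensemble) is sketched below: a strongly diluted symmetric random matrix still shows an eigenvalue histogram close to the semicircle once it is rescaled by the square root of the mean number of nonzero entries per row.

```python
import numpy as np

rng = np.random.default_rng(6)
N, p = 2000, 0.05                      # dilution: keep each entry with probability p

# Symmetric matrix with iid +/-1 entries, randomly diluted and rescaled so the
# eigenvalue density should approach the semicircle law on [-2, 2].
a = rng.choice([-1.0, 1.0], size=(N, N))
mask = rng.random((N, N)) < p
h = np.triu(a * mask, 1)
h = (h + h.T) / np.sqrt(N * p)         # entry variance p, so rescale by sqrt(N p)

eig = np.linalg.eigvalsh(h)
hist, edges = np.histogram(eig, bins=30, range=(-2.2, 2.2), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.sqrt(np.clip(4 - centres**2, 0, None)) / (2 * np.pi)
print("max deviation from semicircle density:", np.abs(hist - semicircle).max().round(3))
```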
ERIC Educational Resources Information Center
Larwin, Karen H.; Larwin, David A.
2011-01-01
Bootstrapping methods and random distribution methods are increasingly recommended as better approaches for teaching students about statistical inference in introductory-level statistics courses. The authors examined the effect of teaching undergraduate business statistics students using random distribution and bootstrapping simulations. It is the…
NASA Astrophysics Data System (ADS)
Gromov, Yu Yu; Minin, Yu V.; Ivanova, O. G.; Morozova, O. N.
2018-03-01
Multidimensional discrete probability distributions of independent random variables were obtained. Their one-dimensional distributions are widely used in probability theory. Generating functions of these multidimensional distributions were also derived.
A Practical Approach to Identity on Digital Ecosystems Using Claim Verification and Trust
NASA Astrophysics Data System (ADS)
McLaughlin, Mark; Malone, Paul
Central to the ethos of digital ecosystems (DEs) is that DEs should be distributed and have no central points of failure or control. This essentially mandates a decentralised system, which poses significant challenges for identity. Identity in decentralised environments must be treated very differently to identity in traditional environments, where centralised naming, authentication and authorisation can be assumed, and where identifiers can be considered global and absolute. In the absence of such guarantees we have expanded on the OPAALS identity model to produce a general implementation for the OPAALS DE that uses a combination of identity claim verification protocols and trust to give assurances in place of centralised servers. We outline how the components of this implementation function and give an illustrated workflow of how identity issues are solved on the OPAALS DE in practice.
Computer simulation of random variables and vectors with arbitrary probability distribution laws
NASA Technical Reports Server (NTRS)
Bogdan, V. M.
1981-01-01
Assume that there is given an arbitrary n-dimensional probability distribution F. A recursive construction is found for a sequence of functions x_1 = f_1(U_1, ..., U_n), ..., x_n = f_n(U_1, ..., U_n) such that if U_1, ..., U_n are independent random variables having uniform distribution over the open interval (0,1), then the joint distribution of the variables x_1, ..., x_n coincides with the distribution F. Since uniform independent random variables can be well simulated by means of a computer, this result allows one to simulate arbitrary n-random variables if their joint probability distribution is known.
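A small sketch of this construction for n = 2 is shown below, using a conditional inverse-CDF (Rosenblatt-type) factorization; the particular target distribution (X_1 exponential, X_2 conditionally exponential given X_1) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sketch for n = 2: x_1 = f_1(U_1), x_2 = f_2(U_1, U_2), where each f inverts a
# (conditional) CDF of the target joint distribution F.
# Illustrative target: X1 ~ Exp(1) and X2 | X1 = x1 ~ Exp(1 + x1).

def f1(u1):
    return -np.log1p(-u1)                     # inverse CDF of Exp(1)

def f2(u1, u2):
    x1 = f1(u1)
    return -np.log1p(-u2) / (1.0 + x1)        # inverse conditional CDF given X1 = x1

u1, u2 = rng.random(100_000), rng.random(100_000)
x1, x2 = f1(u1), f2(u1, u2)

# Check one implied property: E[X2 | X1 = x1] = 1 / (1 + x1).
sel = (x1 > 0.9) & (x1 < 1.1)
print("empirical E[X2 | X1 ~ 1]:", round(float(x2[sel].mean()), 3), " theory:", 0.5)
```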
The Evolution of Random Number Generation in MUVES
2017-01-01
This report documents the evolution of random number generation in MUVES, including the mathematical basis and statistical justification for algorithms used in the code. The working code provided produces results identical to the current ... questionable numerical and statistical properties. The development of the modern system is traced through software change requests, resulting in a random number ...
Computer simulation results for bounds on the effective conductivity of composite media
NASA Astrophysics Data System (ADS)
Smith, P. A.; Torquato, S.
1989-02-01
This paper studies the determination of third- and fourth-order bounds on the effective conductivity σe of a composite material composed of aligned, infinitely long, identical, partially penetrable, circular cylinders of conductivity σ2 randomly distributed throughout a matrix of conductivity σ1. Both bounds involve the microstructural parameter ζ2 which is a multifold integral that depends upon S3, the three-point probability function of the composite. This key integral ζ2 is computed (for the possible range of cylinder volume fraction φ2) using a Monte Carlo simulation technique for the penetrable-concentric-shell model in which cylinders are distributed with an arbitrary degree of impenetrability λ, 0≤λ≤1. Results for the limiting cases λ=0 ("fully penetrable" or randomly centered cylinders) and λ=1 ("totally impenetrable" cylinders) compare very favorably with theoretical predictions made by Torquato and Beasley [Int. J. Eng. Sci. 24, 415 (1986)] and by Torquato and Lado [Proc. R. Soc. London Ser. A 417, 59 (1988)], respectively. Results are also reported for intermediate values of λ: cases which heretofore have not been examined. For a wide range of α=σ2/σ1 (conductivity ratio) and φ2, the third-order bounds on σe significantly improve upon second-order bounds which just depend upon φ2. The fourth-order bounds are, in turn, narrower than the third-order bounds. Moreover, when the cylinders are highly conducting (α≫1), the fourth-order lower bound provides an excellent estimate of the effective conductivity for a wide range of volume fractions.
NASA Astrophysics Data System (ADS)
Maćkowiak-Pawłowska, Maja; Przybyła, Piotr
2018-05-01
The incomplete particle identification limits the experimentally-available phase space region for identified particle analysis. This problem affects ongoing fluctuation and correlation studies including the search for the critical point of strongly interacting matter performed on SPS and RHIC accelerators. In this paper we provide a procedure to obtain nth order moments of the multiplicity distribution using the identity method, generalising previously published solutions for n=2 and n=3. Moreover, we present an open source software implementation of this computation, called Idhim, that allows one to obtain the true moments of identified particle multiplicity distributions from the measured ones provided the response function of the detector is known.
Stochastic space interval as a link between quantum randomness and macroscopic randomness?
NASA Astrophysics Data System (ADS)
Haug, Espen Gaarder; Hoff, Harald
2018-03-01
For many stochastic phenomena, we observe statistical distributions that have fat-tails and high-peaks compared to the Gaussian distribution. In this paper, we will explain how observable statistical distributions in the macroscopic world could be related to the randomness in the subatomic world. We show that fat-tailed (leptokurtic) phenomena in our everyday macroscopic world are ultimately rooted in Gaussian - or very close to Gaussian-distributed subatomic particle randomness, but they are not, in a strict sense, Gaussian distributions. By running a truly random experiment over a three and a half-year period, we observed a type of random behavior in trillions of photons. Combining our results with simple logic, we find that fat-tailed and high-peaked statistical distributions are exactly what we would expect to observe if the subatomic world is quantized and not continuously divisible. We extend our analysis to the fact that one typically observes fat-tails and high-peaks relative to the Gaussian distribution in stocks and commodity prices and many aspects of the natural world; these instances are all observable and documentable macro phenomena that strongly suggest that the ultimate building blocks of nature are discrete (e.g. they appear in quanta).
Distributed Detection with Collisions in a Random, Single-Hop Wireless Sensor Network
2013-05-26
Distributed detection with collisions in a random, single-hop wireless sensor network. Gene T. Whipps (U.S. Army Research Laboratory, Adelphi, MD 20783, and The Ohio State University), Emre Ertin and Randolph L. Moses (The Ohio State University). Approved for public release; distribution is unlimited. We consider the problem of ...
Goal-directedness and personal identity as correlates of life outcomes.
Goldman, Barry M; Masterson, Suzanne S; Locke, Edwin A; Groth, Markus; Jensen, David G
2002-08-01
Although much research has been conducted on goal setting, researchers have not examined goal-directedness or propensity to set goals as a stable human characteristic in adults. In this study, a survey was developed and distributed to 104 adult participants to assess their goal-directedness, personal identity, and various life outcomes. A theoretical model was developed and tested using structural equation modeling that proposed that both goal-directedness and personal identity should positively influence important life outcomes. Analysis showed that goal-directedness and personal identity are positively related to personal well-being, salary, and marital satisfaction. Further, personal identity was positively related to job satisfaction but, contrary to related research, goal-directedness did not predict job satisfaction.
Kahn, Kimberly Barsamian; Lee, J Katherine; Renauer, Brian; Henning, Kris R; Stewart, Greg
2017-01-01
This study examines the role of perceived phenotypic racial stereotypicality and race-based social identity threat on racial minorities' trust and cooperation with police. We hypothesize that in police interactions, racial minorities' phenotypic racial stereotypicality may increase race-based social identity threat, which will lead to distrust and decreased participation with police. Racial minorities (Blacks, Latinos, Native Americans, and multi-racials) and Whites from a representative random sample of city residents were surveyed about policing attitudes. A serial multiple mediation model confirmed that racial minorities' self-rated phenotypic racial stereotypicality indirectly affected future cooperation through social identity threat and trust. Due to the lack of negative group stereotypes in policing, the model did not hold for Whites. This study provides evidence that phenotypic stereotypicality influences racial minorities' psychological experiences interacting with police.
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
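The gamma-mixed Poisson (negative binomial) case described here is easy to reproduce numerically; the sketch below uses arbitrary gamma parameters and simply compares the sample variance to the sample mean.

```python
import numpy as np

rng = np.random.default_rng(8)
n, shape, scale = 200_000, 2.0, 1.5          # gamma mixing parameters (illustrative)

lam = rng.gamma(shape, scale, size=n)        # random rate per subject
counts = rng.poisson(lam)                    # gamma-mixed Poisson = negative binomial

print("mean     :", counts.mean().round(3))  # ~ shape*scale = 3
print("variance :", counts.var().round(3))   # ~ mean*(1 + scale) = 7.5, i.e. overdispersed
print("a plain Poisson with this mean would have variance", counts.mean().round(3))
```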
Wang, Bo; Anthony, Stephen M; Bae, Sung Chul; Granick, Steve
2009-09-08
We describe experiments using single-particle tracking in which mean-square displacement is simply proportional to time (Fickian), yet the distribution of displacement probability is not Gaussian as should be expected of a classical random walk but, instead, is decidedly exponential for large displacements, the decay length of the exponential being proportional to the square root of time. The first example is when colloidal beads diffuse along linear phospholipid bilayer tubes whose radius is the same as that of the beads. The second is when beads diffuse through entangled F-actin networks, bead radius being less than one-fifth of the actin network mesh size. We explore the relevance to dynamic heterogeneity in trajectory space, which has been extensively discussed regarding glassy systems. Data for the second system might suggest activated diffusion between pores in the entangled F-actin networks, in the same spirit as activated diffusion and exponential tails observed in glassy systems. But the first system shows exceptionally rapid diffusion, nearly as rapid as for identical colloids in free suspension, yet still displaying an exponential probability distribution as in the second system. Thus, although the exponential tail is reminiscent of glassy systems, in fact, these dynamics are exceptionally rapid. We also compare with particle trajectories that are at first subdiffusive but Fickian at the longest measurement times, finding that displacement probability distributions fall onto the same master curve in both regimes. The need is emphasized for experiments, theory, and computer simulation to allow definitive interpretation of this simple and clean exponential probability distribution.
A computational framework to empower probabilistic protein design
Fromer, Menachem; Yanover, Chen
2008-01-01
Motivation: The task of engineering a protein to perform a target biological function is known as protein design. A commonly used paradigm casts this functional design problem as a structural one, assuming a fixed backbone. In probabilistic protein design, positional amino acid probabilities are used to create a random library of sequences to be simultaneously screened for biological activity. Clearly, certain choices of probability distributions will be more successful in yielding functional sequences. However, since the number of sequences is exponential in protein length, computational optimization of the distribution is difficult. Results: In this paper, we develop a computational framework for probabilistic protein design following the structural paradigm. We formulate the distribution of sequences for a structure using the Boltzmann distribution over their free energies. The corresponding probabilistic graphical model is constructed, and we apply belief propagation (BP) to calculate marginal amino acid probabilities. We test this method on a large structural dataset and demonstrate the superiority of BP over previous methods. Nevertheless, since the results obtained by BP are far from optimal, we thoroughly assess the paradigm using high-quality experimental data. We demonstrate that, for small scale sub-problems, BP attains identical results to those produced by exact inference on the paradigmatic model. However, quantitative analysis shows that the distributions predicted significantly differ from the experimental data. These findings, along with the excellent performance we observed using BP on the smaller problems, suggest potential shortcomings of the paradigm. We conclude with a discussion of how it may be improved in the future. Contact: fromer@cs.huji.ac.il PMID:18586717
The what-where trade-off in multiple-identity tracking.
Cohen, Michael A; Pinto, Yair; Howe, Piers D L; Horowitz, Todd S
2011-07-01
Observers are poor at reporting the identities of objects that they have successfully tracked (Pylyshyn, Visual Cognition, 11, 801-822, 2004; Scholl & Pylyshyn, Cognitive Psychology, 38, 259-290, 1999). Consequently, it has been claimed that objects are tracked in a manner that does not encode their identities (Pylyshyn, 2004). Here, we present evidence that disputes this claim. In a series of experiments, we show that attempting to track the identities of objects can decrease an observer's ability to track the objects' locations. This indicates that the mechanisms that track, respectively, the locations and identities of objects draw upon a common resource. Furthermore, we show that this common resource can be voluntarily distributed between the two mechanisms. This is clear evidence that the location- and identity-tracking mechanisms are not entirely dissociable.
Ishibashi, Chika; Horiguchi, Itsuko; Sumikura, Hiroyuki; Inada, Eiichi
2014-12-01
In Japan, it has been thought that pain during labor develops maternal identity, and there are cultural and psychological barriers to the use of epidural labor analgesia. The objective of this study was to examine epidemiologic and psychological data about satisfaction with delivery and maternal identity with epidural labor analgesia. A web-based survey was conducted in a random sample of 1,000 women (ages 20-40 years) with children under the age of 3 years. The questionnaire included the basic characteristics of the participants and their children, their experiences with delivery, and two scales to evaluate satisfaction with delivery and maternal identity. There were a total of 1,030 respondents, and 50 (5.0%) reported having had epidural labor analgesia. Scores on the self-evaluation scales for satisfaction with delivery and maternal identity among women who had epidural labor analgesia were not significantly different from those among women who had spontaneous delivery. Satisfaction with delivery and maternal identity are not influenced by choosing epidural labor analgesia.
Gjini, Erida; Haydon, Daniel T; David Barry, J; Cobbold, Christina A
2014-01-21
Genetic diversity in multigene families is shaped by multiple processes, including gene conversion and point mutation. Because multi-gene families are involved in crucial traits of organisms, quantifying the rates of their genetic diversification is important. With increasing availability of genomic data, there is a growing need for quantitative approaches that integrate the molecular evolution of gene families with their higher-scale function. In this study, we integrate a stochastic simulation framework with population genetics theory, namely the diffusion approximation, to investigate the dynamics of genetic diversification in a gene family. Duplicated genes can diverge and encode new functions as a result of point mutation, and become more similar through gene conversion. To model the evolution of pairwise identity in a multigene family, we first consider all conversion and mutation events in a discrete manner, keeping track of their details and times of occurrence; second we consider only the infinitesimal effect of these processes on pairwise identity accounting for random sampling of genes and positions. The purely stochastic approach is closer to biological reality and is based on many explicit parameters, such as conversion tract length and family size, but is more challenging analytically. The population genetics approach is an approximation accounting implicitly for point mutation and gene conversion, only in terms of per-site average probabilities. Comparison of these two approaches across a range of parameter combinations reveals that they are not entirely equivalent, but that for certain relevant regimes they do match. As an application of this modelling framework, we consider the distribution of nucleotide identity among VSG genes of African trypanosomes, representing the most prominent example of a multi-gene family mediating parasite antigenic variation and within-host immune evasion. © 2013 Published by Elsevier Ltd. All rights reserved.
Quantum random number generator based on quantum nature of vacuum fluctuations
NASA Astrophysics Data System (ADS)
Ivanova, A. E.; Chivilikhin, S. A.; Gleim, A. V.
2017-11-01
Quantum random number generators (QRNGs) allow true random bit sequences to be obtained. In QRNGs based on the quantum nature of vacuum, an optical beam splitter with two inputs and two outputs is normally used. We compare mathematical descriptions of a spatial beam splitter and a fiber Y-splitter in the quantum model for a QRNG based on homodyne detection. These descriptions are identical, which allows fiber Y-splitters to be used in practical QRNG schemes, simplifying the setup. We also derive relations between the input radiation and the resulting differential current in the homodyne detector. We experimentally demonstrate the possibility of generating true random bits using a QRNG based on homodyne detection with a Y-splitter.
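The digitisation step of such a scheme can be caricatured in a few lines. In the sketch below the differential homodyne current is merely simulated as Gaussian noise (an assumption standing in for the measured vacuum quadrature); a real device additionally needs calibration against classical noise and randomness extraction.

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulated differential photocurrent: approximately a zero-mean Gaussian
# sample of the vacuum quadrature in an idealised homodyne QRNG.
samples = rng.normal(loc=0.0, scale=1.0, size=1_000_000)
bits = (samples > np.median(samples)).astype(np.uint8)   # threshold at the median

# Simple sanity checks on the raw bit stream: bias and lag-1 autocorrelation.
ones = bits.mean()
autocorr = np.corrcoef(bits[:-1], bits[1:])[0, 1]
print(f"fraction of ones: {ones:.4f}, lag-1 autocorrelation: {autocorr:.5f}")
```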
Managing identity impacts associated with disclosure of HIV status: a qualitative investigation
Frye, Victoria; Fortin, Princess; MacKenzie, Sonja; Purcell, David; Edwards, Lorece V.; Mitchell, Shannon Gwin; Valverde, Eduardo; Garfein, R.; Metsch, Lisa; Latka, Mary H
2011-01-01
Disclosure of HIV status to potential and current sex partners by HIV-positive people (HIVPP) is a complex issue that has received a significant amount of attention. Research has found that disclosure depends upon the evaluation by HIVPP of potential benefits and risks, especially of the risks stemming from the profound social stigma of HIV and AIDS. Drawing on concepts from Goffman’s classic stigma theory and Anderson’s more recently developed cultural-identity theory of drug abuse, we analyzed data from in-depth, post-intervention qualitative interviews with 116 heterosexually active, HIV-positive injection drug users enrolled in a randomized trial of a behavioral intervention to prevent HIV transmission. We explored how disclosure experiences lead to “identity impacts” defined as: (1) identity challenges (i.e. interactions that challenge an individual’s self-concept as a “normal” or non-deviant individual); and (2) identity transformations (i.e. processes whereby an individual comes to embrace a new identity and reject behaviors and values of an old one, resulting in the conscious adoption of a social and/or public identity as an HIV-positive individual). Participants engaged in several strategies to manage the identity impacts associated with disclosure. Implications of these findings for research and prevention programming are discussed. PMID:20024764
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
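A quick way to see the effect described here is to fit both a log-normal and a Gaussian to sums of positive, strongly skewed summands. The choice of log-normal summands and the parameter values below are illustrative assumptions, not the paper's analytical example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Sums of n iid positive, strongly skewed variables (log-normal summands here,
# as one illustrative member of the class of positive random variables).
n, size = 25, 50_000
s = rng.lognormal(mean=0.0, sigma=1.5, size=(size, n)).sum(axis=1)

# Compare goodness of fit of a log-normal and of a Gaussian to the sums.
ln_params = stats.lognorm.fit(s, floc=0)
norm_params = stats.norm.fit(s)
print("KS distance, log-normal fit:", round(stats.kstest(s, "lognorm", ln_params).statistic, 3))
print("KS distance, Gaussian fit  :", round(stats.kstest(s, "norm", norm_params).statistic, 3))
```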
Forgetting What Was Where: The Fragility of Object-Location Binding
Pertzov, Yoni; Dong, Mia Yuan; Peich, Muy-Cheng; Husain, Masud
2012-01-01
Although we frequently take advantage of memory for objects' locations in everyday life, understanding how an object’s identity is bound correctly to its location remains unclear. Here we examine how information about object identity, location and, crucially, object-location associations is differentially susceptible to forgetting, over variable retention intervals and memory load. In our task, participants relocated objects to their remembered locations using a touchscreen. When participants mislocalized objects, their reports were clustered around the locations of other objects in the array, rather than occurring randomly. These ‘swap’ errors could not be attributed to simple failure to remember either the identity or location of the objects, but rather appeared to arise from failure to bind object identity and location in memory. Moreover, such binding failures significantly contributed to decline in localization performance over retention time. We conclude that when objects are forgotten they do not disappear completely from memory, but rather it is the links between identity and location that are prone to be broken over time. PMID:23118956
Failure-Time Distribution Of An m-Out-of-n System
NASA Technical Reports Server (NTRS)
Scheuer, Ernest M.
1988-01-01
Formulas for reliability are extended to more general cases. They are useful in analyses of the reliabilities of practical systems and structures, especially redundant systems of identical components among which operating loads are distributed equally.
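For the special case highlighted here (a redundant system of identical, independent components sharing the load equally), the reliability of an m-out-of-n system reduces to a binomial sum; a minimal sketch follows, with the 2-out-of-3 example values and the exponential-lifetime rate being arbitrary.

```python
from math import comb, exp

def m_out_of_n_reliability(m, n, p):
    """Probability that at least m of n identical, independent components
    (each with reliability p) are working."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

# Example: a 2-out-of-3 redundant system with component reliability 0.9.
print(m_out_of_n_reliability(2, 3, 0.9))                  # 0.972

# Failure-time distribution when each component has an exponential lifetime
# with rate lam: evaluate the same formula at p = exp(-lam * t).
lam, t = 0.1, 5.0
print(1.0 - m_out_of_n_reliability(2, 3, exp(-lam * t)))  # system failure CDF at time t
```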
Heavy-tailed distribution of cyber-risks
NASA Astrophysics Data System (ADS)
Maillart, T.; Sornette, D.
2010-06-01
With the development of the Internet, new kinds of massive epidemics, distributed attacks, virtual conflicts and criminality have emerged. We present a study of some striking statistical properties of cyber-risks that quantify the distribution and time evolution of information risks on the Internet, to understand their mechanisms, and create opportunities to mitigate, control, predict and insure them at a global scale. First, we report an exceptionally stable power-law tail distribution of personal identity losses per event, Pr(ID loss ≥ V) ~ 1/V^b, with b = 0.7 ± 0.1. This result is robust against a surprisingly strong non-stationary growth of ID losses culminating in July 2006 followed by a more stationary phase. Moreover, this distribution is identical for different types and sizes of targeted organizations. Since b < 1, the cumulative number of all losses over all events up to time t increases faster than linearly with time, according to ≃ t^{1/b}, suggesting that privacy, characterized by personal identities, is necessarily becoming more and more insecure. We also show the existence of a size effect, such that the largest possible ID losses per event grow faster than linearly, as ~S^{1.3}, with the organization size S. The small value b ≃ 0.7 of the power law distribution of ID losses is explained by the interplay between Zipf’s law and the size effect. We also infer that compromised entities exhibit basically the same probability of incurring a small or large loss.
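The faster-than-linear growth of cumulative losses when b < 1 is straightforward to reproduce by simulation; the sketch below draws Pareto-tailed losses with the quoted exponent (the minimum loss size and number of events are arbitrary assumptions) and estimates the growth exponent of the cumulative sum.

```python
import numpy as np

rng = np.random.default_rng(11)
b, v_min, n_events = 0.7, 1.0, 200_000

# Pareto-tailed losses with Pr(V >= v) = (v_min / v)**b and b < 1 (infinite mean),
# via inverse-transform sampling.
losses = v_min * rng.random(n_events) ** (-1.0 / b)

# The cumulative loss after t events grows roughly like t**(1/b), i.e. faster
# than linearly; the estimated exponents below should scatter around 1/b ~ 1.43.
t = np.array([1_000, 10_000, 100_000, 200_000])
cum = np.cumsum(losses)[t - 1]
exponents = np.log(cum[1:] / cum[:-1]) / np.log(t[1:] / t[:-1])
print(np.round(exponents, 2))
```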
Probability distribution of the entanglement across a cut at an infinite-randomness fixed point
NASA Astrophysics Data System (ADS)
Devakul, Trithep; Majumdar, Satya N.; Huse, David A.
2017-03-01
We calculate the probability distribution of entanglement entropy S across a cut of a finite one-dimensional spin chain of length L at an infinite-randomness fixed point using Fisher's strong randomness renormalization group (RG). Using the random transverse-field Ising model as an example, the distribution is shown to take the form p(S|L) ~ L^{-ψ(k)}, where k ≡ S/ln[L/L0], the large deviation function ψ(k) is found explicitly, and L0 is a nonuniversal microscopic length. We discuss the implications of such a distribution on numerical techniques that rely on entanglement, such as matrix-product-state-based techniques. Our results are verified with numerical RG simulations, as well as the actual entanglement entropy distribution for the random transverse-field Ising model which we calculate for large L via a mapping to Majorana fermions.
A scaling law for random walks on networks
Perkins, Theodore J.; Foxall, Eric; Glass, Leon; Edwards, Roderick
2014-01-01
The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics. PMID:25311870
Partial transpose of random quantum states: Exact formulas and meanders
NASA Astrophysics Data System (ADS)
Fukuda, Motohisa; Śniady, Piotr
2013-04-01
We investigate the asymptotic behavior of the empirical eigenvalues distribution of the partial transpose of a random quantum state. The limiting distribution was previously investigated via Wishart random matrices indirectly (by approximating the matrix of trace 1 by the Wishart matrix of random trace) and shown to be the semicircular distribution or the free difference of two free Poisson distributions, depending on how dimensions of the concerned spaces grow. Our use of Wishart matrices gives exact combinatorial formulas for the moments of the partial transpose of the random state. We find three natural asymptotic regimes in terms of geodesics on the permutation groups. Two of them correspond to the above two cases; the third one turns out to be a new matrix model for the meander polynomials. Moreover, we prove the convergence to the semicircular distribution together with its extreme eigenvalues under weaker assumptions, and show large deviation bound for the latter.
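A direct numerical look at this object is easy with dense linear algebra. The sketch below builds a random induced state from a square Ginibre matrix (the subsystem dimensions and the square-environment choice are illustrative assumptions, so no particular limiting law is singled out) and inspects the spectrum of its partial transpose.

```python
import numpy as np

rng = np.random.default_rng(12)
n, m = 32, 32                                 # dimensions of the two subsystems
d = n * m

# Random induced state rho = G G† / Tr(G G†) with G a square complex Ginibre matrix.
g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = g @ g.conj().T
rho /= np.trace(rho).real

# Partial transpose over the second subsystem: (i,j;k,l) -> (i,l;k,j).
rho_pt = (rho.reshape(n, m, n, m)
             .transpose(0, 3, 2, 1)
             .reshape(d, d))

eig = np.linalg.eigvalsh(rho_pt) * d          # rescale so the spectrum is O(1)
print("fraction of negative eigenvalues:", (eig < 0).mean().round(3))
print("spectral range:", eig.min().round(2), "to", eig.max().round(2))
```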
Turbulence hierarchy in a random fibre laser
González, Iván R. Roa; Lima, Bismarck C.; Pincheira, Pablo I. R.; Brum, Arthur A.; Macêdo, Antônio M. S.; Vasconcelos, Giovani L.; de S. Menezes, Leonardo; Raposo, Ernesto P.; Gomes, Anderson S. L.; Kashyap, Raman
2017-01-01
Turbulence is a challenging feature common to a wide range of complex phenomena. Random fibre lasers are a special class of lasers in which the feedback arises from multiple scattering in a one-dimensional disordered cavity-less medium. Here we report on statistical signatures of turbulence in the distribution of intensity fluctuations in a continuous-wave-pumped erbium-based random fibre laser, with random Bragg grating scatterers. The distribution of intensity fluctuations in an extensive data set exhibits three qualitatively distinct behaviours: a Gaussian regime below threshold, a mixture of two distributions with exponentially decaying tails near the threshold and a mixture of distributions with stretched-exponential tails above threshold. All distributions are well described by a hierarchical stochastic model that incorporates Kolmogorov’s theory of turbulence, which includes energy cascade and the intermittence phenomenon. Our findings have implications for explaining the remarkably challenging turbulent behaviour in photonics, using a random fibre laser as the experimental platform. PMID:28561064
Explicit equilibria in a kinetic model of gambling
NASA Astrophysics Data System (ADS)
Bassetti, F.; Toscani, G.
2010-06-01
We introduce and discuss a nonlinear kinetic equation of Boltzmann type which describes the evolution of wealth in a pure gambling process, where the entire sum of wealths of two agents is up for gambling, and randomly shared between the agents. For this equation the analytical form of the steady states is found for various realizations of the random fraction of the sum which is shared to the agents. Among others, the exponential distribution appears as steady state in case of a uniformly distributed random fraction, while Gamma distribution appears for a random fraction which is Beta distributed. The case in which the gambling game is only conservative-in-the-mean is shown to lead to an explicit heavy tailed distribution.
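The uniform-fraction case has a simple Monte Carlo illustration: repeatedly pick two agents, pool their wealth, and split it with a uniformly distributed random fraction. In the sketch below (population size and number of interactions are arbitrary), the steady state is close to exponential, for which mean and standard deviation coincide.

```python
import numpy as np

rng = np.random.default_rng(13)
n_agents, n_rounds = 2000, 400_000
w = np.ones(n_agents)                          # initial wealth; total is conserved

pairs = rng.integers(n_agents, size=(n_rounds, 2))
fracs = rng.random(n_rounds)                   # uniformly distributed random fraction

for (i, j), r in zip(pairs, fracs):
    if i == j:
        continue
    total = w[i] + w[j]                        # the entire sum is up for gambling
    w[i], w[j] = r * total, (1.0 - r) * total

# For an exponential steady state the sample mean and standard deviation agree.
print("mean:", round(float(w.mean()), 3), " std:", round(float(w.std()), 3))
```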
Narrow-band generation in random distributed feedback fiber laser.
Sugavanam, Srikanth; Tarasov, Nikita; Shu, Xuewen; Churkin, Dmitry V
2013-07-15
Narrow-band emission with a spectral width down to ~0.05 nm is achieved in a random distributed feedback fiber laser employing a narrow-band fiber Bragg grating or a fiber Fabry-Perot interferometer filter. The observed line-width is ~10 times narrower than that of other random distributed feedback fiber lasers demonstrated to date. The random DFB laser with the Fabry-Perot interferometer filter provides simultaneously multi-wavelength and narrow-band (within each line) generation, with the possibility of further wavelength tuning.
Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution
NASA Astrophysics Data System (ADS)
Zhao, Chen; Sichitiu, Mihail L.
Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous research on theoretical analysis of contact time distribution for random walk models (RW) assume that the contact events can be modeled as either consecutive random walks or direct traversals, which are two extreme cases of random walk, thus with two different conclusions. In this paper we conduct a comprehensive research on this topic in the hope of bridging the gap between the two extremes. The conclusions from the two extreme cases will result in a power-law or exponential tail in the contact time distribution, respectively. However, we show that the actual distribution will vary between the two extremes: a power-law-sub-exponential dichotomy, whose transition point depends on the average flight duration. Through simulation results we show that such conclusion also applies to random waypoint.
NASA Astrophysics Data System (ADS)
Slanina, J.; Möls, J. J.; Baard, J. H.
The results of a wet deposition monitoring experiment, carried out by eight identical wet-only precipitation samplers operating on the basis of 24 h samples, have been used to investigate the accuracy and uncertainties in wet deposition measurements. The experiment was conducted near Lelystad, The Netherlands, over the period 1 March 1983-31 December 1985. By rearranging the data for one to eight samplers and sampling periods of 1 day to 1 month, both systematic and random errors were investigated as a function of measuring strategy. A Gaussian distribution of the results was observed. Outliers, detected by a Dixon test (α = 0.05), strongly influenced both the yearly averaged results and the standard deviation of this average as a function of the number of samplers and the length of the sampling period. The systematic bias, using one sampler, varies typically from 2 to 20% for bulk elements and from 10 to 500% for trace elements. Severe problems are encountered in the case of Zn, Cu, Cr, Ni and especially Cd. For the sensitive detection of trends, generally more than one sampler per measuring station is necessary, as the standard deviation in the yearly averaged wet deposition is typically 10-20% relative for one sampler. Using three identical samplers, trends of, e.g., 3% per year will generally be detected within 6 years.
IS THE SUICIDE RATE A RANDOM WALK?
Yang, Bijou; Lester, David; Lyke, Jennifer; Olsen, Robert
2015-06-01
The yearly suicide rates for the period 1933-2010 and the daily suicide numbers for 1990 and 1991 were examined for whether the distribution of difference scores (from year to year and from day to day) fitted a normal distribution, a characteristic of stochastic processes that follow a random walk. If the suicide rate were a random walk, then any disturbance to the suicide rate would have a permanent effect and national suicide prevention efforts would likely fail. The distribution of difference scores from day to day (but not the difference scores from year to year) fitted a normal distribution and, therefore, were consistent with a random walk.
NASA Astrophysics Data System (ADS)
Deperas-Standylo, Joanna; Gudowska-Nowak, Ewa; Ritter, Sylvia
2014-07-01
Cytogenetic data accumulated from the experiments with peripheral blood lymphocytes exposed to densely ionizing radiation clearly demonstrate that for particles with linear energy transfer (LET) >100 keV/μm the derived relative biological effectiveness (RBE) will strongly depend on the time point chosen for the analysis. A reasonable prediction of radiation-induced chromosome damage and its distribution among cells can be achieved by exploiting Monte Carlo methodology along with the information about the radius of the penetrating ion-track and the LET of the ion beam. In order to examine the relationship between the track structure and the distribution of aberrations induced in human lymphocytes and to clarify the correlation between delays in the cell cycle progression and the aberration burden visible at the first post-irradiation mitosis, we have analyzed chromosome aberrations in lymphocytes exposed to Fe-ions with LET values of 335 keV/μm and formulated a Monte Carlo model which reflects time-delay in mitosis of aberrant cells. Within the model the frequency distributions of aberrations among cells follow the pattern of local energy distribution and are well approximated by time-dependent compound Poisson statistics. The cell-division cycle of undamaged and aberrant cells and chromosome aberrations are modelled as a renewal process represented by a random sum of (independent and identically distributed) random elements S_N = Σ_{i=0}^{N} X_i. Here N stands for the number of particle traversals of the cell nucleus, each leading to a statistically independent formation of X_i aberrations. The parameter N is itself a random variable and reflects the cell cycle delay of heavily damaged cells. The probability distribution of S_N follows a general law for which the moment generating function satisfies the relation Φ_{S_N} = Φ_N(Φ_{X_i}). Formulation of the Monte Carlo model, which allows one to predict expected fluxes of aberrant and non-aberrant cells, has been based on several pieces of input information: (i) the experimentally measured mitotic index in the population of irradiated cells; (ii) the scored fraction of cells in the first cell cycle; (iii) the estimated average number of particle traversals per cell nucleus. By reconstructing the local dose distribution in the biological target, the relevant amount of lesions induced by ions is estimated from the biological effect induced by photons at the same dose level. Moreover, the total amount of aberrations induced within the entire population has been determined. For each subgroup of intact (non-hit) and aberrant cells the cell-division cycle has been analyzed, reproducing correctly an expected correlation between mitotic delay and the number of aberrations carried by a cell. This observation is of particular importance for the proper estimation of the biological efficiency of ions and for the estimation of health risks associated with radiation exposure.
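The compound-Poisson structure described here can be sketched in a few lines of simulation; the mean number of traversals and the mean aberration yield per traversal below are made-up values, and the cell-cycle-delay dependence of N is omitted.

```python
import numpy as np

rng = np.random.default_rng(14)
n_cells = 100_000
mean_traversals = 2.5        # assumed average number of particle traversals per nucleus
mean_ab_per_hit = 0.8        # assumed average aberration yield per traversal

# Compound Poisson sketch: N traversals per cell, each contributing an independent
# Poisson number of aberrations X_i, so the total per cell is the random sum S_N.
n_traversals = rng.poisson(mean_traversals, size=n_cells)
aberrations = rng.poisson(mean_ab_per_hit * n_traversals)   # sum of N iid Poisson terms

# A compound (Neyman type A) distribution is overdispersed relative to a plain
# Poisson with the same mean.
print("mean:", aberrations.mean().round(3), " variance:", aberrations.var().round(3))
print("fraction of aberration-free cells:", (aberrations == 0).mean().round(3))
```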
Estimation of the Nonlinear Random Coefficient Model when Some Random Effects Are Separable
ERIC Educational Resources Information Center
du Toit, Stephen H. C.; Cudeck, Robert
2009-01-01
A method is presented for marginal maximum likelihood estimation of the nonlinear random coefficient model when the response function has some linear parameters. This is done by writing the marginal distribution of the repeated measures as a conditional distribution of the response given the nonlinear random effects. The resulting distribution…
NASA Astrophysics Data System (ADS)
Seo, Jongmin; Mani, Ali
2018-04-01
Superhydrophobic surfaces demonstrate promising potential for skin friction reduction in naval and hydrodynamic applications. Recent developments of superhydrophobic surfaces aiming for scalable applications use random distributions of roughness, such as spray coating and etching processes. However, most previous analyses of the interaction between flows and superhydrophobic surfaces studied periodic geometries that are economically feasible only in laboratory-scale experiments. In order to assess the drag reduction effectiveness as well as interfacial robustness of superhydrophobic surfaces with randomly distributed textures, we conduct direct numerical simulations of turbulent flows over randomly patterned interfaces considering a range of texture widths w+ ≈ 4-26 and solid fractions ϕs = 11%-25%. Slip and no-slip boundary conditions are implemented in a pattern, modeling the presence of gas-liquid interfaces and solid elements. Our results indicate that the slip of randomly distributed textures under turbulent flows is about 30% less than that of surfaces with aligned features of the same size. In the small texture size limit w+ ≈ 4, the slip length of the randomly distributed textures in turbulent flows is well described by a previously introduced Stokes flow solution of randomly distributed shear-free holes. By comparing DNS results for patterned slip and no-slip boundaries against the corresponding homogenized slip length boundary conditions, we show that turbulent flows over randomly distributed posts can be represented by an isotropic slip length in the streamwise and spanwise directions. The average pressure fluctuation on a gas pocket is similar to that of the aligned features with the same texture size and gas fraction, but the maximum interface deformation at the leading edge of the roughness element is about twice as large when the textures are randomly distributed. The presented analyses provide insights on the implications of texture randomness for the drag reduction performance and robustness of superhydrophobic surfaces.
A numerical approximation to the elastic properties of sphere-reinforced composites
NASA Astrophysics Data System (ADS)
Segurado, J.; Llorca, J.
2002-10-01
Three-dimensional cubic unit cells containing 30 non-overlapping identical spheres randomly distributed were generated using a new, modified random sequential adsorption algorithm suitable for particle volume fractions of up to 50%. The elastic constants of the ensemble of spheres embedded in a continuous and isotropic elastic matrix were computed through finite element analysis of the three-dimensional periodic unit cells, whose size was chosen as a compromise between the minimum size required to obtain accurate results in the statistical sense and the maximum size imposed by the computational cost. Three types of materials were studied: rigid spheres and spherical voids in an elastic matrix, and a typical composite made up of glass spheres in an epoxy resin. The moduli obtained for different unit cells showed very little scatter, and the average values obtained from the analysis of four unit cells could be considered very close to the "exact" solution of the problem, in agreement with the results of Drugan and Willis (J. Mech. Phys. Solids 44 (1996) 497) concerning the size of the representative volume element for elastic composites. They were used to assess the accuracy of three classical analytical models: the Mori-Tanaka mean-field analysis, the generalized self-consistent method, and Torquato's third-order approximation.
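As a rough illustration of the sphere-placement step, the sketch below runs a plain random sequential adsorption (RSA) loop in a periodic unit cell. It is a simplified stand-in for the modified algorithm of the paper (which adds extra moves to reach volume fractions up to 50%), and the radius is an assumption chosen to give a modest volume fraction that plain RSA can reach quickly.

```python
import numpy as np

rng = np.random.default_rng(1)

n_spheres, radius = 30, 0.10   # assumed values (~12.6% volume fraction)
max_trials = 100_000
centers = []

def min_image(d):
    """Minimum-image displacement components in a periodic unit cell."""
    return d - np.rint(d)

trials = 0
while len(centers) < n_spheres and trials < max_trials:
    trials += 1
    c = rng.random(3)  # candidate sphere center
    # Accept only if no overlap with previously placed spheres (periodic images included)
    if all(np.linalg.norm(min_image(c - p)) >= 2 * radius for p in centers):
        centers.append(c)

centers = np.array(centers)
vol_frac = len(centers) * (4.0 / 3.0) * np.pi * radius**3
print(f"placed {len(centers)} spheres in {trials} trials, volume fraction ≈ {vol_frac:.3f}")
```

Plain RSA stalls well below 50% volume fraction, which is precisely why the paper's modified algorithm is needed at high particle contents.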
Diagnostics of Robust Growth Curve Modeling Using Student's "t" Distribution
ERIC Educational Resources Information Center
Tong, Xin; Zhang, Zhiyong
2012-01-01
Growth curve models with different types of distributions of random effects and of intraindividual measurement errors for robust analysis are compared. After demonstrating the influence of distribution specification on parameter estimation, 3 methods for diagnosing the distributions for both random effects and intraindividual measurement errors…
Identity Construction and Reversal Conceptual Transfer among Iranian EFL Learners
ERIC Educational Resources Information Center
Gholaminejad, Razieh
2017-01-01
This article draws on a qualitative study which seeks to explore whether Iranian English as a foreign language learners experience any reversal conceptual transfer and whether they construct two identities as a result of learning a foreign language. The findings from the open-ended questionnaires distributed among 65 undergraduates at the…
Promotional Product Marketing, College Students, and Social Identity
ERIC Educational Resources Information Center
Workman, Jane E.; Freeburg, Beth Winfrey
2008-01-01
This study describes the type and nature of promotional items distributed on university campuses to students; college students typically are in a stage of life characterized by identity exploration. Among 241 students, 90% received at least one promotional item (e.g., T-shirts, pens/pencils, magnets, calendars, water bottles); 58% received at least…
ERIC Educational Resources Information Center
Tsai, Yu-Ling; Chang, Ching-Kuch
2009-01-01
This article reports an alternative approach, called the combinatorial model, to learning multiplicative identities, and investigates the effects of implementing results for this alternative approach. Based on realistic mathematics education theory, the new instructional materials or modules of the new approach were developed by the authors. From…
Performance of statistical models to predict mental health and substance abuse cost.
Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K
2006-10-26
Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model among the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used and the Gamma with square-root link model had convergence problems with small samples. Models with square-root transformation or link fit the data best. This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.
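To make the evaluation criteria concrete, the sketch below computes RMSE, MAPE (mean absolute prediction error, as used in the abstract), and predictive ratios within deciles of predicted cost for a generic prediction vector. It is only an illustration of the metrics, not a re-implementation of the study's models, and the synthetic cost data are assumptions.

```python
import numpy as np

def evaluate(pred, obs, n_groups=10):
    """RMSE, MAPE, and predictive ratios (predicted/observed) by decile of predicted cost."""
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    mape = np.mean(np.abs(pred - obs))           # mean absolute prediction error
    edges = np.quantile(pred, np.linspace(0, 1, n_groups + 1))
    groups = np.clip(np.searchsorted(edges, pred, side="right") - 1, 0, n_groups - 1)
    pr = [pred[groups == g].sum() / obs[groups == g].sum() for g in range(n_groups)]
    return rmse, mape, pr

# Synthetic, skewed "cost" data for illustration only
rng = np.random.default_rng(2)
obs = rng.gamma(shape=0.8, scale=5000.0, size=20_000)
pred = obs * rng.lognormal(mean=0.0, sigma=0.3, size=obs.size)  # noisy predictions

rmse, mape, pr = evaluate(pred, obs)
print(f"RMSE ≈ {rmse:,.0f}, MAPE ≈ {mape:,.0f}")
print("predictive ratios by decile of predicted cost:", np.round(pr, 2))
```

In the study these statistics would be computed for each fitted model (log-transformed OLS, square-root transformed OLS, and the gamma GLMs) and compared via bootstrap confidence intervals.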
NASA Astrophysics Data System (ADS)
Weng, Jingmeng; Wen, Weidong; Cui, Haitao; Chen, Bo
2018-06-01
A new method to generate random distributions of fibers in the transverse cross-section of fiber-reinforced composites with high fiber volume fractions is presented in this paper. Based on microscopy observations of the transverse cross-sections of unidirectional composite laminates, a hexagonal arrangement is set as the initial arrangement, each fiber is assigned an initial velocity of arbitrary magnitude and direction, and the micro-scale representative volume element (RVE) is established by simulating perfectly elastic collisions. Combined with the proposed periodic boundary conditions, which are suitable for multi-axial loading, the effective elastic properties of composite materials can be predicted. The predicted properties show reasonable agreement with experimental results. Comparing the stress fields of an RVE with randomly distributed fibers and an RVE with periodically distributed fibers shows that the predicted elastic modulus of the RVE with randomly distributed fibers is greater than that of the RVE with periodically distributed fibers.
Parental Identity and Its Relation to Parenting and Psychological Functioning in Middle Age
Fadjukoff, Päivi; Pulkkinen, Lea; Lyyra, Anna-Liisa; Kokko, Katja
2016-01-01
SYNOPSIS Objective. This article focuses on identity as a parent in relation to parenting and psychological functioning in middle age. Design. Drawn from the Jyväskylä Longitudinal Study of Personality and Social Development, 162 participants (53% females) with children (age 36), represented the Finnish age-cohort born in 1959. Parental identity was assessed at ages 36, 42, and 50. Results. In both women and men, parental identity achievement increased from age 36 to 42 and remained stable to 50. The level of parental identity achievement was higher in women than in men. Achievement was typical for women and foreclosure for men. Participants’ education, occupational status, and number of offspring were not related to parental identity status. As expected, parental identity achievement was associated with authoritative (indicated by higher nurturance and parental knowledge about the child’s activities) parenting style. No significant associations emerged between parental identity foreclosure and restrictiveness as an indicator of authoritarian parenting style. The diffused men outscored others in parental stress. Achieved parental identity was related to generativity in both genders and to higher psychological and social well-being in men. Conclusions. At present, many parenting programs are targeted to young parents. This study highlighted the importance of a later parenting phase at around age 40, when for many, the children are approaching puberty. Therefore, parenting programs and support should also be designed for middle-aged parents. Specifically men may need additional support for their active consideration and engagement in the fathering role. © Päivi Fadjukoff, Lea Pulkkinen, Anna-Liisa Lyyra, and Katja Kokko This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-No Derivatives License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way. PMID:27019651
Optimal partitioning of random programs across two processors
NASA Technical Reports Server (NTRS)
Nicol, D. M.
1986-01-01
The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
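The approximation at the heart of the analysis, replacing the expected maximum of independent random variables by the maximum of their expectations, is easy to probe numerically. The sketch below compares the two quantities for a pair of module execution times; the exponential distribution and the means are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two independent module execution times (illustrative exponential choice)
mean_a, mean_b = 1.0, 1.2
a = rng.exponential(mean_a, size=1_000_000)
b = rng.exponential(mean_b, size=1_000_000)

expected_max = np.maximum(a, b).mean()     # E[max(A, B)], estimated by Monte Carlo
max_expectation = max(mean_a, mean_b)      # max(E[A], E[B]), the approximation

print(f"E[max(A, B)]      ≈ {expected_max:.3f}")
print(f"max(E[A], E[B])   = {max_expectation:.3f}")
print(f"approximation gap ≈ {expected_max - max_expectation:.3f}")
```

The expected maximum always dominates the maximum expectation, which is the gap the paper's two-processor proof avoids relying on.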
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pereira, Wagner de S; Universidade Federal Fluminense, Programa de Pos-graduacao em Biologia Marinha; Kelecom, Alphonse
2008-08-07
The body distribution of Polonium-210 in three fishes from the Sepetiba Bay (Macrodon ancylodon, Micropogonias furnieri and Mugil curema) has been studied under the approach of the Department of Energy of the United States of America (DOE), which set the limit of absorbed dose rate in biota equal to 3.5×10³ μGy/y, and which also established the relation between dose rate (D) and radionuclide concentration (C) on a fish muscle fresh-weight basis as D = 5.05×E×N×C, assuming that the radionuclide distribution is homogeneous among organs. Two hypotheses were tested here, using statistical tools: 1) is the body distribution of absorbed dose homogeneous among organs? and 2) is the body distribution of absorbed dose identical among the studied fishes? It was concluded, as expected, that the distribution among organs is heterogeneous; but, unexpectedly, that the three fishes display identical body distribution patterns, although they belong to different trophic levels. Hence, concerning absorbed dose calculation, the statement that the data distribution is homogeneous must be understood merely as an approximation, at least in the case of Polonium-210.
Perceiving and Confronting Sexism: The Causal Role of Gender Identity Salience.
Wang, Katie; Dovidio, John F
2017-03-01
Although many researchers have explored the relations among gender identification, discriminatory attributions, and intentions to challenge discrimination, few have examined the causal impact of gender identity salience on women's actual responses to a sexist encounter. In the current study, we addressed this question by experimentally manipulating the salience of gender identity and assessing its impact on women's decision to confront a sexist comment in a simulated online interaction. Female participants ( N = 114) were randomly assigned to complete a short measure of either personal or collective self-esteem, which was designed to increase the salience of personal versus gender identity. They were then given the opportunity to confront a male interaction partner who expressed sexist views. Compared to those who were primed to focus on their personal identity, participants who were primed to focus on their gender identity perceived the interaction partner's remarks as more sexist and were more likely to engage in confrontation. By highlighting the powerful role of subtle contextual cues in shaping women's perceptions of, and responses to, sexism, our findings have important implications for the understanding of gender identity salience as an antecedent of prejudice confrontation. Online slides for instructors who want to use this article for teaching are available on PWQ's website at http://journals.sagepub.com/page/pwq/suppl/index.
Ambrus, Géza Gergely; Dotzer, Maria; Schweinberger, Stefan R; Kovács, Gyula
2017-12-01
Transcranial magnetic stimulation (TMS) and neuroimaging studies suggest a role of the right occipital face area (rOFA) in early facial feature processing. However, the degree to which rOFA is necessary for the encoding of facial identity has been less clear. Here we used a state-dependent TMS paradigm, where stimulation preferentially facilitates attributes encoded by less active neural populations, to investigate the role of the rOFA in face perception and specifically in image-independent identity processing. Participants performed a familiarity decision task for famous and unknown target faces, preceded by brief (200 ms) or longer (3500 ms) exposures to primes which were either an image of a different identity (DiffID), another image of the same identity (SameID), the same image (SameIMG), or a Fourier-randomized noise pattern (NOISE) while either the rOFA or the vertex as control was stimulated by single-pulse TMS. Strikingly, TMS to the rOFA eliminated the advantage of SameID over DiffID condition, thereby disrupting identity-specific priming, while leaving image-specific priming (better performance for SameIMG vs. SameID) unaffected. Our results suggest that the role of rOFA is not limited to low-level feature processing, and emphasize its role in image-independent facial identity processing and the formation of identity-specific memory traces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smedskjaer, Morten M., E-mail: mos@bio.aau.dk; Bauchy, Mathieu; Mauro, John C.
The properties of glass are determined not only by temperature, pressure, and composition, but also by their complete thermal and pressure histories. Here, we show that glasses of identical composition produced through thermal annealing and through quenching from elevated pressure can result in samples with identical density and mean interatomic distances, yet different bond angle distributions, medium-range structures, and, thus, macroscopic properties. We demonstrate that hardness is higher when the density increase is obtained through thermal annealing rather than through pressure-quenching. Molecular dynamics simulations reveal that this arises because pressure-quenching has a larger effect on medium-range order, while annealing has a larger effect on short-range structures (sharper bond angle distribution), which ultimately determine hardness according to bond constraint theory. Our work could open a new avenue towards industrially useful glasses that are identical in terms of composition and density, but with differences in thermodynamic, mechanical, and rheological properties due to unique structural characteristics.
Kück, Patrick; Meusemann, Karen; Dambach, Johannes; Thormann, Birthe; von Reumont, Björn M; Wägele, Johann W; Misof, Bernhard
2010-03-31
Methods of alignment masking, which refers to the technique of excluding alignment blocks prior to tree reconstruction, have been successful in improving the signal-to-noise ratio in sequence alignments. However, the lack of formally well-defined methods to identify randomness in sequence alignments has prevented a routine application of alignment masking. In this study, we compared the effects on tree reconstructions of the most commonly used profiling method (GBLOCKS), which uses a predefined set of rules in combination with alignment masking, with a new profiling approach (ALISCORE) based on Monte Carlo resampling within a sliding window, using different data sets and alignment methods. While the GBLOCKS approach excludes variable sections above a certain threshold, the choice of which is arbitrary, the ALISCORE algorithm is free of a priori rating of parameter space and therefore more objective. ALISCORE was successfully extended to amino acids using a proportional model and empirical substitution matrices to score randomness in multiple sequence alignments. A complex bootstrap resampling leads to an even distribution of scores of randomly similar sequences, which is used to assess the randomness of the observed sequence similarity. Testing performance on real data, both masking methods, GBLOCKS and ALISCORE, helped to improve tree resolution. The sliding window approach was less sensitive to different alignments of identical data sets and performed equally well on all data sets. Concurrently, ALISCORE is capable of dealing with different substitution patterns and heterogeneous base composition. ALISCORE and the most relaxed GBLOCKS gap parameter setting performed best on all data sets. Correspondingly, Neighbor-Net analyses showed the greatest decrease in conflict. Alignment masking improves the signal-to-noise ratio in multiple sequence alignments prior to phylogenetic reconstruction. Given the robust performance of alignment profiling, alignment masking should routinely be used to improve tree reconstructions. Parametric methods of alignment profiling can be easily extended to more complex likelihood-based models of sequence evolution, which opens the possibility of further improvements.
Isonymy structure of Sucre and Táchira, two Venezuelan states.
Rodríguez-Larralde, A; Barrai, I
1997-10-01
The isonymy structure of two Venezuelan states, Sucre and Táchira, is described using the surnames of the Register of Electors updated in 1991. The frequency distribution of surnames pooled together by sex was obtained for the 57 counties of Sucre and the 52 counties of Táchira, based on total population sizes of 158,705 and 160,690 individuals, respectively. The coefficient of consanguinity resulting from random isonymy (φ_ii), Karlin and McGregor's ν, and the proportion of the population included in surnames represented only once (estimator A) and in the seven most frequent surnames (estimator B) were calculated for each county. RST, a measure of microdifferentiation, was estimated for each state. The Euclidean distance between pairs of counties within states was calculated together with the corresponding geographic distances. The correlations between their logarithmic transformations were significant in both cases, indicating differentiation of surnames by distance. Dendrograms based on the Euclidean distance matrix were constructed, and from them a first approximation of the effect of internal migration within states was obtained. Ninety-six percent of the coefficient of consanguinity resulting from random isonymy is determined by the proportion of the population included in the seven most frequent surnames, whereas between 72% and 88% of Karlin and McGregor's ν for Sucre and Táchira, respectively, is determined by the proportion of the population included in surnames represented only once. Surnames with generalized and with focal distributions were identified for both states, to be used as possible indicators of the geographic origin of their carriers. Our results indicate that Táchira's counties, on average, tend to be more isolated than Sucre's counties, as measured by RST, estimator B, and φ_ii. Comparisons with the results obtained for other Venezuelan states and other non-Venezuelan populations are also given.
NASA Astrophysics Data System (ADS)
Zhang, H.; Harter, T.; Sivakumar, B.
2005-12-01
Facies-based geostatistical models have become important tools for the stochastic analysis of flow and transport processes in heterogeneous aquifers. However, little is known about the dependency of these processes on the parameters of facies-based geostatistical models. This study examines the nonpoint source solute transport normal to the major bedding plane in the presence of interconnected high-conductivity (coarse-textured) facies in the aquifer medium and the dependence of the transport behavior upon the parameters of the constitutive facies model. A facies-based Markov chain geostatistical model is used to quantify the spatial variability of the aquifer system hydrostratigraphy. It is integrated with a groundwater flow model and a random walk particle transport model to estimate the solute travel time probability distribution functions (pdfs) for solute flux from the water table to the bottom boundary (production horizon) of the aquifer. The cases examined include two-, three-, and four-facies models with horizontal to vertical facies mean length anisotropy ratios, ek, from 25:1 to 300:1, and with a wide range of facies volume proportions (e.g., from 5% to 95% coarse-textured facies). Predictions of travel time pdfs are found to be significantly affected by the number of hydrostratigraphic facies identified in the aquifer, the proportions of coarse-textured sediments, the mean length of the facies (particularly the ratio of length to thickness of coarse materials), and - to a lesser degree - the juxtapositional preference among the hydrostratigraphic facies. In transport normal to the sedimentary bedding plane, travel time pdfs are not log-normally distributed as is often assumed. Also, macrodispersive behavior (variance of the travel time pdf) was found to not be a unique function of the conductivity variance. The skewness of the travel time pdf varied from negatively skewed to strongly positively skewed within the parameter range examined. We also show that the Markov chain approach may give significantly different travel time pdfs when compared to the more commonly used Gaussian random field approach even though the first and second order moments in the geostatistical distribution of the lnK field are identical. The choice of the appropriate geostatistical model is therefore critical in the assessment of nonpoint source transport.
NASA Astrophysics Data System (ADS)
Zhang, Hua; Harter, Thomas; Sivakumar, Bellie
2006-06-01
Facies-based geostatistical models have become important tools for analyzing flow and mass transport processes in heterogeneous aquifers. Yet little is known about the relationship between these latter processes and the parameters of facies-based geostatistical models. In this study, we examine the transport of a nonpoint source solute normal (perpendicular) to the major bedding plane of an alluvial aquifer medium that contains multiple geologic facies, including interconnected, high-conductivity (coarse textured) facies. We also evaluate the dependence of the transport behavior on the parameters of the constitutive facies model. A facies-based Markov chain geostatistical model is used to quantify the spatial variability of the aquifer system's hydrostratigraphy. It is integrated with a groundwater flow model and a random walk particle transport model to estimate the solute traveltime probability density function (pdf) for solute flux from the water table to the bottom boundary (the production horizon) of the aquifer. The cases examined include two-, three-, and four-facies models, with mean length anisotropy ratios for horizontal to vertical facies, ek, from 25:1 to 300:1 and with a wide range of facies volume proportions (e.g., from 5 to 95% coarse-textured facies). Predictions of traveltime pdfs are found to be significantly affected by the number of hydrostratigraphic facies identified in the aquifer. Those predictions of traveltime pdfs also are affected by the proportions of coarse-textured sediments, the mean length of the facies (particularly the ratio of length to thickness of coarse materials), and, to a lesser degree, the juxtapositional preference among the hydrostratigraphic facies. In transport normal to the sedimentary bedding plane, traveltime is not lognormally distributed as is often assumed. Also, macrodispersive behavior (variance of the traveltime) is found not to be a unique function of the conductivity variance. For the parameter range examined, the third moment of the traveltime pdf varies from negatively skewed to strongly positively skewed. We also show that the Markov chain approach may give significantly different traveltime distributions when compared to the more commonly used Gaussian random field approach, even when the first- and second-order moments in the geostatistical distribution of the lnK field are identical. The choice of the appropriate geostatistical model is therefore critical in the assessment of nonpoint source transport, and uncertainty about that choice must be considered in evaluating the results.
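A minimal ingredient of such facies models can be sketched as a one-dimensional Markov chain in the vertical direction, where the mean run length of each facies sets the probability of staying in the same state from one grid cell to the next and the volume proportions follow from the stationary distribution. The two-facies transition probabilities below are assumptions chosen only to illustrate how mean lengths and proportions enter; the studies above use full transition-probability geostatistics coupled to flow and particle-tracking models.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed illustration: facies 0 = fine-textured, facies 1 = coarse-textured.
mean_len = np.array([10.0, 3.0])     # mean vertical run lengths, in grid cells
leave = 1.0 / mean_len               # probability of leaving the current facies per cell
P = np.array([[1 - leave[0], leave[0]],
              [leave[1], 1 - leave[1]]])   # Markov transition matrix

n_cells = 50_000
state, seq = 0, np.empty(n_cells, dtype=int)
for k in range(n_cells):
    seq[k] = state
    state = rng.choice(2, p=P[state])

print("simulated coarse-facies proportion:", seq.mean())
print("stationary proportion (theory)    :", leave[0] / (leave[0] + leave[1]))

# Check the mean run length of the coarse facies against the target of 3 cells
runs, current = [], 0
for s in seq:
    if s == 1:
        current += 1
    elif current > 0:
        runs.append(current)
        current = 0
if current > 0:
    runs.append(current)
print("simulated mean coarse run length  :", np.mean(runs))
```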
Beelen, D W; Elmaagacli, A; Müller, K D; Hirche, H; Schaefer, U W
1999-05-15
In a single-center open-label prospective study, a total of 134 marrow transplant recipients with hematologic malignancies were randomly assigned to a bacterial decontamination medication using metronidazole and ciprofloxacin (n = 68) or ciprofloxacin alone (n = 66) during 5 weeks posttransplant. The development of grades II to IV acute graft-versus-host disease (GVHD) was defined as the primary study endpoint. According to the intention-to-treat, 17 patients (25%) randomized to the combined decontamination medication and 33 patients (50%) randomized to ciprofloxacin alone developed grades II to IV GVHD (P <.002). The higher frequency of grades II to IV acute GVHD in patients randomized to ciprofloxacin alone resulted from a more than twofold increased number of patients developing liver or intestinal involvement with acute GVHD compared with patients randomized to the combined decontamination medication (P <.003). The influence of the study medication on grades II to IV acute GVHD was significant only in recipients of transplants from genotypically HLA-identical sibling donors (n = 80), whereas in recipients of transplants from donors other than HLA-identical siblings (n = 54), grades II to IV acute GVHD frequencies between the study arms were not significantly different. The combined decontamination was associated with a significant reduction of culture growth of intestinal anaerobic bacteria during 5 weeks posttransplant (P <. 00001). In addition, the number of cultures with growth of anaerobic bacteria (P <.005) as well as the median concentrations of anaerobic bacteria in the posttransplant period (P <.0001) were higher in patients contracting grades II to IV acute GVHD. Neither chronic GVHD nor overall survival was significantly different between the two study arms. In patients with HLA-identical sibling donors who were treated in early disease stages, the 5-year survival estimate was slightly, but not significant, higher after the combined decontamination medication (60% +/- 11%) compared with ciprofloxacin alone (46% +/- 9%). In conclusion, the present study provides evidence that antimicrobial chemotherapy targeted to intestinal anaerobic bacteria in marrow transplant recipients significantly reduces the severity of acute GVHD and supports the theory that the intestinal anaerobic bacterial microflora plays a role in the pathogenesis of acute GVHD after human marrow transplantation.
ERIC Educational Resources Information Center
Keddie, Amanda
2015-01-01
This article explores the politically contentious issue of White working-class student under-achievement within one particular school--a large and culturally diverse comprehensive secondary school in the greater London area. The article examines the equity philosophies and identity politics articulated by staff in their understanding of and…
Hsueh, P R; Teng, L J; Yang, P C; Chen, Y C; Ho, S W; Luh, K T
1998-05-01
We describe herein a recurrent catheter-related (Port-A-Cath; Smiths Industries Medical Systems [SIMS] Deltec, Inc., St. Paul, Minn.) infection caused by multidrug-resistant Mycobacterium chelonae with two colonial morphotypes in a 53-year-old woman with gastric adenocarcinoma. Four isolates recovered from this patient within a 3-month period were found to belong to a single clone on the basis of the isolates' identical antibiotypes as determined by the E test and their identical random amplified polymorphic DNA patterns.
Pattern formation and collective effects in populations of magnetic microswimmers
NASA Astrophysics Data System (ADS)
Vach, Peter J.; Walker, Debora; Fischer, Peer; Fratzl, Peter; Faivre, Damien
2017-03-01
Self-propelled particles are one prototype of synthetic active matter used to understand complex biological processes, such as the coordination of movement in bacterial colonies or schools of fish. Collective patterns such as clusters were observed for such systems, reproducing features of biological organization. However, one limitation of this model is that the synthetic assemblies are made of identical individuals. Here we introduce an active system based on magnetic particles at colloidal scales. We use both identical and randomly shaped magnetic micropropellers and show that they exhibit dynamic and reversible pattern formation.
Neonatal nurse practitioners: identity as advanced practice nurses.
Beal, J A; Maguire, D; Carr, R
1996-06-01
To define how neonatal nurse practitioners (NNPs) perceive their identity as advanced practice nurses. Non-experimental descriptive and correlational survey. Nationwide random sample drawn from NNPs certified by the National Certification Corporation. Two hundred fifty-eight neonatal nurse practitioners practicing in neonatal intensive-care units across the United States. Neonatal Nurse Practitioners indicated on a visual analogue scale at which point their philosophy of practice fell on a continuum from nursing to medicine and specified on a 5-point bipolar Likert scale how various role socialization factors influenced their identity. The NNPs predominantly were certificate-prepared and aligned themselves with a medical philosophy. Those NNPs who were master's-prepared (p < .01), precepted by another NNP (p < .05), espoused a philosophy of nursing (p < .001), belonged to a professional nursing organization (p < .05), and had an NNP role model (p < .001) were more likely to have a strong nursing identity (95% confidence interval). The issues of role differentiation, socialization, and identity of advanced practice nurses in tertiary care need further exploration. These data support the American Nurses' Association mandate of graduate nursing education for advanced nurse practitioners.
Biot, Eric; Adenot, Pierre-Gaël; Hue-Beauvais, Cathy; Houba-Hérin, Nicole; Duranthon, Véronique; Devinoy, Eve; Beaujean, Nathalie; Gaudin, Valérie; Maurin, Yves; Debey, Pascale
2010-01-01
In eukaryotes, the interphase nucleus is organized in morphologically and/or functionally distinct nuclear “compartments”. Numerous studies highlight functional relationships between the spatial organization of the nucleus and gene regulation. This raises the question of whether nuclear organization principles exist and, if so, whether they are identical in the animal and plant kingdoms. We addressed this issue through the investigation of the three-dimensional distribution of the centromeres and chromocenters. We investigated five very diverse populations of interphase nuclei at different differentiation stages in their physiological environment, belonging to rabbit embryos at the 8-cell and blastocyst stages, differentiated rabbit mammary epithelial cells during lactation, and differentiated cells of Arabidopsis thaliana plantlets. We developed new tools based on the processing of confocal images and a new statistical approach based on G- and F- distance functions used in spatial statistics. Our original computational scheme takes into account both size and shape variability by comparing, for each nucleus, the observed distribution against a reference distribution estimated by Monte-Carlo sampling over the same nucleus. This implicit normalization allowed similar data processing and extraction of rules in the five differentiated nuclei populations of the three studied biological systems, despite differences in chromosome number, genome organization and heterochromatin content. We showed that centromeres/chromocenters form significantly more regularly spaced patterns than expected under a completely random situation, suggesting that repulsive constraints or spatial inhomogeneities underlay the spatial organization of heterochromatic compartments. The proposed technique should be useful for identifying further spatial features in a wide range of cell types. PMID:20628576
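The spatial-statistics comparison described above can be illustrated with a toy version: compute the nearest-neighbour distance distribution (G-function) of an observed point pattern and compare it with a Monte Carlo envelope of the same statistic for completely random points resampled over the same domain. The sketch below uses a unit square instead of a segmented nucleus, and the clustered "observed" pattern, point counts, and envelope size are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def nearest_neighbour_distances(points):
    """Distance from each point to its nearest neighbour."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

def g_function(points, r):
    """Empirical G-function: fraction of nearest-neighbour distances <= r."""
    nnd = nearest_neighbour_distances(points)
    return np.array([(nnd <= ri).mean() for ri in r])

r = np.linspace(0.0, 0.2, 21)

# "Observed" pattern: a clustered toy pattern standing in for detected chromocenters
parents = rng.random((5, 2))
observed = np.vstack([p + 0.02 * rng.standard_normal((10, 2)) for p in parents]) % 1.0
g_obs = g_function(observed, r)

# Monte Carlo envelope under complete spatial randomness on the same domain
g_ref = np.array([g_function(rng.random((len(observed), 2)), r) for _ in range(199)])
lo, hi = np.percentile(g_ref, [2.5, 97.5], axis=0)

outside = (g_obs < lo) | (g_obs > hi)
print("distances r where G(r) leaves the CSR envelope:", r[outside])
```

In the study the reference distribution is instead resampled within each individual nucleus, which is what makes the comparison robust to nuclear size and shape variability.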
Outlier Responses Reflect Sensitivity to Statistical Structure in the Human Brain
Garrido, Marta I.
2013-01-01
We constantly look for patterns in the environment that allow us to learn its key regularities. These regularities are fundamental in enabling us to make predictions about what is likely to happen next. The physiological study of regularity extraction has focused primarily on repetitive sequence-based rules within the sensory environment, or on stimulus-outcome associations in the context of reward-based decision-making. Here we ask whether we implicitly encode non-sequential stochastic regularities, and detect violations therein. We addressed this question using a novel experimental design and both behavioural and magnetoencephalographic (MEG) metrics associated with responses to pure-tone sounds with frequencies sampled from a Gaussian distribution. We observed that sounds in the tail of the distribution evoked a larger response than those that fell at the centre. This response resembled the mismatch negativity (MMN) evoked by surprising or unlikely events in traditional oddball paradigms. Crucially, responses to physically identical outliers were greater when the distribution was narrower. These results show that humans implicitly keep track of the uncertainty induced by apparently random distributions of sensory events. Source reconstruction suggested that the statistical-context-sensitive responses arose in a temporo-parietal network, areas that have been associated with attention orientation to unexpected events. Our results demonstrate a very early neurophysiological marker of the brain's ability to implicitly encode complex statistical structure in the environment. We suggest that this sensitivity provides a computational basis for our ability to make perceptual inferences in noisy environments and to make decisions in an uncertain world. PMID:23555230
Optical detection of random features for high security applications
NASA Astrophysics Data System (ADS)
Haist, T.; Tiziani, H. J.
1998-02-01
Optical detection of random features, in combination with digital signatures based on public-key codes, for recognizing counterfeit objects is discussed. Objects are protected against counterfeiting without the need for expensive production techniques. Verification is done off-line by optical means without a central authority. The method is applied to protecting banknotes, and experimental results for this application are presented. The method is also applicable to identity verification of a credit- or chip-card holder.
On the generation of log-Lévy distributions and extreme randomness
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2011-10-01
The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.
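The multiplicative-plus-CLT mechanism behind the log-normal case is easy to visualise numerically: multiply many positive i.i.d. factors and check that the logarithm of the product is close to normal. The factor distribution and counts below are arbitrary illustrative choices; the heavy-tailed log-Lévy case would require factors whose logarithms have infinite variance, which is not shown here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

n_factors, n_samples = 200, 50_000
# Positive i.i.d. growth factors with finite log-variance (illustrative choice)
factors = rng.uniform(0.8, 1.25, size=(n_samples, n_factors))
product = factors.prod(axis=1)

log_product = np.log(product)  # sum of i.i.d. log-factors, so the CLT applies
print("skewness of log(product)       :", stats.skew(log_product))      # ~ 0 for normal
print("excess kurtosis of log(product):", stats.kurtosis(log_product))  # ~ 0 for normal

stat, p = stats.normaltest(log_product[:5000])
print(f"D'Agostino normality test p-value: {p:.3f}")
```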
Randomness versus specifics for word-frequency distributions
NASA Astrophysics Data System (ADS)
Yan, Xiaoyong; Minnhagen, Petter
2016-02-01
The text-length-dependence of real word-frequency distributions can be connected to the general properties of a random book. It is pointed out that this finding has strong implications, when deciding between two conceptually different views on word-frequency distributions, i.e. the specific 'Zipf's-view' and the non-specific 'Randomness-view', as is discussed. It is also noticed that the text-length transformation of a random book does have an exact scaling property precisely for the power-law index γ = 1, as opposed to the Zipf's exponent γ = 2 and the implication of this exact scaling property is discussed. However a real text has γ > 1 and as a consequence γ increases when shortening a real text. The connections to the predictions from the RGF (Random Group Formation) and to the infinite length-limit of a meta-book are also discussed. The difference between 'curve-fitting' and 'predicting' word-frequency distributions is stressed. It is pointed out that the question of randomness versus specifics for the distribution of outcomes in case of sufficiently complex systems has a much wider relevance than just the word-frequency example analyzed in the present work.
Randomness determines practical security of BB84 quantum key distribution.
Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu
2015-11-10
Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but the practical quantum key distribution system may be hacked by exploiting imperfect state preparation and measurement. Until now, different attacking schemes have been proposed by utilizing imperfect devices, but a general security analysis model against all of the practical attacking schemes has not been proposed. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack and the weak randomness attack. We prove security of the BB84 protocol under these randomness attacking models, and the results can be applied to guarantee the security of the practical quantum key distribution system.
Randomness determines practical security of BB84 quantum key distribution
Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu
2015-01-01
Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but the practical quantum key distribution system may be hacked by exploiting imperfect state preparation and measurement. Until now, different attacking schemes have been proposed by utilizing imperfect devices, but a general security analysis model against all of the practical attacking schemes has not been proposed. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack and the weak randomness attack. We prove security of the BB84 protocol under these randomness attacking models, and the results can be applied to guarantee the security of the practical quantum key distribution system. PMID:26552359
Randomness determines practical security of BB84 quantum key distribution
NASA Astrophysics Data System (ADS)
Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu
2015-11-01
Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but the practical quantum key distribution system may be hacked by exploiting imperfect state preparation and measurement. Until now, different attacking schemes have been proposed by utilizing imperfect devices, but a general security analysis model against all of the practical attacking schemes has not been proposed. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack and the weak randomness attack. We prove security of the BB84 protocol under these randomness attacking models, and the results can be applied to guarantee the security of the practical quantum key distribution system.
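As background for the protocol discussed above, the sketch below simulates the basic BB84 sifting step and the quantum bit error rate introduced by a simple intercept-resend eavesdropper. It illustrates why the sender's and receiver's basis choices must be unpredictable, but it is only a toy model and does not reproduce the paper's analysis of strong and weak randomness attacks.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)        # 0 = rectilinear, 1 = diagonal

# Intercept-resend eavesdropper: measures in a random basis and resends
eve_bases = rng.integers(0, 2, n)
same_ae = eve_bases == alice_bases
eve_bits = np.where(same_ae, alice_bits, rng.integers(0, 2, n))   # wrong basis -> random outcome

# Bob measures the resent states in his own random basis
bob_bases = rng.integers(0, 2, n)
same_eb = bob_bases == eve_bases
bob_bits = np.where(same_eb, eve_bits, rng.integers(0, 2, n))

# Sifting: keep only positions where Alice's and Bob's bases agree
sift = alice_bases == bob_bases
qber = np.mean(alice_bits[sift] != bob_bits[sift])
print(f"sifted fraction ≈ {sift.mean():.3f} (expect ~0.5)")
print(f"QBER under intercept-resend ≈ {qber:.3f} (expect ~0.25)")
```

If the basis-choice randomness were weak (biased or partially known to the eavesdropper), the induced error rate would drop below 25%, which is the kind of degradation the randomness-attack models quantify.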
Calderone, G.J.; Butler, R.F.
1991-01-01
Random tilting of a single paleomagnetic vector produces a distribution of vectors which is not rotationally symmetric about the original vector and therefore not Fisherian. Monte Carlo simulations were performed on two types of vector distributions: 1) distributions of vectors formed by perturbing a single original vector with a Fisher distribution of bedding poles (each defining a tilt correction) and 2) standard Fisher distributions. These simulations demonstrate that inclinations of vectors drawn from both distributions are biased toward shallow inclinations. The Fisher mean direction of the distribution of vectors formed by perturbing a single vector with random undetected tilts is biased toward shallow inclinations, but this bias is insignificant for angular dispersions of bedding poles less than 20°. -from Authors
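The shallow-inclination bias in the Fisher case can be reproduced with a small Monte Carlo experiment: draw directions from a Fisher distribution about a steeply inclined mean and compare the average of the individual inclinations with the inclination of the mean direction. The concentration parameter and mean inclination below are assumptions for illustration only, and the tilt-perturbed (non-Fisherian) case of the paper is not simulated.

```python
import numpy as np

rng = np.random.default_rng(8)

kappa, inc0 = 20.0, 60.0   # assumed Fisher concentration and true inclination (degrees)
n = 100_000

# Sample Fisher deviates about the z-axis (exact inverse CDF for cos(theta))
u = rng.random(n)
cos_t = np.log(np.exp(-kappa) + u * (np.exp(kappa) - np.exp(-kappa))) / kappa
sin_t = np.sqrt(1.0 - cos_t**2)
phi = rng.uniform(0.0, 2.0 * np.pi, n)
v = np.column_stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

# Rotate the mean direction from the z-axis down to inclination inc0 (rotation about the y-axis)
tilt = np.radians(90.0 - inc0)
R = np.array([[np.cos(tilt), 0.0, np.sin(tilt)],
              [0.0, 1.0, 0.0],
              [-np.sin(tilt), 0.0, np.cos(tilt)]])
v = v @ R.T

inclinations = np.degrees(np.arcsin(v[:, 2]))   # inclination = arcsin of the vertical component
print(f"inclination of the mean direction : {inc0:.1f} deg")
print(f"mean of individual inclinations   : {inclinations.mean():.2f} deg (biased shallow)")
```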
SuperIdentity: Fusion of Identity across Real and Cyber Domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Black, Sue; Creese, Sadie; Guest, Richard
Under both benign and malign circumstances, people now manage a spectrum of identities across both real-world and cyber domains. Our belief, however, is that all these instances ultimately track back for an individual to reflect a single 'SuperIdentity'. This paper outlines the assumptions underpinning the SuperIdentity Project, describing the innovative use of data fusion to incorporate novel real-world and cyber cues into a rich framework appropriate for modern identity. The proposed combinatorial model will support a robust identification or authentication decision, with confidence indexed both by the level of trust in data provenance, and the diagnosticity of the identity factors being used. Additionally, the exploration of correlations between factors may underpin the more intelligent use of identity information so that known information may be used to predict previously hidden information. With modern living supporting the 'distribution of identity' across real and cyber domains, and with criminal elements operating in increasingly sophisticated ways in the hinterland between the two, this approach is suggested as a way forwards, and is discussed in terms of its impact on privacy, security, and the detection of threat.
Human milk is a source of lactic acid bacteria for the infant gut.
Martín, Rocío; Langa, Susana; Reviriego, Carlota; Jimínez, Esther; Marín, María L; Xaus, Jordi; Fernández, Leonides; Rodríguez, Juan M
2003-12-01
To investigate whether human breast milk contains potentially probiotic lactic acid bacteria, and therefore, whether it can be considered a synbiotic food. Study design Lactic acid bacteria were isolated from milk, mammary areola, and breast skin of eight healthy mothers and oral swabs and feces of their respective breast-fed infants. Some isolates (178 from each mother and newborn pair) were randomly selected and submitted to randomly amplified polymorphic DNA (RAPD) polymerase chain reaction analysis, and those that displayed identical RAPD patterns were identified by 16S rDNA sequencing. Within each mother and newborn pair, some rod-shaped lactic acid bacteria isolated from mammary areola, breast milk, and infant oral swabs and feces displayed identical RAPD profiles. All of them, independently from the mother and child pair, were identified as Lactobacillus gasseri. Similarly, among coccoid lactic acid bacteria from these different sources, some shared an identical RAPD pattern and were identified as Enterococcus faecium. In contrast, none of the lactic acid bacteria isolated from breast skin shared RAPD profiles with lactic acid bacteria of the other sources. Breast-feeding can be a significant source of lactic acid bacteria to the infant gut. Lactic acid bacteria present in milk may have an endogenous origin and may not be the result of contamination from the surrounding breast skin.
High-density, microsphere-based fiber optic DNA microarrays.
Epstein, Jason R; Leung, Amy P K; Lee, Kyong Hoon; Walt, David R
2003-05-01
A high-density fiber optic DNA microarray has been developed consisting of oligonucleotide-functionalized, 3.1-microm-diameter microspheres randomly distributed on the etched face of an imaging fiber bundle. The fiber bundles are comprised of 6000-50000 fused optical fibers and each fiber terminates with an etched well. The microwell array is capable of housing complementary-sized microspheres, each containing thousands of copies of a unique oligonucleotide probe sequence. The array fabrication process results in random microsphere placement. Determining the position of microspheres in the random array requires an optical encoding scheme. This array platform provides many advantages over other array formats. The microsphere-stock suspension concentration added to the etched fiber can be controlled to provide inherent sensor redundancy. Examining identical microspheres has a beneficial effect on the signal-to-noise ratio. As other sequences of interest are discovered, new microsphere sensing elements can be added to existing microsphere pools and new arrays can be fabricated incorporating the new sequences without altering the existing detection capabilities. These microarrays contain the smallest feature sizes (3 microm) of any DNA array, allowing interrogation of extremely small sample volumes. Reducing the feature size results in higher local target molecule concentrations, creating rapid and highly sensitive assays. The microsphere array platform is also flexible in its applications; research has included DNA-protein interaction profiles, microbial strain differentiation, and non-labeled target interrogation with molecular beacons. Fiber optic microsphere-based DNA microarrays have a simple fabrication protocol enabling their expansion into other applications, such as single cell-based assays.
Methanol and ethanol conversion into hydrocarbons over H-ZSM-5 catalyst
NASA Astrophysics Data System (ADS)
Hamieh, S.; Canaff, C.; Tayeb, K. Ben; Tarighi, M.; Maury, S.; Vezin, H.; Pouilloux, Y.; Pinard, L.
2015-07-01
Ethanol and methanol are converted over H-ZSM-5 zeolite at 623 K and 3.0 MPa into identical hydrocarbons (paraffins, olefins and aromatics), and moreover with identical selectivities. The distributions of olefins and of paraffins follow the Flory distribution with a growth probability of 0.53. Regardless of the alcohol, the catalyst lifetime and the selectivity to C3+ hydrocarbons are high in spite of an important coke content. The coke, which poisons the Brønsted acid sites without blocking access to them, is composed in part of radical polyalkylaromatics. The addition of hydroquinone, a radical inhibitor, to the feed provokes immediate catalyst deactivation.
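For reference, a Flory (Anderson-Schulz-Flory) distribution with growth probability α assigns a carbon-number-n product a mole fraction proportional to α^(n-1). The snippet below tabulates normalized mole and weight fractions for α = 0.53; whether the paper fits the mole-based or weight-based form is not stated here, so this is a generic illustration of the distribution only.

```python
import numpy as np

alpha = 0.53                 # chain-growth probability quoted in the abstract
n = np.arange(1, 9)          # carbon numbers C1..C8

mole_frac = (1 - alpha) * alpha ** (n - 1)              # Flory mole fractions
weight_frac = n * (1 - alpha) ** 2 * alpha ** (n - 1)   # corresponding weight fractions

for ni, m, w in zip(n, mole_frac, weight_frac):
    print(f"C{ni}: mole fraction {m:.3f}, weight fraction {w:.3f}")
print("partial sums:", mole_frac.sum(), weight_frac.sum())  # both tend to 1 as n -> infinity
```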
NASA Astrophysics Data System (ADS)
Tan, Zhi-Jie; Zou, Xian-Wu; Huang, Sheng-You; Zhang, Wei; Jin, Zhun-Zhi
2002-07-01
We investigate the pattern of particle distribution and its evolution with time in multiparticle systems using the model of random walks with memory enhancement and decay. This model describes some biological intelligent walks. With decrease in the memory decay exponent α, the distribution of particles changes from a random dispersive pattern to a locally dense one, and then returns to the random one. Correspondingly, the fractal dimension Df,p characterizing the distribution of particle positions increases from a low value to a maximum and then decreases to the low one again. This is determined by the degree of overlap of regions consisting of sites with remanent information. The second moment of the density ρ(2) was introduced to investigate the inhomogeneity of the particle distribution. The dependence of ρ(2) on α is similar to that of Df,p on α. ρ(2) increases with time as a power law in the process of adjusting the particle distribution, and then ρ(2) tends to a stable equilibrium value.
The invariant statistical rule of aerosol scattering pulse signal modulated by random noise
NASA Astrophysics Data System (ADS)
Yan, Zhen-gang; Bian, Bao-Min; Yang, Juan; Peng, Gang; Li, Zhen-hua
2010-11-01
A model of random background noise acting on particle signals is established to study the impact of the background noise of the photoelectric sensor in a laser airborne particle counter on the statistical character of the aerosol scattering pulse signals. The results show that the noise broadens the statistical distribution of the particle measurements. Further numerical study shows that the output signal amplitude still follows the same type of distribution when aerosol scattering pulses with a lognormal distribution are modulated by random noise that is also lognormally distributed; that is, the distribution obeys a law of statistical invariance. Based on this model, the background noise of the photoelectric sensor and the counting distributions of the random aerosol scattering pulse signals are obtained and analyzed using a high-speed data acquisition card (PCI-9812). The experimental results and the simulation results are found to be in good agreement.
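One simple way to realize such an invariance is multiplicative modulation: the product of two independent lognormal variables is again lognormal, since the logarithms add and sums of normals are normal. The sketch below checks this numerically for an assumed signal and noise; the parameters are illustrative and unrelated to the PCI-9812 measurements, and the paper's modulation model may differ in detail.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n = 200_000

signal = rng.lognormal(mean=1.0, sigma=0.4, size=n)   # assumed aerosol pulse amplitudes
noise = rng.lognormal(mean=0.0, sigma=0.2, size=n)    # assumed multiplicative background noise
output = signal * noise

# log(output) = log(signal) + log(noise): a sum of normals, hence normal
stat, p = stats.normaltest(np.log(output[:5000]))
print(f"normality of log(output): p = {p:.3f}")
print("log-mean  (expected 1.0)                       :", np.log(output).mean())
print("log-sigma (expected sqrt(0.4^2 + 0.2^2) = 0.447):", np.log(output).std())
```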
Graham, John H; Robb, Daniel T; Poe, Amy R
2012-01-01
Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of lognormal distributions having different variances, may generate a DPLN distribution.
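The model-selection step can be sketched generically: fit several candidate distributions by maximum likelihood and rank them by AIC = 2k - 2 ln L. The snippet below does this with scipy for a few standard candidates on synthetic positive data; the DPLN itself has no ready-made scipy implementation and is not fitted here, and the data are illustrative only, not the yeast knockout measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
data = rng.lognormal(mean=0.0, sigma=0.6, size=5_000)   # synthetic positive "variation" values

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "exponential": stats.expon,
    "normal": stats.norm,
}

for name, dist in candidates.items():
    if name == "normal":
        params = dist.fit(data)                 # all parameters free
    else:
        params = dist.fit(data, floc=0)         # anchor positive-support fits at zero
    loglik = dist.logpdf(data, *params).sum()
    k = len(params) - (0 if name == "normal" else 1)   # fixed loc is not a free parameter
    aic = 2 * k - 2 * loglik                            # Akaike Information Criterion
    print(f"{name:12s} AIC = {aic:,.1f}")
```

The lowest AIC identifies the preferred candidate; in the study this comparison is carried out over seven distributions including the double Pareto-lognormal.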
Link, W.A.; Sauer, J.R.; Niven, D.K.
2006-01-01
Analysis of Christmas Bird Count (CBC) data is complicated by the need to account for variation in effort on counts and to provide summaries over large geographic regions. We describe a hierarchical model for analysis of population change using CBC data that addresses these needs. The effect of effort is modeled parametrically, with parameter values varying among strata as identically distributed random effects. Year and site effects are modeled hierarchically, accommodating large regional variation in number of samples and precision of estimates. The resulting model is complex, but a Bayesian analysis can be conducted using Markov chain Monte Carlo techniques. We analyze CBC data for American Black Ducks (Anas rubripes), a species of considerable management interest that has historically been monitored using winter surveys. Over the interval 1966-2003, Black Duck populations showed distinct regional patterns of population change. The patterns shown by CBC data are similar to those shown by the Midwinter Waterfowl Inventory for the United States.
Fourier-based automatic alignment for improved Visual Cryptography schemes.
Machizaud, Jacques; Chavel, Pierre; Fournel, Thierry
2011-11-07
In Visual Cryptography, several images, called "shadow images", that separately contain no information, are overlapped to reveal a shared secret message. We develop a method to digitally register one printed shadow image acquired by a camera with a purely digital shadow image, stored in memory. Using Fourier techniques derived from Fourier Optics concepts, the idea is to enhance and exploit the quasi periodicity of the shadow images, composed by a random distribution of black and white patterns on a periodic sampling grid. The advantage is to speed up the security control or the access time to the message, in particular in the cases of a small pixel size or of large numbers of pixels. Furthermore, the interest of visual cryptography can be increased by embedding the initial message in two shadow images that do not have identical mathematical supports, making manual registration impractical. Experimental results demonstrate the successful operation of the method, including the possibility to directly project the result onto the printed shadow image.
Screening actuator locations for static shape control
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1990-01-01
Correction of shape distortion due to zero-mean normally distributed errors in structural sizes which are random variables is examined. A bound on the maximum improvement in the expected value of the root-mean-square shape error is obtained. The shape correction associated with the optimal actuators is also characterized. An actuator effectiveness index is developed and shown to be helpful in screening actuator locations in the structure. The results are specialized to a simple form for truss structures composed of nominally identical members. The bound and effectiveness index are tested on a 55-m radiometer antenna truss structure. It is found that previously obtained results for optimum actuators had a performance close to the bound obtained here. Furthermore, the actuators associated with the optimum design are shown to have high effectiveness indices. Since only a small fraction of truss elements tend to have high effectiveness indices, the proposed screening procedure can greatly reduce the number of truss members that need to be considered as actuator sites.
Evaluation of a pulse control law for flexible spacecraft
NASA Technical Reports Server (NTRS)
1985-01-01
The following analytical and experimental studies were conducted: (1) A simple algorithm was developed to suppress the structural vibrations of 3-dimensional distributed parameter systems, subjected to interface motion and/or directly applied forces. The algorithm is designed to cope with structural oscillations superposed on top of rigid-body motion: a situation identical to that encountered by the SCOLE components. A significant feature of the method is that only local measurements of the structural displacements and velocities relative to the moving frame of reference are needed. (2) A numerical simulation study was conducted on a simple linear finite element model of a cantilevered plate which was subjected to test excitations consisting of impulsive base motion and of nonstationary wide-band random excitation applied at its root. In each situation, the aim was to suppress the vibrations of the plate relative to the moving base. (3) A small mechanical model resembling an aircraft wing was designed and fabricated to investigate the control algorithm under realistic laboratory conditions.
NASA Astrophysics Data System (ADS)
Zhao, Qi; Liu, Yunchao; Yuan, Xiao; Chitambar, Eric; Ma, Xiongfeng
2018-02-01
Manipulation and quantification of quantum resources are fundamental problems in quantum physics. In the asymptotic limit, coherence distillation and dilution have been proposed by manipulating infinite identical copies of states. In the nonasymptotic setting, finite data-size effects emerge, and the practically relevant problem of coherence manipulation using finite resources has been left open. This Letter establishes the one-shot theory of coherence dilution, which involves converting maximally coherent states into an arbitrary quantum state using maximally incoherent operations, dephasing-covariant incoherent operations, incoherent operations, or strictly incoherent operations. We introduce several coherence monotones with concrete operational interpretations that estimate the one-shot coherence cost—the minimum amount of maximally coherent states needed for faithful coherence dilution. Furthermore, we derive the asymptotic coherence dilution results with maximally incoherent operations, incoherent operations, and strictly incoherent operations as special cases. Our result can be applied in the analyses of quantum information processing tasks that exploit coherence as resources, such as quantum key distribution and random number generation.
Random-Walk Type Model with Fat Tails for Financial Markets
NASA Astrophysics Data System (ADS)
Matuttis, Hans-Georg
Starting from the random-walk model, practices of financial markets are incorporated into the random walk so that fat-tailed distributions like those in the high-frequency data of the S&P 500 index are reproduced, even though the individual mechanisms are modeled with normally distributed data. The incorporation of local correlation narrows the distribution for "frequent" events, whereas global correlations due to technical analysis lead to fat tails. Delay of market transactions in the trading process shifts the fat-tail probabilities downwards. Such an inclusion of reactions to market fluctuations leads to mini-trends which are distributed with unit variance.
Understanding American Identity: An Introduction
2017-12-01
ERIC Educational Resources Information Center
Varzande, Mohsen
2015-01-01
Today, English education is very important but language learning has long been challenged since learning a second language is not only the mastery of its forms but also a process of identity construction and self-positioning in the second language. A review of recent studies shows that the cultural effects of learning English in the…
Moghimi, Saba; Schudlo, Larissa; Chau, Tom; Guerguerian, Anne-Marie
2015-01-01
Music-induced brain activity modulations in areas involved in emotion regulation may be useful in achieving therapeutic outcomes. Clinical applications of music may involve prolonged or repeated exposures to music. However, the variability of the observed brain activity patterns in repeated exposures to music is not well understood. We hypothesized that multiple exposures to the same music would elicit more consistent activity patterns than exposure to different music. In this study, the temporal and spatial variability of cerebral prefrontal hemodynamic response was investigated across multiple exposures to self-selected musical excerpts in 10 healthy adults. The hemodynamic changes were measured using prefrontal cortex near infrared spectroscopy and represented by instantaneous phase values. Based on spatial and temporal characteristics of these observed hemodynamic changes, we defined a consistency index to represent variability across these domains. The consistency index across repeated exposures to the same piece of music was compared to the consistency index corresponding to prefrontal activity from randomly matched non-identical musical excerpts. Consistency indexes were significantly different for identical versus non-identical musical excerpts when comparing a subset of repetitions. When all four exposures were compared, no significant difference was observed between the consistency indexes of randomly matched non-identical musical excerpts and the consistency index corresponding to repetitions of the same musical excerpts. This observation suggests the existence of only partial consistency between repeated exposures to the same musical excerpt, which may stem from the role of the prefrontal cortex in regulating other cognitive and emotional processes. PMID:25837268
Nordhall, Ola; Knez, Igor
2018-01-01
The aim of this study was to investigate the role of personal and collective work identity (including emotion and cognition components) in predicting work motivation (operationalized as work self-determined motivation) and organizational justice (operationalized as organizational pay justice). Digitized questionnaires were distributed by e-mail to 2905 members (teachers) of a Swedish trade union. A total of 768 individuals answered the questionnaire and thereby participated in this study. Personal, compared to collective, work identity was shown to associate positively with self-determined motivation, accounted for by the emotion component of personal work identity. Collective, compared to personal, work identity was reported to associate positively with organizational pay justice, accounted for by the cognition component of collective work identity. All this suggests that both work-related motivation and organizational justice might be, to some extent, accounted for by the psychological mechanisms of work identity and that, as predicted, different types of work identity play different significant roles in predicting motivation and justice at work. More precisely, the emotion component of work identity was more pronounced in personal work-bonding relationships, whereas the cognitive component of work identity was more pronounced in collective work-bonding relationships. PMID:29379454
Joyce, Carmel; Stevenson, Clifford; Muldoon, Orla
2013-09-01
Two complementary explanations have been offered by social psychologists to account for the universal hold of national identity, first that national identity is ideologically assumed, as it forms the 'banal' background of everyday life, and second that national identity is 'hotly' constructed and contested in political and everyday settings to great effect. However, 'banal' and 'hot' aspects of national identity have been found to be distributed unevenly across national and subnational groups and banality itself can be strategically used to distinguish between different groups. The present paper develops these ideas by examining possible reasons for these different modes and strategies of identity expression. Drawing upon intergroup theories of minority and majority relations, we examine how a group who see themselves unequivocally as a minority, Irish Travellers, talk about their national identity in comparison to an age and gender-matched sample of Irish students. We find that Travellers proactively display and claim 'hot' national identity in order to establish their Irishness. Irish students 'do banality', police the boundaries and reputation of Irishness, and actively reject and disparage proactive displays of Irishness. The implications for discursive understandings of identity, the study of intra-national group relations and policies of minority inclusion are discussed. © 2012 The British Psychological Society.
Flow Phenomena in the Very Near Wake of a Flat Plate with a Circular Trailing Edge
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2014-01-01
The very near wake of a flat plate with a circular trailing edge, exhibiting pronounced shedding of wake vortices, is investigated with data from a direct numerical simulation. The separating boundary layers are turbulent and statistically identical, thus resulting in a wake that is symmetric in the mean. The focus here is on the instability of the detached shear layers, the evolution of rib-vortex-induced localized regions of reverse flow that detach from the main body of reverse flow in the trailing edge region and convect downstream, and phase-averaged velocity statistics in the very near wake. The detached shear layers are found to exhibit unstable behavior intermittently, including the development of shear layer vortices as in earlier cylinder flow investigations with laminar separating boundary layers. Only a small fraction of the separated turbulent boundary layers undergo this instability, and form the initial shed vortices. Pressure spectra within the shear layers show a broadband peak at a multiple of the shedding frequency. Phase-averaged intensity and shear stress distributions of the randomly fluctuating component of velocity are compared with those obtained in the near wake. The distributions of the production terms in the transport equations for the turbulent stresses are also provided.
Distribution of shortest cycle lengths in random networks
NASA Astrophysics Data System (ADS)
Bonneau, Haggai; Hassid, Aviv; Biham, Ofer; Kühn, Reimer; Katzav, Eytan
2017-12-01
We present analytical results for the distribution of shortest cycle lengths (DSCL) in random networks. The approach is based on the relation between the DSCL and the distribution of shortest path lengths (DSPL). We apply this approach to configuration model networks, for which analytical results for the DSPL were obtained before. We first calculate the fraction of nodes in the network which reside on at least one cycle. Conditioning on being on a cycle, we provide the DSCL over ensembles of configuration model networks with degree distributions which follow a Poisson distribution (Erdős-Rényi network), degenerate distribution (random regular graph), and a power-law distribution (scale-free network). The mean and variance of the DSCL are calculated. The analytical results are found to be in very good agreement with the results of computer simulations.
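A brute-force empirical counterpart to the analytical DSCL can be obtained by simulation, assuming an Erdős-Rényi ensemble: for each node, the shortest cycle through it is found by removing one incident edge at a time and measuring the shortest remaining path back. The network size, mean degree and the helper name shortest_cycle_length are illustrative; the paper's own results are analytical, not simulation-based.

```python
# Hedged sketch: empirical distribution of shortest cycle lengths (DSCL) in an
# Erdos-Renyi graph, estimated by direct search. Parameters are illustrative.
import networkx as nx
from collections import Counter

def shortest_cycle_length(G, v):
    """Length of the shortest cycle through node v, or None if v lies on no cycle."""
    best = None
    for u in list(G.neighbors(v)):
        G.remove_edge(v, u)
        if nx.has_path(G, v, u):
            length = nx.shortest_path_length(G, v, u) + 1
            best = length if best is None else min(best, length)
        G.add_edge(v, u)
    return best

N, c = 1000, 3.0                                   # nodes and mean degree
G = nx.gnp_random_graph(N, c / (N - 1), seed=1)
lengths = [L for v in G if (L := shortest_cycle_length(G, v)) is not None]
dscl = Counter(lengths)
print("fraction of nodes on at least one cycle:", len(lengths) / N)
for ell in sorted(dscl):
    print(ell, dscl[ell] / len(lengths))           # conditional DSCL, as in the paper
```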
Martin, F; Reinbolt, J; Dirheimer, G; Gangloff, J; Eriani, G
1996-01-01
Elements that confer identity to a tRNA in the cellular environment, where all aminoacyl-tRNA synthetases are competing for substrates, may be delineated by in vivo experiments using suppressor tRNAs. Here we describe the selection of active Escherichia coli tRNAAsp amber mutants and analyze their identity. Starting from a library containing randomly mutated tRNA(CUA)Asp genes, we isolated four amber suppressors presenting either lysine, alanine, or glutamine activity. Two of them, presenting mainly alanine or lysine activity, were further submitted to a second round of mutagenesis selection in order to improve their efficiency of suppression. Eleven suppressors were isolated, each containing two or three mutations. Ten presented identities of the two parental mutants, whereas one had switched from lysine to arginine identity. Analysis of the different mutants revealed (or confirmed for some nucleotides) their role as positive and/or negative determinants in AlaRS, LysRS, and ArgRS recognition. More generally, it appears that tRNAAsp presents identity characteristics closely related to those of tRNALys, as well as a structural basis for acquiring alanine or arginine identity upon moderate mutational changes; these consist of addition or suppression of the corresponding positive or negative determinants, as well as tertiary interactions. Failure to isolate aspartic acid-inserting suppressors is probably due to elimination of the important G34 identity element and its replacement by an antideterminant when changing the anticodon of the tRNAAsp to the CUA triplet. PMID:8809018
Measurement of sexual identity in surveys: implications for substance abuse research.
McCabe, Sean Esteban; Hughes, Tonda L; Bostwick, Wendy; Morales, Michele; Boyd, Carol J
2012-06-01
Researchers are increasingly recognizing the need to include measures of sexual orientation in health studies. However, relatively little attention has been paid to how sexual identity, the cognitive aspect of sexual orientation, is defined and measured. Our study examined the impact of using two separate sexual identity question formats: a three-category question (response options included heterosexual, bisexual, or lesbian/gay), and a similar question with five response options (only lesbian/gay, mostly lesbian/gay, bisexual, mostly heterosexual, only heterosexual). A large probability-based sample of undergraduate university students was surveyed and a randomly selected subsample of participants was asked both sexual identity questions. Approximately one-third of students who identified as bisexual based on the three-category sexual identity measure chose "mostly heterosexual" or "mostly lesbian/gay" on the five-category measure. In addition to comparing sample proportions of lesbian/gay, bisexual, or heterosexual participants based on the two question formats, rates of alcohol and other drug use were also examined among the participants. Substance use outcomes among the sexual minority subgroups differed based on the sexual identity question format used: bisexual participants showed greater risk of substance use in analyses using the three-category measure whereas "mostly heterosexual" participants were at greater risk when data were analyzed using the five-category measure. Study results have important implications for the study of sexual identity, as well as whether and how to recode responses to questions related to sexual identity.
Reliability generalization of the Multigroup Ethnic Identity Measure-Revised (MEIM-R).
Herrington, Hayley M; Smith, Timothy B; Feinauer, Erika; Griner, Derek
2016-10-01
[Correction Notice: An Erratum for this article was reported in Vol 63(5) of Journal of Counseling Psychology (see record 2016-33161-001). The name of author Erika Feinauer was misspelled as Erika Feinhauer. All versions of this article have been corrected.] Individuals' strength of ethnic identity has been linked with multiple positive indicators, including academic achievement and overall psychological well-being. The measure researchers use most often to assess ethnic identity, the Multigroup Ethnic Identity Measure (MEIM), underwent substantial revision in 2007. To inform scholars investigating ethnic identity, we performed a reliability generalization analysis on data from the revised version (MEIM-R) and compared it with data from the original MEIM. Random-effects weighted models evaluated internal consistency coefficients (Cronbach's alpha). Reliability coefficients for the MEIM-R averaged α = .88 across 37 samples, a statistically significant increase over the average of α = .84 for the MEIM across 75 studies. Reliability coefficients for the MEIM-R did not differ across study and participant characteristics such as sample gender and ethnic composition. However, consistently lower reliability coefficients averaging α = .81 were found among participants with low levels of education, suggesting that greater attention to data reliability is warranted when evaluating the ethnic identity of individuals such as middle-school students. Future research will be needed to ascertain whether data with other measures of aspects of personal identity (e.g., racial identity, gender identity) also differ as a function of participant level of education and associated cognitive or maturation processes. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Perceiving and Confronting Sexism: The Causal Role of Gender Identity Salience
Wang, Katie; Dovidio, John F.
2017-01-01
Although many researchers have explored the relations among gender identification, discriminatory attributions, and intentions to challenge discrimination, few have examined the causal impact of gender identity salience on women’s actual responses to a sexist encounter. In the current study, we addressed this question by experimentally manipulating the salience of gender identity and assessing its impact on women’s decision to confront a sexist comment in a simulated online interaction. Female participants (N = 114) were randomly assigned to complete a short measure of either personal or collective self-esteem, which was designed to increase the salience of personal versus gender identity. They were then given the opportunity to confront a male interaction partner who expressed sexist views. Compared to those who were primed to focus on their personal identity, participants who were primed to focus on their gender identity perceived the interaction partner’s remarks as more sexist and were more likely to engage in confrontation. By highlighting the powerful role of subtle contextual cues in shaping women’s perceptions of, and responses to, sexism, our findings have important implications for the understanding of gender identity salience as an antecedent of prejudice confrontation. Online slides for instructors who want to use this article for teaching are available on PWQ’s website at http://journals.sagepub.com/page/pwq/suppl/index. PMID:29051685
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2008-05-01
Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations, which we coin 'Paretian Poisson processes'. This class is elemental in statistical physics—connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; resilience to random perturbations.
2016-10-01
Reports an error in "Reliability Generalization of the Multigroup Ethnic Identity Measure-Revised (MEIM-R)" by Hayley M. Herrington, Timothy B. Smith, Erika Feinauer and Derek Griner ( Journal of Counseling Psychology , Advanced Online Publication, Mar 17, 2016, np). The name of author Erika Feinauer was misspelled as Erika Feinhauer. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-13160-001.) Individuals' strength of ethnic identity has been linked with multiple positive indicators, including academic achievement and overall psychological well-being. The measure researchers use most often to assess ethnic identity, the Multigroup Ethnic Identity Measure (MEIM), underwent substantial revision in 2007. To inform scholars investigating ethnic identity, we performed a reliability generalization analysis on data from the revised version (MEIM-R) and compared it with data from the original MEIM. Random-effects weighted models evaluated internal consistency coefficients (Cronbach's alpha). Reliability coefficients for the MEIM-R averaged α = .88 across 37 samples, a statistically significant increase over the average of α = .84 for the MEIM across 75 studies. Reliability coefficients for the MEIM-R did not differ across study and participant characteristics such as sample gender and ethnic composition. However, consistently lower reliability coefficients averaging α = .81 were found among participants with low levels of education, suggesting that greater attention to data reliability is warranted when evaluating the ethnic identity of individuals such as middle-school students. Future research will be needed to ascertain whether data with other measures of aspects of personal identity (e.g., racial identity, gender identity) also differ as a function of participant level of education and associated cognitive or maturation processes. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Kriener, Birgit; Helias, Moritz; Rotter, Stefan; Diesmann, Markus; Einevoll, Gaute T.
2014-01-01
Pattern formation, i.e., the generation of an inhomogeneous spatial activity distribution in a dynamical system with translation invariant structure, is a well-studied phenomenon in neuronal network dynamics, specifically in neural field models. These are population models to describe the spatio-temporal dynamics of large groups of neurons in terms of macroscopic variables such as population firing rates. Though neural field models are often deduced from and equipped with biophysically meaningful properties, a direct mapping to simulations of individual spiking neuron populations is rarely considered. Neurons have a distinct identity defined by their action on their postsynaptic targets. In its simplest form they act either excitatorily or inhibitorily. When the distribution of neuron identities is assumed to be periodic, pattern formation can be observed, given the coupling strength is supracritical, i.e., larger than a critical weight. We find that this critical weight is strongly dependent on the characteristics of the neuronal input, i.e., depends on whether neurons are mean- or fluctuation driven, and different limits in linearizing the full non-linear system apply in order to assess stability. In particular, if neurons are mean-driven, the linearization has a very simple form and becomes independent of both the fixed point firing rate and the variance of the input current, while in the very strongly fluctuation-driven regime the fixed point rate, as well as the input mean and variance are important parameters in the determination of the critical weight. We demonstrate that interestingly even in “intermediate” regimes, when the system is technically fluctuation-driven, the simple linearization neglecting the variance of the input can yield the better prediction of the critical coupling strength. We moreover analyze the effects of structural randomness by rewiring individual synapses or redistributing weights, as well as coarse-graining on the formation of inhomogeneous activity patterns. PMID:24501591
Hybrid computer technique yields random signal probability distributions
NASA Technical Reports Server (NTRS)
Cameron, W. D.
1965-01-01
Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.
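A purely digital sketch of the quantity the hybrid computer produced, assuming a band-limited Gaussian test signal: histograms of the sampled signal give the instantaneous-amplitude distribution, and histograms of block maxima give a peak-amplitude distribution. The signal model, block length and bin counts are illustrative assumptions.

```python
# Hedged sketch: estimating instantaneous- and peak-amplitude distributions of
# a random signal from samples, a digital analogue of the hybrid-computer task.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)
x = np.convolve(x, np.ones(20) / 20, mode="same")    # crude low-pass, mimicking band limiting

# Instantaneous-amplitude distribution: histogram of all samples.
inst_pdf, inst_edges = np.histogram(x, bins=60, density=True)

# Peak-amplitude distribution: histogram of per-block maxima.
block = 500
peaks = x[: len(x) // block * block].reshape(-1, block).max(axis=1)
peak_pdf, peak_edges = np.histogram(peaks, bins=40, density=True)

print("instantaneous amplitude: mean %.3f, std %.3f" % (x.mean(), x.std()))
print("peak amplitude (block maxima): mean %.3f, std %.3f" % (peaks.mean(), peaks.std()))
```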
Origin and identity of Fejervarya (Anura: Dicroglossidae) on Guam
Wostl, Elijah; Smith, Eric N.; Reed, Robert
2016-01-01
We used morphological and molecular data to infer the identity and origin of frogs in the genus Fejervarya that have been introduced to the island of Guam. Mensural and meristic data were collected from 96 specimens from throughout their range on the island and a principal component analysis was used to investigate the distribution of these data in morphological space. We also amplified a fragment of the 16S ribosomal ribonucleic acid mitochondrial gene from 27 of these specimens and compared it to 63 published sequences of Fejervarya and the morphologically similar Zakerana. All examined Fejervarya from Guam are morphologically indistinguishable and share an identical haplotype. The molecular data identify them as Fejervarya cancrivora with a haplotype identical to F. cancrivora from Taiwan.
Identity statuses and psychosocial functioning in Turkish youth: a person-centered approach.
Morsunbul, Umit; Crocetti, Elisabetta; Cok, Figen; Meeus, Wim
2016-02-01
In the present study, we tested whether the five identity statuses of the original Meeus-Crocetti model could be extracted in a Turkish sample. Their three-factor model of identity was used to examine identity formation. Participants were 1201 (59.6% females) youth aged between 12 and 24 years (mean age = 17.53 years, SD = 3.25). Findings revealed that the five identity statuses extracted in previous studies (Crocetti, Rubini, Luyckx, & Meeus, 2008; Crocetti, Schwartz, Fermani, Klimstra, & Meeus, 2012) also emerged in a sample of Turkish adolescents and emerging adults. Findings indicated that gender and age affected the distribution of the individuals among the five identity statuses. Furthermore, individuals in the five identity statuses represented distinct profiles according to personality and self characteristics, problem behaviors and well-being, and interpersonal and group relationships. Finally, the status × age interactions indicated that the searching moratorium status became more problematic with age. Implications and suggestions for future research are also discussed. Copyright © 2015 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Surface plasmon enhanced cell microscopy with blocked random spatial activation
NASA Astrophysics Data System (ADS)
Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun
2016-03-01
We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using a patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on the nanoislands. Random nanoislands were fabricated in silver by temperature annealing. By analyzing the random near-field distribution, the average size of the localized fields was found to be on the order of 135 nm. The randomly localized near-fields were used to spatially sample F-actin of J774 cells (a mouse macrophage cell line). An image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of the fluorescent molecular distribution. The alignment between the near-field distribution and the raw image was performed using the patterned block. The achieved resolution depends on factors including the size of the localized fields and is estimated to be 100-150 nm.
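One generic way to realize "deconvolution based on linear imaging theory" is Wiener filtering in the Fourier domain; the sketch below is offered only as an illustration under that assumption and is not the authors' estimator. The Gaussian PSF, toy object and regularization constant are invented for the example.

```python
# Hedged sketch: Wiener-style deconvolution for a linear imaging model
# image = PSF * object + noise. PSF, object and nsr are illustrative.
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    """Recover an object estimate from `image` blurred by `psf` (same shape)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener filter, constant noise-to-signal ratio
    return np.real(np.fft.ifft2(W * G))

# Toy usage: blur a sparse binary "sample" with a Gaussian PSF and deconvolve it.
rng = np.random.default_rng(0)
obj = (rng.random((128, 128)) > 0.95).astype(float)
y, x = np.mgrid[-64:64, -64:64]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(np.fft.ifftshift(psf))))
estimate = wiener_deconvolve(blurred, psf)
print("peak error, blurred image:      %.3f" % np.max(np.abs(blurred - obj)))
print("peak error, deconvolved image:  %.3f" % np.max(np.abs(estimate - obj)))
```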
Random walks with random velocities.
Zaburdaev, Vasily; Schmiedeberg, Michael; Stark, Holger
2008-07-01
We consider a random walk model that takes into account the velocity distribution of random walkers. Random motion with alternating velocities is inherent to various physical and biological systems. Moreover, the velocity distribution is often the first characteristic that is experimentally accessible. Here, we derive transport equations describing the dispersal process in the model and solve them analytically. The asymptotic properties of solutions are presented in the form of a phase diagram that shows all possible scaling regimes, including superdiffusive, ballistic, and superballistic motion. The theoretical results of this work are in excellent agreement with accompanying numerical simulations.
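A Monte Carlo sketch of a velocity-jump walk of the kind analyzed in the abstract: each flight lasts an exponentially distributed time and carries a velocity drawn from a prescribed distribution, and the mean squared displacement is then estimated at several times. The Gaussian velocity law and all parameters are illustrative assumptions rather than the specific cases solved in the paper.

```python
# Hedged sketch: 1D random walk with random velocities and exponential flight
# times; the mean squared displacement (MSD) is estimated at several times.
import numpy as np

rng = np.random.default_rng(0)

def simulate(T, mean_tau=1.0, n_walkers=20_000):
    """Positions at time T of walkers with a fresh random velocity each flight."""
    pos = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        tau = rng.exponential(mean_tau, size=n_walkers)
        v = rng.standard_normal(n_walkers)       # velocity distribution (assumed Gaussian)
        dt = np.minimum(tau, T - t)              # truncate the final flight at time T
        pos[active] += (v * dt)[active]
        t[active] += tau[active]
        active = t < T
    return pos

for T in (1.0, 4.0, 16.0):
    print("T = %5.1f   MSD = %8.3f" % (T, np.mean(simulate(T) ** 2)))
```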
Dual representation of item positions in verbal short-term memory: Evidence for two access modes.
Lange, Elke B; Verhaeghen, Paul; Cerella, John
Memory sets of N = 1-5 digits were exposed sequentially from left to right across the screen, followed by N recognition probes. Probes had to be compared to memory list items on identity only (Sternberg task) or conditional on list position. Positions were probed randomly or in left-to-right order. Search functions related probe response times to set size. Random probing led to ramped, "Sternbergian" functions whose intercepts were elevated by the location requirement. Sequential probing led to flat search functions: fast responses unaffected by set size. These results suggested that items in STM could be accessed either by a slow search-on-identity followed by recovery of an associated location tag, or in a single step by following item-to-item links in study order. It is argued that this dual coding of location information occurs spontaneously at study, and that either code can be utilised at retrieval depending on test demands.
Novel approaches to pin cluster synchronization on complex dynamical networks in Lur'e forms
NASA Astrophysics Data System (ADS)
Tang, Ze; Park, Ju H.; Feng, Jianwen
2018-04-01
This paper investigates the cluster synchronization of complex dynamical networks consisting of identical or nonidentical Lur'e systems. Due to the special topology structure of the complex networks and the existence of stochastic perturbations, a kind of randomly occurring pinning controller is designed which not only synchronizes all Lur'e systems in the same cluster but also decreases the negative influence among different clusters. Firstly, based on an extended integral inequality, the convex combination theorem and the S-procedure, the conditions for cluster synchronization of identical Lur'e networks are derived in a convex domain. Secondly, randomly occurring adaptive pinning controllers with two independent Bernoulli stochastic variables are designed, and then sufficient conditions are obtained for cluster synchronization of complex networks consisting of nonidentical Lur'e systems. In addition, suitable control gains for successful cluster synchronization of nonidentical Lur'e networks are acquired by designing some adaptive updating laws. Finally, we present two numerical examples to demonstrate the validity of the control scheme and the theoretical analysis.
Modeling species-abundance relationships in multi-species collections
Peng, S.; Yin, Z.; Ren, H.; Guo, Q.
2003-01-01
The species-abundance relationship is one of the most fundamental aspects of community ecology. Since Motomura first developed the geometric series model to describe features of community structure, ecologists have developed many other models to fit species-abundance data in communities. These models can be classified into empirical and theoretical ones, including (1) statistical models, i.e., the negative binomial distribution (and its extension), log-series distribution (and its extension), geometric distribution, lognormal distribution, and Poisson-lognormal distribution; (2) niche models, i.e., the geometric series, broken stick, overlapping niche, particulate niche, random assortment, dominance pre-emption, dominance decay, random fraction, weighted random fraction, composite niche, and Zipf or Zipf-Mandelbrot models; and (3) dynamic models describing community dynamics and the restrictive function of the environment on the community. These models have different characteristics and fit species-abundance data in various communities or collections. Among them, the log-series distribution, lognormal distribution, geometric series, and broken stick model have been most widely used.
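As a concrete illustration of two of the niche models named above, the sketch below evaluates the expected rank-abundance curves of the geometric series and the broken stick model. The community size S and the pre-emption fraction k are illustrative assumptions.

```python
# Hedged sketch: expected relative abundances under the geometric series and
# the broken stick model, two classical niche models. S and k are illustrative.
import numpy as np

def geometric_series(S, k=0.4):
    """Expected relative abundance of the i-th ranked species, i = 1..S."""
    ranks = np.arange(1, S + 1)
    p = k * (1 - k) ** (ranks - 1)
    return p / p.sum()                      # normalize so the S abundances sum to 1

def broken_stick(S):
    """Expected relative abundance of the i-th ranked species under the broken stick."""
    ranks = np.arange(1, S + 1)
    return np.array([(1.0 / S) * np.sum(1.0 / np.arange(i, S + 1)) for i in ranks])

S = 10
print("rank  geometric  broken stick")
for i, (g, b) in enumerate(zip(geometric_series(S), broken_stick(S)), start=1):
    print("%4d   %8.4f     %8.4f" % (i, g, b))
```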
Dietz, Dennis C.
2014-01-01
A cogent method is presented for computing the expected cost of an appointment schedule where customers are statistically identical, the service time distribution has known mean and variance, and customer no-shows occur with time-dependent probability. The approach is computationally efficient and can be easily implemented to evaluate candidate schedules within a schedule optimization algorithm. PMID:24605070
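A brute-force Monte Carlo counterpart to the analytical method described above can serve as a sanity check: simulate the clinic day many times with gamma service times matched to the given mean and variance and Bernoulli no-shows, then average the waiting and idle costs. The schedule, show probabilities, cost weights and the gamma service-time model below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: Monte Carlo estimate of the expected cost of an appointment
# schedule with time-dependent no-show probabilities. Parameters illustrative.
import numpy as np

rng = np.random.default_rng(0)

def expected_cost(times, show_prob, mean=15.0, var=60.0,
                  wait_cost=1.0, idle_cost=0.5, n_rep=20_000):
    shape, scale = mean**2 / var, var / mean          # gamma matched to (mean, variance)
    total = 0.0
    for _ in range(n_rep):
        free_at, cost = 0.0, 0.0
        for t, p in zip(times, show_prob):
            if rng.random() > p:                      # no-show: the slot may leave the server idle
                continue
            start = max(t, free_at)
            cost += wait_cost * (start - t) + idle_cost * max(0.0, t - free_at)
            free_at = start + rng.gamma(shape, scale)
        total += cost
    return total / n_rep

times = [0, 15, 30, 45, 60]                           # scheduled appointment times (minutes)
show_prob = [0.95, 0.9, 0.85, 0.8, 0.75]              # time-dependent show probabilities
print("estimated expected cost: %.2f" % expected_cost(times, show_prob))
```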
Does prism width from the shell prismatic layer have a random distribution?
NASA Astrophysics Data System (ADS)
Vancolen, Séverine; Verrecchia, Eric
2008-10-01
A study of the distribution of the prism width inside the prismatic layer of Unio tumidus (Philipsson 1788, Diss Hist-Nat, Berling, Lundæ) from Lake Neuchâtel, Switzerland, has been conducted in order to determine whether or not this distribution is random. Measurements of 954 to 1,343 prism widths (depending on shell sample) have been made using a scanning electron microscope in backscattered electron mode. A white noise test has been applied to the distribution of prism sizes (i.e. width). It shows that there is no temporal cycle that could potentially influence their formation and growth. These results suggest that prism widths are randomly distributed, and related neither to external rings nor to environmental constraints.
Super-resolving random-Gaussian apodized photon sieve.
Sabatyan, Arash; Roshaninejad, Parisa
2012-09-10
A novel apodized photon sieve is presented in which a random dense Gaussian distribution is implemented to modulate the pinhole density in each zone. The randomness of the dense Gaussian distribution causes intrazone discontinuities. Also, the dense Gaussian distribution generates a substantial number of pinholes, creating a large degree of overlap between the holes in a few innermost zones of the photon sieve; thereby, clear zones are formed. The role of the discontinuities in the focusing properties of the photon sieve is examined as well. Analysis shows that the secondary maxima are evidently suppressed, the transmission increases enormously, and the width of the central maximum is approximately unchanged in comparison to the dense Gaussian distribution. Theoretical results have been completely verified by experiment.
Spatial Distribution of Phase Singularities in Optical Random Vector Waves.
De Angelis, L; Alpeggiani, F; Di Falco, A; Kuipers, L
2016-08-26
Phase singularities are dislocations widely studied in optical fields as well as in other areas of physics. With experiment and theory we show that the vectorial nature of light affects the spatial distribution of phase singularities in random light fields. While in scalar random waves phase singularities exhibit spatial distributions reminiscent of particles in isotropic liquids, in vector fields their distribution for the different vector components becomes anisotropic due to the direct relation between propagation and field direction. By incorporating this relation in the theory for scalar fields by Berry and Dennis [Proc. R. Soc. A 456, 2059 (2000)], we quantitatively describe our experiments.
Oral and maxillofacial surgery - a case of mistaken identity?
van Gijn, D R
2011-01-08
There are international grumbles from those perturbed by an impending identity crisis within oral and maxillofacial surgery (OMFS). This unrest is further compounded by scattered suggestions that a name change may prove beneficial in raising the profile of OMFS. The purpose of this paper is to consider novel methods of increasing awareness of the specialty amongst the public and primary and secondary care colleagues by collecting a consensus of thoughts and opinions regarding the specialty's identity and the appropriate and holistic nomenclature of OMFS. Approximately 300 eight-point questionnaires were distributed internationally, with a response rate, via both email and post, of approximately 25% (72). Thirty-two percent of respondents considered there to be an identity crisis within OMFS, although just 18% felt that a specialty name change would be beneficial. The results suggest that the problem with identity relates more to an incapacity to convey the message of OMFS than to nomenclature.
A Photo Elicitation Study on Chronically Ill Adolescents’ Identity Constructions During Transition
Hanghøj, Signe; Boisen, Kirsten A.; Schmiegelow, Kjeld; Hølge-Hazelton, Bibi
2016-01-01
Adolescence is an important phase of life with increasing independence and identity development, and a vulnerable period of life for chronically ill adolescents, with a high occurrence of insufficient treatment adherence. We conducted four photo elicitation focus group interviews with 14 adolescents (12-20 years) with juvenile idiopathic arthritis to investigate identity constructions during transition. Using a discourse analysis approach, six identity types were identified, distributed across normal and marginal identities, which were lived either at home (the home arena) or outside the home with peers (the out arena). Most participants positioned themselves as normal in the out arena and as ill in the home arena. Few participants positioned themselves as ill in an out arena, and they described how peers perceived this as a marginal and skewed behavior. This study contributes to a better understanding of why it can be extremely difficult to live with a chronic illness during adolescence. PMID:28462329
Random distributed feedback fiber laser at 2.1 μm.
Jin, Xiaoxi; Lou, Zhaokai; Zhang, Hanwei; Xu, Jiangming; Zhou, Pu; Liu, Zejin
2016-11-01
We demonstrate a random distributed feedback fiber laser at 2.1 μm. A high-power pulsed Tm-doped fiber laser operating at 1.94 μm with a temporal duty ratio of 30% was employed as the pump laser to increase the equivalent incident pump power. A piece of 150 m highly GeO2-doped silica fiber that provides strong Raman gain and random distributed feedback served as the gain medium. The maximum output power reached 0.5 W with an optical efficiency of 9%, which could be further improved with more pump power and an optimized fiber length. To the best of our knowledge, this is the first demonstration of a random distributed feedback fiber laser in the 2 μm band based on Raman gain.
Effect of heterogeneous investments on the evolution of cooperation in spatial public goods game.
Huang, Keke; Wang, Tao; Cheng, Yuan; Zheng, Xiaoping
2015-01-01
Understanding the emergence of cooperation in the spatial public goods game remains a grand challenge across disciplines. In most previous studies, it is assumed that the investments of all the cooperators are identical, and often equal to 1. However, players are diverse and heterogeneous when choosing actions in the rapidly developing modern society, and researchers have recently shown more interest in the heterogeneity of players. To model heterogeneous players without loss of generality, it is assumed in this work that the investment of a cooperator is a random variable with a uniform distribution whose mean value is equal to 1. The results of extensive numerical simulations convincingly indicate that heterogeneous investments can promote cooperation. Specifically, a large value of the variance of the random variable can effectively decrease the two critical values for the result of behavioral evolution. Moreover, the larger the variance is, the better the promotion effect will be. In addition, this article discusses the impact of heterogeneous investments when the coevolution of both strategy and investment is taken into account. Comparing the promotion effect of the coevolution of strategy and investment with that of strategy imitation only, we conclude that the coevolution of strategy and investment decreases the asymptotic fraction of cooperators by weakening the heterogeneity of investments, which further demonstrates that heterogeneous investments can promote cooperation in the spatial public goods game.
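A minimal sketch of the investment model described above: cooperator investments are drawn from a uniform distribution with mean 1 and a chosen variance, and one round of public goods payoffs is computed for a single group. The group size, synergy factor and variance are illustrative assumptions; the full spatial evolutionary dynamics are not reproduced here.

```python
# Hedged sketch: heterogeneous cooperator investments (uniform, mean 1) and
# one round of public goods payoffs in a single group. Parameters illustrative.
import numpy as np

rng = np.random.default_rng(0)

def uniform_mean_one(var, size):
    """Uniform investments on [1 - a, 1 + a] with mean 1 and variance var = a**2 / 3."""
    a = np.sqrt(3.0 * var)
    return rng.uniform(1.0 - a, 1.0 + a, size=size)

def group_payoffs(is_cooperator, investments, r=3.5):
    """Public goods payoffs: the pooled, multiplied pot is shared equally."""
    pot = r * np.sum(investments * is_cooperator)
    share = pot / len(is_cooperator)
    return share - investments * is_cooperator    # cooperators pay their own investment

is_coop = np.array([1, 1, 0, 1, 0])               # strategies in a group of 5
inv = uniform_mean_one(var=0.2, size=5)
print("investments:", np.round(inv, 3))
print("payoffs:    ", np.round(group_payoffs(is_coop, inv), 3))
```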
2014-01-01
Aim. The purpose of this study is to evaluate the impact, among nurses in hospital settings, of a questionnaire-based implementation intentions intervention on notification of potential ocular tissue donors to donation stakeholders. Methods. This randomized intervention was clustered at the level of hospital departments with two study arms: questionnaire-based implementation intentions intervention and control. In the intervention group, nurses were asked to plan specific actions if faced with a number of barriers when reporting potential ocular donors. The primary outcome was the potential ocular tissue donors' notification rate before and after the intervention. Analysis was based on a generalized linear model with an identity link and a binomial distribution. Results. We compared outcomes in 26 departments from 5 hospitals, 13 departments per condition. The implementation intentions intervention did not significantly increase the notification rate of ocular tissue donors (intervention: 23.1% versus control: 21.1%; χ² = 1.14, df = 2; P = 0.56). Conclusion. A single and brief implementation intentions intervention among nurses did not modify the notification rate of potential ocular tissue donors to donation stakeholders. Low exposure to the intervention was a major challenge in this study. Further studies should carefully consider a multicomponent intervention to increase exposure to this type of intervention. PMID:25132990
NASA Astrophysics Data System (ADS)
Gatto, Riccardo
2017-12-01
This article considers the random walk over ℝ^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
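The saddlepoint approximations in the paper are checked against Monte Carlo simulation; the sketch below reproduces only the simulation side, for p = 3 with exponentially distributed step lengths and a Poisson distributed number of steps. Sample sizes and parameter values are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo estimate of the distance-to-origin distribution
# for a 3D random walk with uniform step directions, exponential step lengths
# and a Poisson number of steps. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def final_distances(n_walks=100_000, poisson_mean=10.0, step_mean=1.0):
    dist = np.empty(n_walks)
    for i in range(n_walks):
        n = rng.poisson(poisson_mean)
        if n == 0:
            dist[i] = 0.0
            continue
        lengths = rng.exponential(step_mean, size=n)
        dirs = rng.standard_normal((n, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # uniform directions on the sphere
        dist[i] = np.linalg.norm((lengths[:, None] * dirs).sum(axis=0))
    return dist

d = final_distances()
print("mean distance to the origin: %.3f" % d.mean())
hist, edges = np.histogram(d, bins=50, density=True)          # empirical density of the distance
```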
Stochastic switching of TiO2-based memristive devices with identical initial memory states
2014-01-01
In this work, we show that identical TiO2-based memristive devices that possess the same initial resistive states are only phenomenologically similar as their internal structures may vary significantly, which could render quite dissimilar switching dynamics. We experimentally demonstrated that the resistive switching of practical devices with similar initial states could occur at different programming stimuli cycles. We argue that similar memory states can be transcribed via numerous distinct active core states through the dissimilar reduced TiO2-x filamentary distributions. Our hypothesis was finally verified via simulated results of the memory state evolution, by taking into account dissimilar initial filamentary distribution. PMID:24994953
Serial dependence promotes object stability during occlusion
Liberman, Alina; Zhang, Kathy; Whitney, David
2016-01-01
Object identities somehow appear stable and continuous over time despite eye movements, disruptions in visibility, and constantly changing visual input. Recent results have demonstrated that the perception of orientation, numerosity, and facial identity is systematically biased (i.e., pulled) toward visual input from the recent past. The spatial region over which current orientations or face identities are pulled by previous orientations or identities, respectively, is known as the continuity field, which is temporally tuned over the past several seconds (Fischer & Whitney, 2014). This perceptual pull could contribute to the visual stability of objects over short time periods, but does it also address how perceptual stability occurs during visual discontinuities? Here, we tested whether the continuity field helps maintain perceived object identity during occlusion. Specifically, we found that the perception of an oriented Gabor that emerged from behind an occluder was significantly pulled toward the random (and unrelated) orientation of the Gabor that was seen entering the occluder. Importantly, this serial dependence was stronger for predictable, continuously moving trajectories, compared to unpredictable ones or static displacements. This result suggests that our visual system takes advantage of expectations about a stable world, helping to maintain perceived object continuity despite interrupted visibility. PMID:28006066
Nigbur, Dennis; Lyons, Evanthia; Uzzell, David
2010-06-01
In an effort to contribute to greater understanding of norms and identity in the theory of planned behaviour, an extended model was used to predict residential kerbside recycling, with self-identity, personal norms, neighbourhood identification, and injunctive and descriptive social norms as additional predictors. Data from a field study (N=527) using questionnaire measures of predictor variables and an observational measure of recycling behaviour supported the theory. Intentions predicted behaviour, while attitudes, perceived control, and the personal norm predicted intention to recycle. The interaction between neighbourhood identification and injunctive social norms in turn predicted personal norms. Self-identity and the descriptive social norm significantly added to the original theory in predicting intentions as well as behaviour directly. A replication survey on the self-reported recycling behaviours of a random residential sample (N=264) supported the model obtained previously. These findings offer a useful extension of the theory of planned behaviour and some practicable suggestions for pro-recycling interventions. It may be productive to appeal to self-identity by making people feel like recyclers, and to stimulate both injunctive and descriptive norms in the neighbourhood.
Burt, Richard D; Thiede, Hanne
2014-11-01
Respondent-driven sampling (RDS) is a form of peer-based study recruitment and analysis that incorporates features designed to limit and adjust for biases in traditional snowball sampling. It is being widely used in studies of hidden populations. We report an empirical evaluation of RDS's consistency and variability, comparing groups recruited contemporaneously, by identical methods and using identical survey instruments. We randomized recruitment chains from the RDS-based 2012 National HIV Behavioral Surveillance survey of injection drug users in the Seattle area into two groups and compared them in terms of sociodemographic characteristics, drug-associated risk behaviors, sexual risk behaviors, human immunodeficiency virus (HIV) status and HIV testing frequency. The two groups differed in five of the 18 variables examined (P ≤ .001): race (e.g., 60% white vs. 47%), gender (52% male vs. 67%), area of residence (32% downtown Seattle vs. 44%), an HIV test in the previous 12 months (51% vs. 38%). The difference in serologic HIV status was particularly pronounced (4% positive vs. 18%). In four further randomizations, differences in one to five variables attained this level of significance, although the specific variables involved differed. We found some material differences between the randomized groups. Although the variability of the present study was less than has been reported in serial RDS surveys, these findings indicate caution in the interpretation of RDS results. Copyright © 2014 Elsevier Inc. All rights reserved.
Continuous Time Random Walks with memory and financial distributions
NASA Astrophysics Data System (ADS)
Montero, Miquel; Masoliver, Jaume
2017-11-01
We study financial distributions from the perspective of Continuous Time Random Walks with memory. We review some of our previous developments and apply them to financial problems. We also present some new models with memory that can be useful in characterizing tendency effects which are inherent in most markets. We also briefly study the effect on return distributions of fractional behaviors in the distribution of pausing times between successive transactions.
Psychosocial Antecedents of Unwed Motherhood among Indigent Adolescents.
ERIC Educational Resources Information Center
Kaplan, Howard B.; And Others
1979-01-01
Impoverished pregnant teenagers were distinguished from matched or random controls by an inability to cope; by identification of family, school, and peer relationships as self-devaluing experiences; by an attraction to deviant identity as a source of self-esteem; and by adoption of deviant behaviors. (Author/CP)
Dismantling Stereotypes about Latinos in STEM
ERIC Educational Resources Information Center
Hernandez, Diley; Rana, Shaheen; Rao, Analia; Usselman, Marion
2017-01-01
The present study compared the effectiveness of a self-affirmation and a role model guest lecture intervention on reducing students' perceptions of science-related social identity threat. Participants included 67 Latino high school students enrolled in a college preparation program. Students were randomly assigned either to a self-affirmation…
Harding, Stephen E.; Schuck, Peter; Abdelhameed, Ali Saber; Adams, Gary; Kök, M. Samil; Morris, Gordon A.
2011-01-01
In 1962 H. Fujita (Mathematical Theory of Sedimentation Analysis, Academic Press, New York, pp. 182–192) examined the possibility of transforming a quasi-continuous distribution g(s) of sedimentation coefficient s into a distribution f(M) of molecular weight M for linear polymers using the relation f(M) = g(s)·(ds/dM) and showed that this could be done if information about the relation between s and M is available from other sources. Fujita provided the transformation based on the scaling relation s = κM^0.5, where κ is taken as a constant for that particular polymer and the exponent 0.5 essentially corresponds to a randomly coiled polymer under ideal conditions. This method was successfully applied to mucus glycoproteins (S.E. Harding, Adv. Carbohyd. Chem. Biochem. 47 (1989), 345–381). We now describe an extension of the method to general conformation types via the scaling relation s = κM^b, where b = 0.4–0.5 for a coil, ~0.15–0.2 for a rod and ~0.67 for a sphere. We give examples of distributions f(M) vs M obtained for polysaccharides from SEDFIT derived least squares g(s) vs s profiles (P. Schuck, Biophys. J. 78 (2000) 1606–1619) and the analytical derivative for ds/dM performed with Microcal ORIGIN. We also describe a more direct route from a direct numerical solution of the integral equation describing the molecular weight distribution problem. Both routes give identical distributions although the latter offers the advantage of being incorporated completely within SEDFIT. The method currently assumes that solutions behave ideally: sedimentation velocity has the major advantage over sedimentation equilibrium in that concentrations less than 0.2 mg/ml can be employed, and for many systems non-ideality effects can be reasonably ignored. For large, non-globular polymer systems, diffusive contributions are also likely to be small. PMID:21276851
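A minimal numerical sketch of the transformation stated above, assuming a toy Gaussian g(s) and illustrative values of κ and b: f(M) is obtained by evaluating g at s = κM^b and multiplying by the analytical derivative ds/dM = κbM^(b-1).

```python
# Hedged sketch: Fujita-type transformation of a sedimentation-coefficient
# distribution g(s) into a molecular weight distribution f(M) via s = kappa*M**b,
# so that f(M) = g(s(M)) * ds/dM with ds/dM = kappa*b*M**(b-1).
# The Gaussian g(s) and the values of kappa and b are illustrative assumptions.
import numpy as np

def f_of_M(M, g, kappa, b):
    """Molecular weight distribution from a callable g(s) using s = kappa * M**b."""
    s = kappa * M**b
    ds_dM = kappa * b * M**(b - 1.0)
    return g(s) * ds_dM

# Toy g(s): a Gaussian centred on 5 S (Svedbergs) with unit width.
g = lambda s: np.exp(-0.5 * ((s - 5.0) / 1.0) ** 2) / np.sqrt(2 * np.pi)

kappa, b = 0.05, 0.5                       # coil-like exponent, illustrative kappa
M = np.linspace(1e3, 5e4, 200)             # molecular weight grid (g/mol)
fM = f_of_M(M, g, kappa, b)
print("f(M) integrates to %.3f over the grid" % (fM.sum() * (M[1] - M[0])))
```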
Harding, Stephen E; Schuck, Peter; Abdelhameed, Ali Saber; Adams, Gary; Kök, M Samil; Morris, Gordon A
2011-05-01
In 1962 H. Fujita (H. Fujita, Mathematical Theory of Sedimentation Analysis, Academic Press, New York, 1962) examined the possibility of transforming a quasi-continuous distribution g(s) of sedimentation coefficient s into a distribution f(M) of molecular weight M for linear polymers using the relation f(M)=g(s)·(ds/dM) and showed that this could be done if information about the relation between s and M is available from other sources. Fujita provided the transformation based on the scaling relation s=κ_sM^0.5, where κ_s is taken as a constant for that particular polymer and the exponent 0.5 essentially corresponds to a randomly coiled polymer under ideal conditions. This method has been successfully applied to mucus glycoproteins (S.E. Harding, Adv. Carbohyd. Chem. Biochem. 47 (1989) 345-381). We now describe an extension of the method to general conformation types via the scaling relation s=κM^b, where b=0.4-0.5 for a coil, ∼0.15-0.2 for a rod and ∼0.67 for a sphere. We give examples of distributions f(M) versus M obtained for polysaccharides from SEDFIT derived least squares g(s) versus s profiles (P. Schuck, Biophys. J. 78 (2000) 1606-1619) and the analytical derivative for ds/dM performed with Microcal ORIGIN. We also describe a more direct route from a direct numerical solution of the integral equation describing the molecular weight distribution problem. Both routes give identical distributions although the latter offers the advantage of being incorporated completely within SEDFIT. The method currently assumes that solutions behave ideally: sedimentation velocity has the major advantage over sedimentation equilibrium in that concentrations less than 0.2 mg/ml can be employed, and for many systems non-ideality effects can be reasonably ignored. For large, non-globular polymer systems, diffusive contributions are also likely to be small. Copyright © 2011 Elsevier Inc. All rights reserved.
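A minimal numerical sketch of the extended Fujita transformation described above, under assumed values of the scaling constants: a synthetic g(s) profile is converted to f(M) through s = κM^b, using f(M) = g(s)·(ds/dM) with ds/dM = κbM^(b-1). The values of κ and b and the Gaussian-shaped g(s) are illustrative, not fitted data.

    import numpy as np

    def trapezoid(y, x):
        # simple trapezoidal rule, used only to check normalisation
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    kappa, b = 0.05, 0.45                         # assumed scaling constants for a coil-like polymer
    s = np.linspace(1.0, 10.0, 400)               # sedimentation coefficients (Svedbergs)
    g_s = np.exp(-0.5 * ((s - 4.0) / 1.2) ** 2)   # synthetic g(s) profile
    g_s /= trapezoid(g_s, s)                      # normalise g(s) to unit area

    M = (s / kappa) ** (1.0 / b)                  # invert the scaling relation s = kappa * M**b
    ds_dM = kappa * b * M ** (b - 1.0)            # analytical derivative ds/dM
    f_M = g_s * ds_dM                             # f(M) = g(s) * ds/dM

    print("area under f(M):", trapezoid(f_M, M))  # stays ~1, as expected for a change of variables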
Murayama, Hiroshi; Spencer, Michael S; Sinco, Brandy R; Palmisano, Gloria; Kieffer, Edith C
2017-06-01
Community health worker (CHW) interventions are known to be an effective strategy to improve health behaviors and outcomes in relation to diabetes, particularly for racial/ethnic communities. Although understanding the function of identity with same race/ethnicity among clients of CHW interventions could contribute to more effective program design, few studies have explored whether levels of racial/ethnic identity among participants can influence the effectiveness of CHW interventions. We tested the relationship between level of racial/ethnic identity and changes in hemoglobin A1c and diabetes self-efficacy among low-income African American and Latino adults with type 2 diabetes who participated in a CHW intervention. Data came from a randomized controlled trial of the CHW intervention with a 6-month delayed control group design for 164 African American and Latino adults in Detroit, Michigan. Racial/ethnic identity was created from two items and classified into high, moderate, and low. We combined the two arms (immediate and delayed) into one because there was no significant difference in baseline characteristics, other than age and postintervention self-efficacy, and multivariable linear regression models were applied in the analysis. Possession of high racial/ethnic identity was associated with greater improvement both in hemoglobin A1c and diabetes self-efficacy at 6 months. Moreover, among those with high hemoglobin A1c at preintervention, higher racial/ethnic identity had a greater impact on hemoglobin A1c improvement, compared with those with lower identity. This study suggests the importance of considering racial/ethnic identity of the participants in designing and operating the CHW intervention for racial/ethnic minority population.
The Miniaturization of the AFIT Random Noise Radar
2013-03-01
I. Introduction: Recent advances in technology and signal processing techniques have opened the door to using an ultra-wide band random... (Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio. Distribution Statement A: approved for public release; distribution unlimited.)
NASA Astrophysics Data System (ADS)
Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen
2017-12-01
We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillations (BAO) measurements of D_V(z) r_d^{fid}/r_d from the two-point correlation functions of galaxies in the Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily introduces fiducial signals of fluctuations into the random samples and weakens the BAO signals if the cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals is improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current cosmological parameter measurements, this approach should benefit future measurements of galaxy clustering.
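A minimal sketch of the idea described above, not the BOSS pipeline: rather than assigning random-catalogue redshifts by resampling the data's own measured redshift distribution, a smooth curve is fitted to n(z) and the randoms are drawn from it. The toy 'galaxy' redshifts, the polynomial degree and the sample sizes are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    z_data = rng.beta(4, 6, size=50_000) * 0.8        # toy galaxy redshifts, 0 < z < 0.8

    # Histogram of the data n(z), then a smooth polynomial fit to it.
    counts, edges = np.histogram(z_data, bins=40, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    coef = np.polyfit(centers, counts, deg=6)
    nz_smooth = np.clip(np.polyval(coef, centers), 1e-12, None)

    # Sample random-catalogue redshifts from the smooth n(z) via an inverse CDF on the grid.
    cdf = np.cumsum(nz_smooth)
    cdf /= cdf[-1]
    z_random = np.interp(rng.random(500_000), cdf, centers)
    print("data mean z = %.3f, randoms mean z = %.3f" % (z_data.mean(), z_random.mean()))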
NASA Astrophysics Data System (ADS)
Nezhadhaghighi, Mohsen Ghasemi
2017-08-01
Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
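A minimal sketch of the diffusion entropy analysis mentioned above, applied to synthetic heavy-tailed fluctuations: α-stable increments are accumulated into an ensemble of trajectories, the Shannon entropy of the displacement PDF is estimated at several times, and the slope of S(t) versus ln t gives the scaling exponent (expected to be 1/α for Lévy statistics). The value of α, the trajectory sizes and the percentile trimming of the histogram range are illustrative assumptions.

    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(2)
    alpha = 1.5
    steps = levy_stable.rvs(alpha, 0.0, size=(1000, 256), random_state=rng)
    paths = np.cumsum(steps, axis=1)                  # ensemble of Lévy trajectories

    def shannon_entropy(x, bins=100):
        lo, hi = np.percentile(x, [1, 99])            # trim extreme Lévy outliers for a stable histogram
        p, edges = np.histogram(x, bins=bins, range=(lo, hi), density=True)
        dx = edges[1] - edges[0]
        p = p[p > 0]
        return -np.sum(p * np.log(p)) * dx

    ts = np.unique(np.logspace(0.5, np.log10(255), 12).astype(int))
    S = np.array([shannon_entropy(paths[:, t]) for t in ts])
    delta = np.polyfit(np.log(ts), S, 1)[0]           # S(t) ~ const + delta * ln(t)
    print("DEA scaling exponent delta = %.2f (1/alpha = %.2f)" % (delta, 1.0 / alpha))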
Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory
NASA Astrophysics Data System (ADS)
Pato, Mauricio P.; Oshanin, Gleb
2013-03-01
We study the probability distribution function P_n^{(β)}(w) of the Schmidt-like random variable w = x_1^2/(∑_{j=1}^n x_j^2/n), where x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^{(β)}(w) converges to the Marčenko-Pastur form, i.e. is defined as P_n^{(β)}(w) ∼ √((4 - w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^{(β = 2)}(w) which are valid for arbitrary n and analyse their behaviour.
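A quick numerical check of the asymptotic claim above, here for the orthogonal case (β = 1): sample GOE matrices, pick one unordered eigenvalue at random, form w = x_1^2/(∑_j x_j^2/n), and compare the histogram with the Marčenko-Pastur form (1/2π)√((4-w)/w) on [0, 4]. The matrix size and sample count are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    n, samples = 150, 1000
    ws = []
    for _ in range(samples):
        a = rng.normal(size=(n, n))
        h = (a + a.T) / 2.0                    # GOE matrix; the overall scale cancels in the ratio w
        x = np.linalg.eigvalsh(h)
        k = rng.integers(n)                    # a randomly chosen (unordered) eigenvalue
        ws.append(x[k] ** 2 / np.mean(x ** 2))
    ws = np.array(ws)

    hist, edges = np.histogram(ws, bins=np.linspace(0, 4, 21), density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    mp = np.sqrt((4 - centers) / centers) / (2 * np.pi)   # Marchenko-Pastur form, normalised on [0, 4]
    print("empirical density near w = 1: %.3f   Marchenko-Pastur: %.3f" % (hist[5], mp[5]))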
Ramirez, Jason J.; Dennhardt, Ashley A.; Baldwin, Scott A.; Murphy, James G.; Lindgren, Kristen P.
2016-01-01
Behavioral economic demand curve indices of alcohol consumption reflect decisions to consume alcohol at varying costs. Although these indices predict alcohol-related problems beyond established predictors, little is known about the determinants of elevated demand. Two cognitive constructs that may underlie alcohol demand are alcohol-approach inclinations and drinking identity. The aim of this study was to evaluate implicit and explicit measures of these constructs as predictors of alcohol demand curve indices. College student drinkers (N = 223, 59% female) completed implicit and explicit measures of drinking identity and alcohol-approach inclinations at three timepoints separated by three-month intervals, and completed the Alcohol Purchase Task to assess demand at Time 3. Given no change in our alcohol-approach inclinations and drinking identity measures over time, random intercept-only models were used to predict two demand indices: Amplitude, which represents maximum hypothetical alcohol consumption and expenditures, and Persistence, which represents sensitivity to increasing prices. When modeled separately, implicit and explicit measures of drinking identity and alcohol-approach inclinations positively predicted demand indices. When implicit and explicit measures were included in the same model, both measures of drinking identity predicted Amplitude, but only explicit drinking identity predicted Persistence. In contrast, explicit measures of alcohol-approach inclinations, but not implicit measures, predicted both demand indices. Therefore, there was more support for explicit, versus implicit, measures as unique predictors of alcohol demand. Overall, drinking identity and alcohol-approach inclinations both exhibit positive associations with alcohol demand and represent potentially modifiable cognitive constructs that may underlie elevated demand in college student drinkers. PMID:27379444
Coarse-Grained Clustering Dynamics of Heterogeneously Coupled Neurons.
Moon, Sung Joon; Cook, Katherine A; Rajendran, Karthikeyan; Kevrekidis, Ioannis G; Cisternas, Jaime; Laing, Carlo R
2015-12-01
The formation of oscillating phase clusters in a network of identical Hodgkin-Huxley neurons is studied, along with their dynamic behavior. The neurons are synaptically coupled in an all-to-all manner, yet the synaptic coupling characteristic time is heterogeneous across the connections. In a network of N neurons where this heterogeneity is characterized by a prescribed random variable, the oscillatory single-cluster state can transition, through [Formula: see text] (possibly perturbed) period-doubling and subsequent bifurcations, to a variety of multiple-cluster states. The clustering dynamic behavior is computationally studied both at the detailed and the coarse-grained levels, and a numerical approach that can enable studying the coarse-grained dynamics in a network of arbitrarily large size is suggested. Among a number of cluster states formed, double clusters, composed of nearly equal sub-network sizes, are seen to be stable; interestingly, the heterogeneity parameter in each of the double-cluster components tends to be consistent with the random variable over the entire network: Given a double-cluster state, permuting the dynamical variables of the neurons can lead to a combinatorially large number of different, yet similar "fine" states that appear practically identical at the coarse-grained level. For weak heterogeneity we find that correlations rapidly develop, within each cluster, between the neuron's "identity" (its own value of the heterogeneity parameter) and its dynamical state. For single- and double-cluster states we demonstrate an effective coarse-graining approach that uses the Polynomial Chaos expansion to succinctly describe the dynamics by these quickly established "identity-state" correlations. This coarse-graining approach is utilized, within the equation-free framework, to perform efficient computations of the neuron ensemble dynamics.
Churchill, Jennifer D; Novroski, Nicole M M; King, Jonathan L; Seah, Lay Hong; Budowle, Bruce
2017-09-01
The MiSeq FGx Forensic Genomics System (Illumina) enables amplification and massively parallel sequencing of 59 STRs, 94 identity informative SNPs, 54 ancestry informative SNPs, and 24 phenotypic informative SNPs. Allele frequency and population statistics data were generated for the 172 SNP loci included in this panel on four major population groups (Chinese, African Americans, US Caucasians, and Southwest Hispanics). Single-locus and combined random match probability values were generated for the identity informative SNPs. The average combined STR and identity informative SNP random match probabilities (assuming independence) across all four populations were 1.75E-67 and 2.30E-71 with length-based and sequence-based STR alleles, respectively. Ancestry and phenotype predictions were obtained using the ForenSeq™ Universal Analysis System (UAS; Illumina) based on the ancestry informative and phenotype informative SNP profiles generated for each sample. Additionally, performance metrics, including profile completeness, read depth, relative locus performance, and allele coverage ratios, were evaluated and detailed for the 725 samples included in this study. While some genetic markers included in this panel performed notably better than others, performance across populations was generally consistent. The performance and population data included in this study support that accurate and reliable profiles were generated and provide valuable background information for laboratories considering internal validation studies and implementation. Copyright © 2017 Elsevier B.V. All rights reserved.
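A minimal sketch of how a combined random match probability of the magnitude quoted above arises from independent biallelic identity SNPs under Hardy-Weinberg proportions: each locus contributes its genotype frequency (p^2, 2pq or q^2) and the combined RMP is their product. The allele frequencies and 'observed' genotypes below are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(4)
    n_snps = 90
    p = rng.uniform(0.2, 0.8, n_snps)                 # assumed major-allele frequencies
    genotypes = rng.choice([0, 1, 2], size=n_snps,    # observed copies of the major allele per locus
                           p=[0.25, 0.5, 0.25])

    geno_freq = np.where(genotypes == 2, p ** 2,
                 np.where(genotypes == 1, 2 * p * (1 - p), (1 - p) ** 2))
    log10_rmp = np.sum(np.log10(geno_freq))           # multiply per-locus frequencies in log space
    print("combined RMP ~ 10^%.1f" % log10_rmp)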
Second look at the spread of epidemics on networks
NASA Astrophysics Data System (ADS)
Kenah, Eben; Robins, James M.
2007-09-01
In an important paper, Newman [Phys. Rev. E66, 016128 (2002)] claimed that a general network-based stochastic Susceptible-Infectious-Removed (SIR) epidemic model is isomorphic to a bond percolation model, where the bonds are the edges of the contact network and the bond occupation probability is equal to the marginal probability of transmission from an infected node to a susceptible neighbor. In this paper, we show that this isomorphism is incorrect and define a semidirected random network we call the epidemic percolation network that is exactly isomorphic to the SIR epidemic model in any finite population. In the limit of a large population, (i) the distribution of (self-limited) outbreak sizes is identical to the size distribution of (small) out-components, (ii) the epidemic threshold corresponds to the phase transition where a giant strongly connected component appears, (iii) the probability of a large epidemic is equal to the probability that an initial infection occurs in the giant in-component, and (iv) the relative final size of an epidemic is equal to the proportion of the network contained in the giant out-component. For the SIR model considered by Newman, we show that the epidemic percolation network predicts the same mean outbreak size below the epidemic threshold, the same epidemic threshold, and the same final size of an epidemic as the bond percolation model. However, the bond percolation model fails to predict the correct outbreak size distribution and probability of an epidemic when there is a nondegenerate infectious period distribution. We confirm our findings by comparing predictions from percolation networks and bond percolation models to the results of simulations. In the Appendix, we show that an isomorphism to an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model.
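A minimal simulation sketch of the contrast described above, on a small Erdős-Rényi contact network below the epidemic threshold: in the SIR construction every edge leaving a node shares that node's random infectious period, whereas bond percolation opens each edge independently with the marginal probability T = E[1 - exp(-βτ)] = β/(β+1) for a unit-rate exponential τ. Network size, mean degree, transmission rate and trial counts are assumptions; the mean outbreak sizes should roughly agree while the size distributions differ.

    import numpy as np
    from collections import deque

    rng = np.random.default_rng(5)
    N, mean_deg, beta, trials = 1000, 3.0, 0.4, 2000
    adj_upper = np.triu(rng.random((N, N)) < mean_deg / (N - 1), k=1)
    edges = [(int(i), int(j)) for i, j in np.argwhere(adj_upper)]   # undirected edges, i < j
    nbrs = [[] for _ in range(N)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)

    def outbreak(is_open):
        # Size of the set reachable from a random seed through open directed links (BFS).
        seed = int(rng.integers(N))
        seen, queue = {seed}, deque([seed])
        while queue:
            u = queue.popleft()
            for v in nbrs[u]:
                if v not in seen and is_open(u, v):
                    seen.add(v)
                    queue.append(v)
        return len(seen)

    T = beta / (beta + 1.0)                               # marginal transmission probability, tau ~ Exp(1)
    sir_sizes, bond_sizes = [], []
    for _ in range(trials):
        tau = rng.exponential(1.0, N)                     # each node's infectious period, shared by its edges
        p_node = 1.0 - np.exp(-beta * tau)
        sir_sizes.append(outbreak(lambda u, v: rng.random() < p_node[u]))
        open_edge = {e: rng.random() < T for e in edges}  # one independent coin per undirected edge
        bond_sizes.append(outbreak(lambda u, v: open_edge[(min(u, v), max(u, v))]))

    sir_sizes, bond_sizes = np.array(sir_sizes), np.array(bond_sizes)
    print("mean outbreak size:  SIR %.2f   bond percolation %.2f" % (sir_sizes.mean(), bond_sizes.mean()))
    print("P(size >= 20):       SIR %.3f   bond percolation %.3f" % ((sir_sizes >= 20).mean(), (bond_sizes >= 20).mean()))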
Constraints on the thermal evolution of Venus inferred from Magellan data
NASA Technical Reports Server (NTRS)
Arkani-Hamed, Jafar; Schaber, G. G.; Strom, R. G.
1992-01-01
The impact craters with diameters from 1.5 to 280 km compiled from Magellan observations indicate that the crater population on Venus has a completely spatially random distribution and the size/density distribution of craters with diameters greater than or equal to 35 km is consistent with a 'production' population with an age of 500 plus or minus 250 m.y. The similarity in size distribution from area to area indicates that the crater distribution is independent of crater size. Also, the forms of the modified craters are virtually identical to those of the pristine craters. These observations imply that Venus reset its cratering record by global resurfacing 500 m.y. ago, and resurfacing declined relatively fast. The fact that less than 40 percent of all craters have been modified and that the few volcanically embayed craters are located on localized tectonic regions indicates that only minor and localized volcanism and tectonism have occurred since the latest vigorous resurfacing event approximately 500 m.y. ago and the interior of Venus has been solid and possibly colder than Earth's. This is because the high-temperature lithosphere of Venus would facilitate the upward ascent of mantle plumes and result in extensive volcanism if the venusian upper mantle were as hot as or hotter than Earth's. Therefore, the present surface morphology of Venus may provide useful constraints on the pattern of that vigorous convection, and possibly on the thermal state of the venusian mantle. We examine this possibility through numerical calculations of three-dimensional thermal convection models in a spherical shell with temperature- and pressure-dependent Newtonian viscosity, temperature-dependent thermal diffusivity, pressure-dependent thermal expansion coefficient, and time-dependent internal heat production rate.
Human dynamic model co-driven by interest and social identity in the MicroBlog community
NASA Astrophysics Data System (ADS)
Yan, Qiang; Yi, Lanli; Wu, Lianren
2012-02-01
This paper analyzes message-posting behavior in the MicroBlog community and presents a human dynamics model co-driven by interest and social identity. According to the empirical analysis and simulation results, the distribution of messaging intervals follows a power law, which is mainly influenced by the degree of users' interest. Meanwhile, social identity plays a significant role in the change of interest and may slow its decline. A positive correlation between social identity and the number of comments on or forwardings of messages is illustrated. In addition, analysis of the data over each 24-hour period reveals clear differences between micro-blogging and website visits, email, instant messaging, and mobile phone use, reflecting how people spend small amounts of time via mobile Internet technology.
Claims and Identity: On-Premise and Cloud Solutions
NASA Astrophysics Data System (ADS)
Bertocci, Vittorio
Today's identity-management practices are often a patchwork of partial solutions, which somehow accommodate but never really integrate applications and entities separated by technology and organizational boundaries. The rise of Software as a Service (SaaS) and cloud computing, however, will force organizations to cross such boundaries so often that ad hoc solutions will simply be untenable. A new approach that tears down identity silos and supports a de-perimeterized IT by design is in order. This article will walk you through the principles of claims-based identity management, a model which addresses both traditional and cloud scenarios with the same efficacy. We will explore the most common token exchange patterns, highlighting the advantages and opportunities they offer when applied to cloud computing solutions and generic distributed systems.
ERIC Educational Resources Information Center
Alsoudi, Khalid A.
2017-01-01
The study aimed to investigate the subjects relating to the features of Jordanian cultural identity in the 23 Islamic education books for the secondary and basic stages in Jordan by analyzing their content. To achieve the goals of the study, the researcher prepared an analysis list that included a number of sub-elements distributed on…
Bacca, Cristina L.; Cochran, Bryan N.
2014-01-01
Background Lesbian, gay, bisexual, and transgender individuals are at higher risk for substance use and substance use disorders than heterosexual individuals and are more likely to seek substance use treatment, yet sexual orientation and gender identity are frequently not reported in the research literature. The purpose of this study was to identify if sexual orientation and gender identity are being reported in the recent substance use literature, and if this has changed over time. Method The PsycINFO and PubMed databases were searched for articles released in 2007 and 2012 using the term “substance abuse” and 200 articles were randomly selected from each time period and database. Articles were coded for the presence or absence of sexual orientation and gender identity information. Results Participants’ sexual orientation was reported in 3.0% and 4.9% of the 2007 and 2.3% and 6.5% of the 2012 sample, in PsycINFO and PubMed sample articles, respectively, while non-binary gender identity was reported in 0% and 1.0% of the 2007 sample and 2.3% and 1.9% of the 2012 PsycINFO and PubMed sample articles. There were no differences in rates of reporting over time. Conclusions Sexual orientation and gender identity are rarely reported in the substance abuse literature, and there has not been a change in reporting practices between 2007 and 2012. Recommendations for future investigators in reporting sexual orientation and gender identity are included. PMID:25496705
Flentje, Annesa; Bacca, Cristina L; Cochran, Bryan N
2015-02-01
Lesbian, gay, bisexual, and transgender individuals are at higher risk for substance use and substance use disorders than heterosexual individuals and are more likely to seek substance use treatment, yet sexual orientation and gender identity are frequently not reported in the research literature. The purpose of this study was to identify if sexual orientation and gender identity are being reported in the recent substance use literature, and if this has changed over time. The PsycINFO and PubMed databases were searched for articles released in 2007 and 2012 using the term "substance abuse" and 200 articles were randomly selected from each time period and database. Articles were coded for the presence or absence of sexual orientation and gender identity information. Participants' sexual orientation was reported in 3.0% and 4.9% of the 2007 and 2.3% and 6.5% of the 2012 sample, in PsycINFO and PubMed sample articles, respectively, while non-binary gender identity was reported in 0% and 1.0% of the 2007 sample and 2.3% and 1.9% of the 2012 PsycINFO and PubMed sample articles. There were no differences in rates of reporting over time. Sexual orientation and gender identity are rarely reported in the substance abuse literature, and there has not been a change in reporting practices between 2007 and 2012. Recommendations for future investigators in reporting sexual orientation and gender identity are included. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Programmable quantum random number generator without postprocessing.
Nguyen, Lac; Rehain, Patrick; Sua, Yong Meng; Huang, Yu-Ping
2018-02-15
We demonstrate a viable source of unbiased quantum random numbers whose statistical properties can be arbitrarily programmed without the need for any postprocessing such as randomness distillation or distribution transformation. It is based on measuring the arrival time of single photons in shaped temporal modes that are tailored with an electro-optical modulator. We show that quantum random numbers can be created directly in customized probability distributions and pass all randomness tests of the NIST and Dieharder test suites without any randomness extraction. The min-entropies of such generated random numbers are measured close to the theoretical limits, indicating their near-ideal statistics and ultrahigh purity. Easy to implement and arbitrarily programmable, this technique can find versatile uses in a multitude of data analysis areas.
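A minimal sketch of the min-entropy figure of merit mentioned above: for a source emitting symbols with probabilities p(x), H_min = -log2(max_x p(x)), and an ideal n-bit source reaches H_min = n. The 8-bit sample below is simulated with a pseudorandom generator and simply stands in for measured QRNG output.

    import numpy as np

    rng = np.random.default_rng(6)
    samples = rng.integers(0, 256, size=1_000_000)        # stand-in for 8-bit QRNG output
    counts = np.bincount(samples, minlength=256)
    p_max = counts.max() / counts.sum()
    h_min = -np.log2(p_max)                               # H_min = -log2(max probability)
    print("estimated min-entropy: %.3f bits per byte (ideal: 8)" % h_min)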
The Shark Random Swim - (Lévy Flight with Memory)
NASA Astrophysics Data System (ADS)
Businger, Silvia
2018-05-01
The Elephant Random Walk (ERW), first introduced by Schütz and Trimper (Phys Rev E 70:045101, 2004), is a one-dimensional simple random walk on Z having a memory about the whole past. We study the Shark Random Swim, a random walk with memory about the whole past, whose steps are α-stable distributed with α ∈ (0, 2]. Our aim in this work is to study the impact of the heavy tailed step distributions on the asymptotic behavior of the random walk. We shall see that, as for the ERW, the asymptotic behavior of the Shark Random Swim depends on its memory parameter p, and that a phase transition can be observed at the critical value p = 1/α.
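A minimal simulation sketch in the spirit of the model above. The memory rule used here (with probability p repeat a uniformly chosen earlier step, otherwise draw a fresh α-stable step), the value of α and the run lengths are assumptions for illustration and may differ from the paper's exact construction.

    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(7)

    def shark_swim(n_steps, p, alpha):
        fresh = levy_stable.rvs(alpha, 0.0, size=n_steps, random_state=rng)  # pool of independent steps
        steps = np.empty(n_steps)
        steps[0] = fresh[0]
        used = 1
        for n in range(1, n_steps):
            if rng.random() < p:                      # with probability p, repeat a uniformly chosen past step
                steps[n] = steps[rng.integers(n)]
            else:                                     # otherwise take a fresh, independent alpha-stable step
                steps[n] = fresh[used]
                used += 1
        return np.cumsum(steps)

    alpha = 1.5
    for p in (0.3, 0.9):                              # below / above the critical value p = 1/alpha ~ 0.67
        finals = np.array([shark_swim(2000, p, alpha)[-1] for _ in range(100)])
        print("p = %.1f: median |position| after 2000 steps = %.1f" % (p, np.median(np.abs(finals))))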
Isolation and Connectivity in Random Geometric Graphs with Self-similar Intensity Measures
NASA Astrophysics Data System (ADS)
Dettmann, Carl P.
2018-05-01
Random geometric graphs consist of randomly distributed nodes (points), with pairs of nodes within a given mutual distance linked. In the usual model the distribution of nodes is uniform on a square, and in the limit of infinitely many nodes and shrinking linking range, the number of isolated nodes is Poisson distributed, and the probability of no isolated nodes is equal to the probability the whole graph is connected. Here we examine these properties for several self-similar node distributions, including smooth and fractal, uniform and nonuniform, and finitely ramified or otherwise. We show that nonuniformity can break the Poisson distribution property, but it strengthens the link between isolation and connectivity. It also stretches out the connectivity transition. Finite ramification is another mechanism for lack of connectivity. The same considerations apply to fractal distributions as smooth, with some technical differences in evaluation of the integrals and analytical arguments.
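A minimal sketch of the baseline uniform model described above: N nodes uniform on the unit square, links between pairs closer than r, and the number of isolated nodes compared against the Poisson-type estimate N·exp(-Nπr^2) (boundary effects ignored). N, r and the trial count are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(8)
    N, r, trials = 500, 0.05, 200
    iso_counts = []
    for _ in range(trials):
        pts = rng.random((N, 2))                                    # uniform nodes on the unit square
        d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
        np.fill_diagonal(d2, np.inf)                                # ignore self-distances
        degree = np.sum(d2 < r * r, axis=1)
        iso_counts.append(np.sum(degree == 0))
    print("mean isolated nodes: %.2f (simulation) vs %.2f (N*exp(-N*pi*r^2))"
          % (np.mean(iso_counts), N * np.exp(-N * np.pi * r ** 2)))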
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
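A minimal 1D check of the square-root rule described above: a target drawn from a standard Gaussian is 'found' when a search point lands within ε of it, and searching from a Gaussian with standard deviation √2 (whose density is proportional to the square root of the target density) is compared with naively searching from the target density itself. ε and the trial counts are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(9)
    eps, n_targets = 0.01, 1000

    def mean_trials(search_std, block=5000):
        totals = []
        for _ in range(n_targets):
            target = rng.normal(0.0, 1.0)                 # hidden target from the target distribution
            done = 0
            while True:
                pts = rng.normal(0.0, search_std, block)  # a block of random search points
                hits = np.flatnonzero(np.abs(pts - target) < eps)
                if hits.size:
                    totals.append(done + hits[0] + 1)
                    break
                done += block
        return np.mean(totals)

    print("search std = 1.0 (same as target):      %.0f trials on average" % mean_trials(1.0))
    print("search std = sqrt(2) (sqrt of target):  %.0f trials on average" % mean_trials(np.sqrt(2)))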
Neither fixed nor random: weighted least squares meta-analysis.
Stanley, T D; Doucouliagos, Hristos
2015-06-15
This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects. Copyright © 2015 John Wiley & Sons, Ltd.
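A minimal sketch of the unrestricted weighted least squares average discussed above, under the reading that it is the inverse-variance weighted mean combined with a multiplicative dispersion estimated from the weighted residuals; the toy effect sizes and standard errors are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(10)
    k = 20
    se = rng.uniform(0.05, 0.3, k)                              # standard errors of k studies
    effects = rng.normal(0.2, 0.15, k) + rng.normal(0.0, se)    # heterogeneous true effects plus noise

    w = 1.0 / se ** 2
    wls_mean = np.sum(w * effects) / np.sum(w)      # same point estimate as fixed effect
    q = np.sum(w * (effects - wls_mean) ** 2)       # Cochran's Q
    phi = q / (k - 1)                               # estimated multiplicative dispersion
    wls_se = np.sqrt(phi / np.sum(w))               # unrestricted WLS standard error
    fe_se = np.sqrt(1.0 / np.sum(w))                # fixed-effect standard error for comparison
    print("WLS average %.3f, WLS SE %.3f, fixed-effect SE %.3f" % (wls_mean, wls_se, fe_se))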
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Englander, Arnold C.
2014-01-01
Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished by (a) the boundary conditions and internal constraints of the optimization problem being solved, and (b) variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
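A minimal sketch of monotonic basin hopping with different perturbation distributions, in the spirit of the study above but not its trajectory-optimization setting: a local minimizer is restarted from hops around the incumbent and only improvements are kept. The Rastrigin test function, hop scales and iteration counts are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def rastrigin(x):
        return 10.0 * len(x) + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

    def mbh(hop, dim=5, iters=300, seed=0):
        rng = np.random.default_rng(seed)
        start = minimize(rastrigin, rng.uniform(-5, 5, dim), method="L-BFGS-B")
        best_f, best_x = start.fun, start.x
        for _ in range(iters):
            trial = minimize(rastrigin, best_x + hop(rng, dim), method="L-BFGS-B")
            if trial.fun < best_f:                          # monotonic: accept improvements only
                best_f, best_x = trial.fun, trial.x
        return best_f

    uniform_hop = lambda rng, d: rng.uniform(-1.0, 1.0, d)
    cauchy_hop = lambda rng, d: 0.3 * rng.standard_cauchy(d)   # long-tailed hops
    print("best value found, uniform hops:", mbh(uniform_hop))
    print("best value found, Cauchy hops: ", mbh(cauchy_hop))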
Contextuality in canonical systems of random variables
NASA Astrophysics Data System (ADS)
Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.
2017-10-01
Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.
The triglyceride composition of 17 seed fats rich in octanoic, decanoic, or lauric acid.
Litchfield, C; Miller, E; Harlow, R D; Reiser, R
1967-07-01
Seed fats of eight species of Lauraceae (laurel family), six species of Cuphea (Lythraceae family), and three species of Ulmaceae (elm family) were extracted, and the triglycerides were isolated by preparative thin-layer chromatography. GLC of the triglycerides on a silicone column resolved 10 to 18 peaks with a 22 to 58 carbon number range for each fat. These carbon number distributions yielded considerable information about triglyceride compositions of the fats. The most interesting finding was with Laurus nobilis seed fat, which contained 58.4% lauric acid and 29.2-29.8% trilaurin. A maximum of 19.9% trilaurin would be predicted by a 1,2,3-random, a 1,3-random-2-random, or a 1-random-2-random-3-random distribution of the lauric acid (3). This indicates a specificity for the biosynthesis of a simple triglyceride by Laurus nobilis seed enzymes. Cuphea lanceolata seed fat also contained more simple triglyceride (tridecanoin) than would be predicted by the fatty acid distribution theories.
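A small worked check of the prediction quoted above: under a fully random acyl distribution, the expected trilaurin fraction is simply the cube of the lauric acid mole fraction, which reproduces the stated maximum of 19.9%, well below the 29.2-29.8% actually observed.

    lauric_fraction = 0.584                      # mole fraction of lauric acid in Laurus nobilis seed fat
    predicted_trilaurin = lauric_fraction ** 3   # probability all three positions carry lauric acid
    print("predicted trilaurin under a 1,2,3-random distribution: %.1f%%" % (100 * predicted_trilaurin))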
Exploring the Latino Paradox: How Economic and Citizenship Status Impact Health
ERIC Educational Resources Information Center
Campbell, Kelly; Garcia, Donna M.; Granillo, Christina V.; Chavez, David V.
2012-01-01
The authors examined the contributions of economic status (ES) and citizenship status to health differences between European Americans, Latino Americans, and noncitizen Latinos. The investigation was framed using social identity and comparison theories. Southern California residents (N = 2,164) were randomly selected to complete a telephone…
Binary Colloidal Alloy Test-5: Aspheres
NASA Technical Reports Server (NTRS)
Chaikin, Paul M.; Hollingsworth, Andrew D.
2008-01-01
The Binary Colloidal Alloy Test - 5: Aspheres (BCAT-5-Aspheres) experiment photographs initially randomized colloidal samples (tiny nanoscale spheres suspended in liquid) in microgravity to determine their resulting structure over time. BCAT-5-Aspheres will study the properties of concentrated systems of small particles in microgravity when the particles are identical but not spherical.
40 CFR 799.9539 - TSCA mammalian erythrocyte micronucleus test.
Code of Federal Regulations, 2014 CFR
2014-07-01
... randomly assigned to the control and treatment groups. The animals are identified uniquely. The animals are... substance, animals in the control groups should be handled in an identical manner to animals of the treatment groups. (2) Positive controls shall produce micronuclei in vivo at exposure levels expected to...
40 CFR 799.9539 - TSCA mammalian erythrocyte micronucleus test.
Code of Federal Regulations, 2013 CFR
2013-07-01
... randomly assigned to the control and treatment groups. The animals are identified uniquely. The animals are... substance, animals in the control groups should be handled in an identical manner to animals of the treatment groups. (2) Positive controls shall produce micronuclei in vivo at exposure levels expected to...
40 CFR 799.9539 - TSCA mammalian erythrocyte micronucleus test.
Code of Federal Regulations, 2012 CFR
2012-07-01
... randomly assigned to the control and treatment groups. The animals are identified uniquely. The animals are... substance, animals in the control groups should be handled in an identical manner to animals of the treatment groups. (2) Positive controls shall produce micronuclei in vivo at exposure levels expected to...
Molecular Characterization of Cultivated Pawpaw (Asimina triloba) Using RAPD Markers
Hongwen Huang; Desmond R. Layne; Thomas L. Kubisiak
2003-01-01
Thirty-four extant pawpaw [Asimina triloba (L.) Dunal] cultivars and advanced selections representing a large portion of the gene pool of cultivated pawpaws were investigated using 71 randomly amplified polymorphic DNA (RAPD) markers to establish genetic identities and evaluate genetic relatedness. All 34 cultivated pawpaws were uniquely...
ERIC Educational Resources Information Center
Pahlke, Erin; Bigler, Rebecca S.; Green, Vanessa A.
2010-01-01
To examine the consequences of learning about gender discrimination, early adolescents (n = 121, aged 10-14) were randomly assigned to receive either (a) standard biographical lessons about historical figures (standard condition) or (b) nearly identical lessons that included information about gender discrimination (discrimination condition).…
Sexual Self-Concept and Sexual Risk-Taking.
ERIC Educational Resources Information Center
Breakwell, Glynis M.; Millward, Lynne J.
1997-01-01
Presents data from a survey of randomly selected adolescents (N=474) which examined differences between male and female sexual identities. Results indicate two main dimensions in male sexual self-concept: socioemotional and the relational. Female sexual self-concept revolved around concerns with assertiveness, such as controlling when sex occurs.…
Aksenov, Valerii P; Kolosov, Valeriy V; Pogutsa, Cheslav E
2014-06-10
The propagation of laser beams having orbital angular momenta (OAM) in the turbulent atmosphere is studied numerically. The variance of random wandering of these beams is investigated with the use of the Monte Carlo technique. It is found that, among various types of vortex laser beams, such as the Laguerre-Gaussian (LG) beam, modified Bessel-Gaussian beam, and hypergeometric Gaussian beam, having identical initial effective radii and OAM, the LG beam occupying the largest effective volume in space is the most stable one.
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard; Felus, Yaron A.
2008-06-01
The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y - E_Y = (X - E_X)·Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach where only the observation matrix, Y, is perturbed by random errors and, on the other hand, the data least-squares approach where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of "symmetry" in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
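A minimal sketch of the 'closed form' singular-value-decomposition route mentioned above for the errors-in-variables model (Y - E_Y) = (X - E_X)·Ξ: stack [X Y], take the right singular vectors belonging to the smallest singular values, and form Ξ = -V12·V22^{-1}. The simulated dimensions and noise levels are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(11)
    m, n, d = 200, 3, 2                              # observations, parameters, right-hand-side columns
    X_true = rng.normal(size=(m, n))
    Xi_true = rng.normal(size=(n, d))
    X = X_true + 0.05 * rng.normal(size=(m, n))      # coefficient matrix observed with error
    Y = X_true @ Xi_true + 0.05 * rng.normal(size=(m, d))

    _, _, Vt = np.linalg.svd(np.hstack([X, Y]), full_matrices=False)
    V = Vt.T
    V12, V22 = V[:n, n:], V[n:, n:]                  # blocks for the d smallest singular values
    Xi_tls = -V12 @ np.linalg.inv(V22)               # total least-squares estimate
    print("max abs error of the TLS estimate:", float(np.max(np.abs(Xi_tls - Xi_true))))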
Robustness, evolvability, and the logic of genetic regulation.
Payne, Joshua L; Moore, Jason H; Wagner, Andreas
2014-01-01
In gene regulatory circuits, the expression of individual genes is commonly modulated by a set of regulating gene products, which bind to a gene's cis-regulatory region. This region encodes an input-output function, referred to as signal-integration logic, that maps a specific combination of regulatory signals (inputs) to a particular expression state (output) of a gene. The space of all possible signal-integration functions is vast and the mapping from input to output is many-to-one: For the same set of inputs, many functions (genotypes) yield the same expression output (phenotype). Here, we exhaustively enumerate the set of signal-integration functions that yield identical gene expression patterns within a computational model of gene regulatory circuits. Our goal is to characterize the relationship between robustness and evolvability in the signal-integration space of regulatory circuits, and to understand how these properties vary between the genotypic and phenotypic scales. Among other results, we find that the distributions of genotypic robustness are skewed, so that the majority of signal-integration functions are robust to perturbation. We show that the connected set of genotypes that make up a given phenotype are constrained to specific regions of the space of all possible signal-integration functions, but that as the distance between genotypes increases, so does their capacity for unique innovations. In addition, we find that robust phenotypes are (i) evolvable, (ii) easily identified by random mutation, and (iii) mutationally biased toward other robust phenotypes. We explore the implications of these latter observations for mutation-based evolution by conducting random walks between randomly chosen source and target phenotypes. We demonstrate that the time required to identify the target phenotype is independent of the properties of the source phenotype.
Probability distribution for the Gaussian curvature of the zero level surface of a random function
NASA Astrophysics Data System (ADS)
Hannay, J. H.
2018-04-01
A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.
Efficient sampling of complex network with modified random walk strategies
NASA Astrophysics Data System (ADS)
Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei
2018-02-01
We present two novel random walk strategies, the choosing-seed-node (CSN) random walk and the no-retracing (NR) random walk. Unlike classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. The major properties of the sampled subnets, such as sampling efficiency, degree distribution, average degree and average clustering coefficient, are then studied. Similar conclusions can be reached with all three random walk strategies. First, networks with small scales and simple structures are conducive to sampling. Second, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Third, all the degree distributions of the subnets are slightly biased toward the high-degree side. The NR strategy, however, performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, salient characteristics such as the larger clustering coefficient and the fluctuation of the degree distribution are reproduced well by these random walk strategies.
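A minimal sketch of the no-retracing idea described above, contrasted with a classical random walk on the same Erdős-Rényi graph: the NR walker never immediately steps back to the node it just left unless it has no other choice, and the number of distinct nodes visited in a fixed number of steps is compared. Graph size, mean degree and walk length are illustrative assumptions, and the CSN strategy is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(12)
    N, mean_deg, steps = 2000, 6.0, 5000
    adj = np.triu(rng.random((N, N)) < mean_deg / (N - 1), k=1)
    adj = adj | adj.T
    nbrs = [np.flatnonzero(adj[i]) for i in range(N)]

    def walk(no_retrace):
        cur, prev = int(rng.integers(N)), -1
        visited = {cur}
        for _ in range(steps):
            options = nbrs[cur]
            if no_retrace and len(options) > 1:
                options = options[options != prev]       # drop the node we just came from
            if len(options) == 0:                        # isolated node: restart from a random seed
                prev, cur = -1, int(rng.integers(N))
            else:
                prev, cur = cur, int(rng.choice(options))
            visited.add(cur)
        return len(visited)

    print("distinct nodes visited, classical RW:   ", walk(False))
    print("distinct nodes visited, no-retracing RW:", walk(True))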
Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.
Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T
2010-03-10
Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases) each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths as indicated by simulations based on the BCRW model.
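A minimal sketch of one of the checks mentioned above: confirming an exponential flight-length distribution both by maximum likelihood (the rate estimate is one over the mean) and by the slope of a log survival plot. The synthetic flight lengths below stand in for measured cell-track data.

    import numpy as np

    rng = np.random.default_rng(14)
    flights = rng.exponential(scale=12.0, size=5000)        # synthetic flight lengths (arbitrary units)

    rate_mle = 1.0 / flights.mean()                          # MLE of the exponential rate
    x = np.sort(flights)
    survival = 1.0 - np.arange(1, len(x) + 1) / len(x)       # empirical survival function
    keep = survival > 0                                      # drop the final zero before taking logs
    slope = np.polyfit(x[keep], np.log(survival[keep]), 1)[0]
    print("MLE rate = %.4f, -slope of log-survival = %.4f" % (rate_mle, -slope))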
Probabilistic SSME blades structural response under random pulse loading
NASA Technical Reports Server (NTRS)
Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.
1987-01-01
The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
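A minimal sketch of the random impact model described above: pulse arrivals follow a Poisson process with a given mean rate, amplitudes are zero-mean normal draws, and each pulse lands at one of three tip locations with equal probability. The rate, amplitude standard deviation and time window are illustrative assumptions; the blade's structural response itself is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(13)
    rate, sigma_amp, t_end = 2.0, 1.5, 10.0          # mean arrival rate, amplitude std, time window
    locations = ["tip point 1", "tip point 2", "tip point 3"]

    n_pulses = rng.poisson(rate * t_end)                          # Poisson number of impacts
    arrival_times = np.sort(rng.uniform(0.0, t_end, n_pulses))    # arrival times of the Poisson process
    amplitudes = rng.normal(0.0, sigma_amp, n_pulses)             # zero-mean normal pulse intensities
    where = rng.choice(locations, n_pulses)                       # equally likely impact locations

    for t, a, loc in zip(arrival_times[:5], amplitudes[:5], where[:5]):
        print("t = %5.2f  amplitude = %+.2f  at %s" % (t, a, loc))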
Massa, Bruno
2016-01-01
Polichne mukonja Griffini, 1908 from Cameroon was hitherto known only from the holotype preserved at the Royal Belgian Institute of Natural Sciences, Brussels. This was probably due to the fact that the genus Polichne Stål, 1874 is distributed only in Australia and Papua New Guinea. In view of this distribution, the tropical African species was therefore overlooked in the African literature. The recent discovery of two specimens at the Naturhistorisches Museum, Vienna, now provides us with a better understanding of the identity of this taxon, which is related to the African genus Catoptropteryx Karsch, 1890. Polichne mukonja is here transferred to a new genus Griffinipteryx and both taxa are proposed to be included in the new tribe Catoptropterigini. PMID:27833418
The equivalence of Darmois-Israel and distributional method for thin shells in general relativity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mansouri, R.; Khorrami, M.
1996-11-01
A distributional method to solve Einstein's field equations for thin shells is formulated. The familiar field equations and jump conditions of the Darmois-Israel formalism are derived. A careful analysis of the Bianchi identities shows that, for the cases under consideration, they make sense as distributions and lead to the jump conditions of the Darmois-Israel formalism. © 1996 American Institute of Physics.
Orpinomyces cellulase CelE protein and coding sequences
Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong
2000-08-29
A cDNA designated celE cloned from Orpinomyces PC-2 encodes a polypeptide (CelE) of 477 amino acids. CelE is highly homologous to CelB of Orpinomyces (72.3% identity) and Neocallimastix (67.9% identity), and like them, it has a non-catalytic repeated peptide domain (NCRPD) at the C-terminal end. The catalytic domain of CelE is homologous to glycosyl hydrolases of Family 5, found in several anaerobic bacteria. The celE gene is devoid of introns. The recombinant proteins CelE and CelB of Orpinomyces PC-2 randomly hydrolyze carboxymethylcellulose and cello-oligosaccharides in the pattern of endoglucanases.
NASA Astrophysics Data System (ADS)
Yan, Xing-Yu; Gong, Li-Hua; Chen, Hua-Ying; Zhou, Nan-Run
2018-05-01
A theoretical quantum key distribution scheme based on random hybrid quantum channel with EPR pairs and GHZ states is devised. In this scheme, EPR pairs and tripartite GHZ states are exploited to set up random hybrid quantum channel. Only one photon in each entangled state is necessary to run forth and back in the channel. The security of the quantum key distribution scheme is guaranteed by more than one round of eavesdropping check procedures. It is of high capacity since one particle could carry more than two bits of information via quantum dense coding.
Nöhrer, M; Zamberger, S; Primig, S; Leitner, H
2013-01-01
Atom probe tomography and transmission electron microscopy were used to examine the precipitation reaction in the austenite and ferrite phases of a vanadium micro-alloyed steel after a thermo-mechanical process. Precipitates were observed only in the ferrite phase, where two different types were detected, and the aim was to reveal the difference between these two types. The first type consisted of randomly distributed precipitates formed from V-supersaturated ferrite, and the second type of V interphase precipitates. Not only the arrangement of the particles differed but also their chemical composition: the randomly distributed precipitates consisted of V, C and N, whereas the interphase precipitates showed a composition of V, C and Mn. Furthermore, the randomly distributed precipitates had a maximum size of 20 nm and the interphase precipitates a maximum size of 15 nm. These differences are attributed to the sites in which the precipitates were formed. The randomly distributed precipitates were formed in a matrix consisting mainly of 0.05 at% C, 0.68 at% Si, 0.03 at% N, 0.145 at% V and 1.51 at% Mn, whereas the interphase precipitates were formed in a region with much higher C, Mn and V contents. Copyright © 2013 Elsevier Ltd. All rights reserved.
Lies, Damned Lies, and Survey Self-Reports? Identity as a Cause of Measurement Bias
Brenner, Philip S.; DeLamater, John
2017-01-01
Explanations of error in survey self-reports have focused on social desirability: that respondents answer questions about normative behavior to appear prosocial to interviewers. However, this paradigm fails to explain why bias occurs even in self-administered modes like mail and web surveys. We offer an alternative explanation rooted in identity theory that focuses on measurement directiveness as a cause of bias. After completing questions about physical exercise on a web survey, respondents completed a text message–based reporting procedure, sending updates on their major activities for five days. Random assignment was then made to one of two conditions: instructions mentioned the focus of the study, physical exercise, or not. Survey responses, text updates, and records from recreation facilities were compared. Direct measures generated bias—overreporting in survey measures and reactivity in the directive text condition—but the nondirective text condition generated unbiased measures. Findings are discussed in terms of identity. PMID:29038609
A Discordance Weighting Approach to Estimating Occupational and Income Returns to Education.
Andersson, Matthew A
2018-04-23
Schooling differences between identical twins are often utilized as a natural experiment to estimate returns to education. Despite longstanding doubts about the truly random nature of within-twin-pair schooling discordance, such discordance has not yet been understood comprehensively, in terms of diverse between- and within-family peer, academic, familial, social, and health exposures. Here, a predictive analysis using national U.S. midlife twin data shows that within-pair schooling differences are endogenous to a variety of childhood exposures. Using discordance propensities, returns to education under a true natural experiment are simulated. Results for midlife occupation and income reveal differences in estimated returns to education that are statistically insignificant, suggesting that twin-based estimates of causal effects are robust. Moreover, identical and fraternal twins show similar levels of discordance endogeneity and similar responses to propensity weighting, suggesting that the identical twins may not provide demonstrably better leverage in the causal identification of educational returns.
Decay of random correlation functions for unimodal maps
NASA Astrophysics Data System (ADS)
Baladi, Viviane; Benedicks, Michael; Maume-Deschamps, Véronique
2000-10-01
Since the pioneering results of Jakobson and subsequent work by Benedicks-Carleson and others, it is known that quadratic maps f_a(x) = a - x² admit a unique absolutely continuous invariant measure for a positive measure set of parameters a. For topologically mixing f_a, Young and Keller-Nowicki independently proved exponential decay of correlation functions for this a.c.i.m. and smooth observables. We consider random compositions of small perturbations f + ω_t, with f = f_a or another unimodal map satisfying certain nonuniform hyperbolicity axioms, and ω_t chosen independently and identically in [-ɛ, ɛ]. Baladi-Viana showed exponential mixing of the associated Markov chain, i.e., averaging over all random itineraries. We obtain stretched exponential bounds for the random correlation functions of Lipschitz observables for the sample measure μ_ω of almost every itinerary.
A Note on the Assumption of Identical Distributions for Nonparametric Tests of Location
ERIC Educational Resources Information Center
Nordstokke, David W.; Colp, S. Mitchell
2018-01-01
Often, when testing for shift in location, researchers will utilize nonparametric statistical tests in place of their parametric counterparts when there is evidence or belief that the assumptions of the parametric test are not met (i.e., normally distributed dependent variables). An underlying and often unattended to assumption of nonparametric…
Experimental extraction of an entangled photon pair from two identically decohered pairs.
Yamamoto, Takashi; Koashi, Masato; Ozdemir, Sahin Kaya; Imoto, Nobuyuki
2003-01-23
Entanglement is considered to be one of the most important resources in quantum information processing schemes, including teleportation, dense coding and entanglement-based quantum key distribution. Because entanglement cannot be generated by classical communication between distant parties, distribution of entangled particles between them is necessary. During the distribution process, entanglement between the particles is degraded by the decoherence and dissipation processes that result from unavoidable coupling with the environment. Entanglement distillation and concentration schemes are therefore needed to extract pairs with a higher degree of entanglement from these less-entangled pairs; this is accomplished using local operations and classical communication. Here we report an experimental demonstration of extraction of a polarization-entangled photon pair from two decohered photon pairs. Two polarization-entangled photon pairs are generated by spontaneous parametric down-conversion and then distributed through a channel that induces identical phase fluctuations to both pairs; this ensures that no entanglement is available as long as each pair is manipulated individually. Then, through collective local operations and classical communication we extract from the two decohered pairs a photon pair that is observed to be polarization-entangled.
Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings.
Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen
2013-02-01
In this paper, the distributed synchronization problem of networks of agent systems with controllers and nonlinearities subject to Bernoulli switchings is investigated. Controllers and adaptive updating laws injected in each vertex of networks depend on the state information of its neighborhood. Three sets of Bernoulli stochastic variables are introduced to describe the occurrence probabilities of distributed adaptive controllers, updating laws and nonlinearities, respectively. By the Lyapunov functions method, we show that the distributed synchronization of networks composed of agent systems with multiple randomly occurring nonlinearities, multiple randomly occurring controllers, and multiple randomly occurring updating laws can be achieved in mean square under certain criteria. The conditions derived in this paper can be solved by semi-definite programming. Moreover, by mathematical analysis, we find that the coupling strength, the probabilities of the Bernoulli stochastic variables, and the form of nonlinearities have great impacts on the convergence speed and the terminal control strength. The synchronization criteria and the observed phenomena are demonstrated by several numerical simulation examples. In addition, the advantage of distributed adaptive controllers over conventional adaptive controllers is illustrated.
Healy, Meghan E; Hill, Deirdre; Berwick, Marianne; Edgar, Heather; Gross, Jessica; Hunley, Keith
2017-01-01
We examined the relationship between continental-level genetic ancestry and racial and ethnic identity in an admixed population in New Mexico with the goal of increasing our understanding of how racial and ethnic identity influence genetic substructure in admixed populations. Our sample consists of 98 New Mexicans who self-identified as Hispanic or Latino (NM-HL) and who further categorized themselves by race and ethnic subgroup membership. The genetic data consist of 270 newly-published autosomal microsatellites from the NM-HL sample and previously published data from 57 globally distributed populations, including 13 admixed samples from Central and South America. For these data, we 1) summarized the major axes of genetic variation using principal component analyses, 2) performed tests of Hardy Weinberg equilibrium, 3) compared empirical genetic ancestry distributions to those predicted under a model of admixture that lacked substructure, 4) tested the hypotheses that individuals in each sample had 100%, 0%, and the sample-mean percentage of African, European, and Native American ancestry. We found that most NM-HL identify themselves and their parents as belonging to one of two groups, conforming to a region-specific narrative that distinguishes recent immigrants from Mexico from individuals whose families have resided in New Mexico for generations and who emphasize their Spanish heritage. The "Spanish" group had significantly lower Native American ancestry and higher European ancestry than the "Mexican" group. Positive FIS values, PCA plots, and heterogeneous ancestry distributions suggest that most Central and South America admixed samples also contain substructure, and that this substructure may be related to variation in social identity. Genetic substructure appears to be common in admixed populations in the Americas and may confound attempts to identify disease-causing genes and to understand the social causes of variation in health outcomes and social inequality.
Theoretical size distribution of fossil taxa: analysis of a null model.
Reed, William J; Hughes, Barry D
2007-03-22
This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
The effect of platelet-rich plasma on osseous healing in dogs undergoing high tibial osteotomy.
Franklin, Samuel P; Burke, Emily E; Holmes, Shannon P
2017-01-01
The purpose of this study was to investigate whether platelet-rich plasma (PRP) enhances osseous healing in conjunction with a high tibial osteotomy in dogs. Randomized controlled trial. Sixty-four client-owned pet dogs with naturally occurring rupture of the anterior cruciate ligament and that were to be treated with a high tibial osteotomy (tibial plateau leveling osteotomy) were randomized into the treatment or control group. Dogs in the treatment group received autologous platelet-rich plasma activated with calcium chloride and bovine thrombin to produce a well-formed PRP gel that was placed into the osteotomy at the time of surgery. Dogs in the control group received saline lavage of the osteotomy. All dogs had the osteotomy stabilized with identical titanium alloy implants and all aspects of the surgical procedure and post-operative care were identical among dogs of the two groups. Bone healing was assessed at exactly 28, 49, and 70 days after surgery with radiography and ultrasonography and with MRI at day 28. The effect of PRP on bone healing was assessed using a repeated measures analysis of covariance with radiographic and ultrasonographic data and using a t-test with the MRI data. Sixty dogs completed the study. There were no significant differences in age, weight, or gender distribution between the treatment and control groups. Twenty-seven dogs were treated with PRP and 33 were in the control group. The average platelet concentration of the PRP was 1.37×10⁶ platelets/μL (±489×10³) with a leukocyte concentration of 5.45×10³/μL (±3.5×10³). All dogs demonstrated progressive healing over time and achieved clinically successful outcomes. Time since surgery and patient age were significant predictors of radiographic healing and time since surgery was a significant predictor of ultrasonographic assessment of healing. There was no significant effect of PRP treatment as assessed radiographically, ultrasonographically, or with MRI. The PRP used in this study did not hasten osseous union in dogs treated with a high tibial osteotomy.
1983-03-01
the Naval Postgraduate School. As my advisor, Prof. Gaver suggested and derived the Brownian bridge, as well as nudged me in the right direction when...the random tour process by deriving the mean square radial distance for a random tour with arbitrary course change distribution to be: EECR I2(V / 2...random tour model, li = Iy = 8, and equation (3) results as expected. The notion of an arbitrary course change distribution is important because the
Anonymous authenticated communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaver, Cheryl L; Schroeppel, Richard C; Snyder, Lillian A
2007-06-19
A method of performing electronic communications between members of a group wherein the communications are authenticated as being from a member of the group and have not been altered, comprising: generating a plurality of random numbers; distributing in a digital medium the plurality of random numbers to the members of the group; publishing a hash value of contents of the digital medium; distributing to the members of the group public-key-encrypted messages each containing a same token comprising a random number; and encrypting a message with a key generated from the token and the plurality of random numbers.
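As an illustration of the key-derivation step described above, here is a minimal Python sketch. The function names and the choice of SHA-256 for both the published medium hash and the key derivation, as well as the use of an HMAC tag for group authentication, are our assumptions for illustration only; the patent abstract does not specify these primitives.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch of the scheme described above; the names and the
# SHA-256/HMAC primitives are our own choices, not taken from the patent.

def make_group_medium(n_numbers: int = 1024):
    """Generate the pool of random numbers and the hash value to be published."""
    pool = [secrets.token_bytes(32) for _ in range(n_numbers)]
    medium_hash = hashlib.sha256(b"".join(pool)).hexdigest()
    return pool, medium_hash

def derive_group_key(token: bytes, pool: list) -> bytes:
    """Derive a symmetric key from the shared token and the distributed pool."""
    return hashlib.sha256(token + b"".join(pool)).digest()

def authentication_tag(message: bytes, key: bytes) -> bytes:
    """Tag a message so any group member can verify it came from the group."""
    return hmac.new(key, message, hashlib.sha256).digest()

# Usage: members holding the pool and the (public-key-distributed) token derive
# the same key, so tags verify group membership without identifying the sender.
pool, published_hash = make_group_medium(8)
token = secrets.token_bytes(16)
key = derive_group_key(token, pool)
msg = b"status update"
tag = authentication_tag(msg, key)
assert hmac.compare_digest(tag, authentication_tag(msg, derive_group_key(token, pool)))
```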
Return probabilities and hitting times of random walks on sparse Erdös-Rényi graphs.
Martin, O C; Sulc, P
2010-03-01
We consider random walks on random graphs, focusing on return probabilities and hitting times for sparse Erdös-Rényi graphs. Using the tree approach, which is expected to be exact in the large graph limit, we show how to solve for the distribution of these quantities and we find that these distributions exhibit a form of self-similarity.
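The tree-based analytic approach of the paper is not reproduced here, but the quantities it targets are easy to estimate by brute force. The sketch below is a Monte Carlo companion under our own assumptions: it relies on networkx (our choice, not the authors') and simply counts returns to the starting vertex on a sparse Erdös-Rényi graph.

```python
import random
import networkx as nx

def return_probabilities(n=2000, c=3.0, t_max=20, n_walks=20000, seed=1):
    """Monte Carlo estimate of the probability that a walker on a sparse
    Erdos-Renyi graph G(n, c/n) is back at its starting vertex after t steps."""
    rng = random.Random(seed)
    g = nx.gnp_random_graph(n, c / n, seed=seed)
    g.remove_nodes_from(list(nx.isolates(g)))        # drop degree-0 vertices
    nodes = list(g.nodes)
    returns = [0] * (t_max + 1)
    for _ in range(n_walks):
        start = pos = rng.choice(nodes)
        for t in range(1, t_max + 1):
            pos = rng.choice(list(g.neighbors(pos)))
            if pos == start:
                returns[t] += 1
    return [r / n_walks for r in returns]

print(return_probabilities())
```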
Mohtadi, Nicholas; Barber, Rhamona; Chan, Denise; Paolucci, Elizabeth Oddone
2016-05-01
Complications/adverse events of anterior cruciate ligament (ACL) surgery are underreported, despite pooled level 1 data in systematic reviews. All adverse events/complications occurring within a 2-year postoperative period after primary ACL reconstruction, as part of a large randomized clinical trial (RCT), were identified and described. Prospective, double-blind randomized clinical trial. Patients and the independent trained examiner were blinded to treatment allocation. University-based orthopedic referral practice. Three hundred thirty patients (14-50 years; 183 males) with isolated ACL deficiency were intraoperatively randomized to ACL reconstruction with 1 autograft type. Graft harvest and arthroscopic portal incisions were identical. Patients were equally distributed to patellar tendon (PT), quadruple-stranded hamstring tendon (HT), and double-bundle (DB) hamstring autograft ACL reconstruction. Adverse events/complications were patient reported, documented, and diagnoses confirmed. Two major complications occurred: pulmonary embolism and septic arthritis. Twenty-four patients (7.3%) required repeat surgery, including 25 separate operations: PT = 7 (6.4%), HT = 9 (8.2%), and DB = 8 (7.3%). Repeat surgery was performed for meniscal tears (3.6%; n = 12), intra-articular scarring (2.7%; n = 9), chondral pathology (0.6%; n = 2), and wound dehiscence (0.3%; n = 1). Other complications included wound problems, sensory nerve damage, muscle tendon injury, tibial periostitis, and suspected meniscal tears and chondral lesions. Overall, more complications occurred in the HT/DB groups (PT = 24; HT = 31; DB = 45), but more PT patients complained of moderate or severe kneeling pain (PT = 17; HT = 9; DB = 4) at 2 years. Overall, ACL reconstructive surgery is safe. Major complications were uncommon. Secondary surgery was necessary 7.3% of the time for complications/adverse events (excluding graft reinjury or revisions) within the first 2 years. Level 1 (therapeutic studies). This article reports on the complications/adverse events that were prospectively identified up to 2 years postoperatively, in a defined patient population participating in a large double-blind randomized clinical trial comparing PT, single-bundle hamstring, and DB hamstring reconstructions for ACL rupture.
Walther, Christian; Schweinberger, Stefan R.; Kovács, Gyula
2013-01-01
Adaptation-related aftereffects (AEs) show how face perception can be altered by recent perceptual experiences. Along with contrastive behavioural biases, modulations of the early event-related potentials (ERPs) were typically reported on categorical levels. Nevertheless, the role of the adaptor stimulus per se for face identity-specific AEs is not completely understood and was therefore investigated in the present study. Participants were adapted to faces (S1s) varying systematically on a morphing continuum between pairs of famous identities (identities A and B), or to Fourier phase-randomized faces, and had to match the subsequently presented ambiguous faces (S2s; 50/50% identity A/B) to one of the respective original faces. We found that S1s identical with or near to the original identities led to strong contrastive biases with more identity B responses following A adaptation and vice versa. In addition, the closer S1s were to the 50/50% S2 on the morphing continuum, the smaller the magnitude of the AE was. The relation between S1s and AE was, however, not linear. Additionally, stronger AEs were accompanied by faster reaction times. Analyses of the simultaneously recorded ERPs revealed categorical adaptation effects starting at 100 ms post-stimulus onset, that were most pronounced at around 125–240 ms for occipito-temporal sites over both hemispheres. S1-specific amplitude modulations were found at around 300–400 ms. Response-specific analyses of ERPs showed reduced voltages starting at around 125 ms when the S1 biased perception in a contrastive way as compared to when it did not. Our results suggest that face identity AEs do not only depend on physical differences between S1 and S2, but also on perceptual factors, such as the ambiguity of S1. Furthermore, short-term plasticity of face identity processing might work in parallel to object-category processing, and is reflected in the first 400 ms of the ERP. PMID:23990908
Shannon-entropy-based nonequilibrium "entropic" temperature of a general distribution.
Narayanan, K R; Srinivasa, A R
2012-03-01
The concept of temperature is one of the key ideas in describing the thermodynamical properties of systems. In classical statistical mechanics of ideal gases, the notion of temperature can be described in at least two different ways: the kinetic temperature (related to the average kinetic energy of the particles) and the thermodynamic temperature (related to the ratio between infinitesimal changes in entropy and energy). For the Boltzmann distribution, the two notions lead to the same result. However, for nonequilibrium phenomena, while the kinetic temperature has been commonly used both for theoretical and simulation purposes, there appears to be no corresponding general definition of thermodynamic or entropic temperature. In this paper, we consider the statistical or Shannon entropy of a system and use the "de Bruijn identity" from information theory (see Appendix A 2 for a derivation of this identity) to show that it is possible to define a "Shannon temperature" or "entropic temperature" T for a nonequilibrium system as the ratio between the average curvature of the Hamiltonian function associated with the system and the trace of the Fisher information matrix of the nonequilibrium probability distribution (see Appendix A 1 for a definition of the Fisher information). We show that this definition subsumes many other attempts at defining entropic temperatures for nonequilibrium systems and is not restricted to equilibrium or near equilibrium systems. Intuitively, the gist of our approach is to use the Shannon or Gibbs entropy of a system and make use of the relation dS=dQ(rev)/T as a definition of temperature. We achieve this by positing a statistical notion of infinitesimal heating as the addition of uncorrelated random variables (in a special way). As an example of the utility of such a definition, we obtain the nonequilibrium entropic temperature for a system satisfying the Langevin equations. For such a system, we show that while the kinetic temperature is related to the changes in the energy of the system, the entropic or Shannon temperature is related to the changes in the entropy of the system. We show that this notion, together with the well known Cramer-Rao inequality in statistics demonstrates the validity of the second law of thermodynamics for such a nonequilibrium system.
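Written out symbolically, the definition described above amounts to the ratio below. The notation (J for the Fisher information matrix of the nonequilibrium density p, angle brackets for the average over p, and Boltzmann's constant set to one) is ours, chosen for readability rather than quoted from the paper.

```latex
% Schematic form of the entropic ("Shannon") temperature described above
T_{\mathrm{S}} \;=\; \frac{\big\langle \operatorname{tr}\nabla^{2} H(x) \big\rangle_{p}}
                          {\operatorname{tr} J(p)},
\qquad
J_{ij}(p) \;=\; \int p(x)\,
  \frac{\partial \ln p(x)}{\partial x_i}\,
  \frac{\partial \ln p(x)}{\partial x_j}\,\mathrm{d}x .
```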
NASA Astrophysics Data System (ADS)
Li, Yue; Cherkezyan, Lusik; Zhang, Di; Almassalha, Luay; Roth, Eric; Chandler, John; Bleher, Reiner; Subramanian, Hariharan; Dravid, Vinayak P.; Backman, Vadim
2017-02-01
Structural and biological origins of light scattering in cells and tissue are still poorly understood. We demonstrate how this problem might be addressed through the use of transmission electron microscopy (TEM). For biological samples, TEM image intensity is proportional to mass density, and thus proportional to refractive index (RI). By calculating the autocorrelation function (ACF) of the TEM image intensity of a thin section of cells, we essentially retain the nanoscale ACF of the 3D cellular RI distribution, given that the RI distribution is statistically isotropic. Using this nanoscale 3D RI ACF, we can simulate light scattering through biological samples and thus guide many optical techniques that quantify specific structures. In this work, we chose Partial Wave Spectroscopy (PWS) microscopy as one of the nanoscale-sensitive optical techniques. HeLa cells were prepared using a standard protocol to preserve nanoscale ultrastructure, and a 50-nm slice was sectioned for TEM imaging at 6 nm resolution. The ACF was calculated for chromatin, and the PWS mean sigma was calculated by summing the power spectral density over the visible-light frequencies of a random medium generated to match the ACF. A 1-µm slice adjacent to the 50-nm slice was sectioned for PWS measurement to guarantee identical chromatin structure. For 33 cells, we compared the PWS mean sigma calculated from TEM with the value measured directly and obtained a strong correlation of 0.69. This example indicates the great potential of using the TEM-measured RI distribution to better understand the quantification of cellular nanostructure by optical methods.
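A minimal numerical sketch of the ACF step is given below, assuming the TEM image is already loaded as a 2-D numpy array of intensities. The FFT (Wiener-Khinchin) route to the autocorrelation and the arbitrary frequency band used as a stand-in for the PWS visible-light window are our illustrative choices, not the authors' processing pipeline.

```python
import numpy as np

def intensity_acf(image: np.ndarray) -> np.ndarray:
    """Normalized autocorrelation of image intensity via the FFT
    (Wiener-Khinchin), used here as a proxy for the RI fluctuation ACF."""
    fluct = image - image.mean()
    power = np.abs(np.fft.fftn(fluct)) ** 2
    acf = np.fft.ifftn(power).real
    return np.fft.fftshift(acf) / acf.flat[0]        # zero lag normalized to 1

def band_limited_power(image: np.ndarray, kmin=0.05, kmax=0.25) -> float:
    """Schematic spectral summary: sum the power spectral density over an
    (arbitrary, illustrative) band of spatial frequencies."""
    fluct = image - image.mean()
    psd = np.abs(np.fft.fftn(fluct)) ** 2
    ky, kx = np.meshgrid(np.fft.fftfreq(image.shape[0]),
                         np.fft.fftfreq(image.shape[1]), indexing="ij")
    k = np.hypot(kx, ky)
    return psd[(k >= kmin) & (k <= kmax)].sum() / fluct.size

rng = np.random.default_rng(0)
img = rng.normal(size=(256, 256))                    # placeholder for a TEM image
print(intensity_acf(img)[128, 128], band_limited_power(img))
```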
NASA Astrophysics Data System (ADS)
Toapanta, Moisés; Mafla, Enrique; Orizaga, Antonio
2017-08-01
We analyzed the information-security problems of civil registries and identification systems worldwide, which are considered strategic. The objective is to adopt appropriate security protocols within a conceptual model of identity management for the Civil Registry of Ecuador. In this phase, the appropriate security protocols were determined for a conceptual model of identity management with Authentication, Authorization and Auditing (AAA). We used the deductive method and exploratory research to define the security protocols to be adopted in the identity model: IPsec, DNSSEC, RADIUS, SSL, TLS, IEEE 802.1X EAP and SET. A prototype was developed showing the placement of the adopted security protocols in the logical design of the technological infrastructure, based on the conceptual model for identity, authentication, authorization, and audit management. It was concluded that the adopted protocols are appropriate for a distributed database and should be directly tied to the underlying algorithms, allowing vulnerability and risk mitigation while taking into account confidentiality, integrity and availability (CIA).
Balancing the Fair Treatment of Others While Preserving Group Identity and Autonomy
Killen, Melanie; Elenbaas, Laura; Rutland, Adam
2016-01-01
Social exclusion and inclusion from groups, as well as the distribution of resources, are fundamental aspects of social life, and serve as sources of conflicts that bear on issues of fairness and equality, beginning in childhood. For the most part, research on social exclusion and allocation of resources has not focused on the issue of group membership. Yet, social exclusion from groups and the denial of resources reflect societal issues pertaining to social inequality and its counterpoint, fair treatment of others. Social inequality occurs when opportunities and resources are distributed unevenly in society, often through group norms about allocation that reflect socially defined categories of persons. This occurs at multiple levels of societal organization, from experiences of exclusion in childhood such as being left out of a play activity, to being denied access to resources as a member of a group. These situations extend to larger level experiences in the adult world concerning social exclusion from voting, for example, or participation in educational institutions. Thus, most decisions regarding social exclusion and the denial of resources involve considerations of group identity and group membership, implicitly or explicitly, which contribute to prejudice and bias, even though this has rarely been investigated in developmental science. Current research illustrating the role of group identity and autonomy regarding decision-making about social exclusion and the denial of resources is reviewed from the Social Reasoning Developmental model, one that integrates social domain theory and developmental social identity theories to investigate how children use moral, conventional, and psychological judgments to evaluate contexts reflecting group identity, group norms, and intergroup dynamics. PMID:27175034
Balancing the Fair Treatment of Others While Preserving Group Identity and Autonomy.
Killen, Melanie; Elenbaas, Laura; Rutland, Adam
2016-04-01
Social exclusion and inclusion from groups, as well as the distribution of resources, are fundamental aspects of social life, and serve as sources of conflicts that bear on issues of fairness and equality, beginning in childhood. For the most part, research on social exclusion and allocation of resources has not focused on the issue of group membership. Yet, social exclusion from groups and the denial of resources reflect societal issues pertaining to social inequality and its counterpoint, fair treatment of others. Social inequality occurs when opportunities and resources are distributed unevenly in society, often through group norms about allocation that reflect socially defined categories of persons. This occurs at multiple levels of societal organization, from experiences of exclusion in childhood such as being left out of a play activity, to being denied access to resources as a member of a group. These situations extend to larger level experiences in the adult world concerning social exclusion from voting, for example, or participation in educational institutions. Thus, most decisions regarding social exclusion and the denial of resources involve considerations of group identity and group membership, implicitly or explicitly, which contribute to prejudice and bias, even though this has rarely been investigated in developmental science. Current research illustrating the role of group identity and autonomy regarding decision-making about social exclusion and the denial of resources is reviewed from the Social Reasoning Developmental model, one that integrates social domain theory and developmental social identity theories to investigate how children use moral, conventional, and psychological judgments to evaluate contexts reflecting group identity, group norms, and intergroup dynamics.
Laser absorption of carbon fiber reinforced polymer with randomly distributed carbon fibers
NASA Astrophysics Data System (ADS)
Hu, Jun; Xu, Hebing; Li, Chao
2018-03-01
Laser processing of carbon fiber reinforced polymer (CFRP) is a non-traditional machining method which has many prospective applications. The laser absorption characteristics of CFRP are analyzed in this paper. A ray tracing model describing the interaction of the laser spot with CFRP is established. The material model contains randomly distributed carbon fibers which are generated using an improved carbon fiber placement method. It was found that CFRP has good laser absorption due to multiple reflections of the light rays in the material’s microstructure. The randomly distributed carbon fibers make the absorptivity of the light rays change randomly in the laser spot. Meanwhile, the average absorptivity fluctuation is obvious during movement of the laser. The experimental measurements agree well with the values predicted by the ray tracing model.
A stochastic Iwan-type model for joint behavior variability modeling
NASA Astrophysics Data System (ADS)
Mignolet, Marc P.; Song, Pengchao; Wang, X. Q.
2015-08-01
This paper focuses overall on the development and validation of a stochastic model to describe the dissipation and stiffness properties of a bolted joint for which experimental data is available and exhibits a large scatter. An extension of the deterministic parallel-series Iwan model for the characterization of the force-displacement behavior of joints is first carried out. This new model involves dynamic and static coefficients of friction differing from each other and a broadly defined distribution of Jenkins elements. Its applicability is next investigated using the experimental data, i.e. stiffness and dissipation measurements obtained in harmonic testing of 9 nominally identical bolted joints. The model is found to provide a very good fit of the experimental data for each bolted joint notwithstanding the significant variability of their behavior. This finding suggests that this variability can be simulated through the randomization of only the parameters of the proposed Iwan-type model. The distribution of these parameters is next selected based on maximum entropy concepts and their corresponding parameters, i.e. the hyperparameters of the model, are identified using a maximum likelihood strategy. Proceeding with a Monte Carlo simulation of this stochastic Iwan model demonstrates that the experimental data fits well within the uncertainty band corresponding to the 5th and 95th percentiles of the model predictions which well supports the adequacy of the modeling effort.
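To make the parallel-series construction concrete, here is a minimal deterministic sketch of an Iwan-type bank: N Jenkins (spring-slider) elements in parallel, each with its own slip threshold. The quasi-static update rule is standard, but the class layout, parameter values, and the uniform threshold distribution are our illustrative choices rather than the calibrated stochastic model of the paper.

```python
import numpy as np

class IwanBank:
    """Parallel bank of Jenkins (elastic-perfectly-plastic slider) elements."""

    def __init__(self, k: float, slip_forces: np.ndarray):
        self.k = k                                  # stiffness of each element
        self.slip = np.asarray(slip_forces, float)  # slider strengths
        self.x_slider = np.zeros_like(self.slip)    # current slider positions

    def force(self, x: float) -> float:
        """Total restoring force at joint displacement x (quasi-static update)."""
        f = self.k * (x - self.x_slider)
        stuck = np.abs(f) <= self.slip
        # sliding elements: slider moves so the element force sits at +/- slip
        self.x_slider = np.where(stuck, self.x_slider,
                                 x - np.sign(f) * self.slip / self.k)
        f = np.where(stuck, f, np.sign(f) * self.slip)
        return float(f.sum())

rng = np.random.default_rng(0)
bank = IwanBank(k=1.0e4, slip_forces=rng.uniform(1.0, 50.0, size=200))
cycle = np.concatenate([np.linspace(0, 2e-3, 50),
                        np.linspace(2e-3, -2e-3, 100),
                        np.linspace(-2e-3, 2e-3, 100)])
loop = [bank.force(x) for x in cycle]               # traces a hysteresis loop
print(min(loop), max(loop))
```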
Form drag in rivers due to small-scale natural topographic features: 2. Irregular sequences
Kean, J.W.; Smith, J.D.
2006-01-01
The size, shape, and spacing of small-scale topographic features found on the boundaries of natural streams, rivers, and floodplains can be quite variable. Consequently, a procedure for determining the form drag on irregular sequences of different-sized topographic features is essential for calculating near-boundary flows and sediment transport. A method for carrying out such calculations is developed in this paper. This method builds on the work of Kean and Smith (2006), which describes the flow field for the simpler case of a regular sequence of identical topographic features. Both approaches model topographic features as two-dimensional elements with Gaussian-shaped cross sections defined in terms of three parameters. Field measurements of bank topography are used to show that (1) the magnitude of these shape parameters can vary greatly between adjacent topographic features and (2) the variability of these shape parameters follows a lognormal distribution. Simulations using an irregular set of topographic roughness elements show that the drag on an individual element is primarily controlled by the size and shape of the feature immediately upstream and that the spatial average of the boundary shear stress over a large set of randomly ordered elements is relatively insensitive to the sequence of the elements. In addition, a method to transform the topography of irregular surfaces into an equivalently rough surface of regularly spaced, identical topographic elements also is given. The methods described in this paper can be used to improve predictions of flow resistance in rivers as well as quantify bank roughness.
Cosmic Rays in Intermittent Magnetic Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shukurov, Anvar; Seta, Amit; Bushby, Paul J.
The propagation of cosmic rays in turbulent magnetic fields is a diffusive process driven by the scattering of the charged particles by random magnetic fluctuations. Such fields are usually highly intermittent, consisting of intense magnetic filaments and ribbons surrounded by weaker, unstructured fluctuations. Studies of cosmic-ray propagation have largely overlooked intermittency, instead adopting Gaussian random magnetic fields. Using test particle simulations, we calculate cosmic-ray diffusivity in intermittent, dynamo-generated magnetic fields. The results are compared with those obtained from non-intermittent magnetic fields having identical power spectra. The presence of magnetic intermittency significantly enhances cosmic-ray diffusion over a wide range of particle energies. We demonstrate that the results can be interpreted in terms of a correlated random walk.
Random Matrix Approach for Primal-Dual Portfolio Optimization Problems
NASA Astrophysics Data System (ADS)
Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi
2017-12-01
In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
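A small numerical rendering of the primal problem may help fix ideas: minimize the empirical investment risk subject to a budget constraint and an investment concentration constraint, with assets of identical true variance. The sample sizes, the concentration level tau, and the use of scipy's SLSQP solver are our illustrative choices; the paper itself proceeds analytically via Lagrange multipliers and random matrix theory.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, T, tau = 50, 200, 1.5
returns = rng.normal(0.0, 1.0, size=(T, N))        # identical true variances
C = np.cov(returns, rowvar=False)                   # sample covariance matrix

def risk(w):
    """Empirical investment risk, one half of w' C w."""
    return 0.5 * w @ C @ w

cons = [{"type": "eq", "fun": lambda w: w.sum() - N},        # budget
        {"type": "eq", "fun": lambda w: w @ w - N * tau}]    # concentration
w0 = np.ones(N)                                     # equal-weight starting point
res = minimize(risk, w0, constraints=cons, method="SLSQP")
print(res.success, risk(res.x) / N)                 # minimal risk per asset
```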
Random walk to a nonergodic equilibrium concept
NASA Astrophysics Data System (ADS)
Bel, G.; Barkai, E.
2006-01-01
Random walk models, such as the trap model, continuous time random walks, and comb models, exhibit weak ergodicity breaking, when the average waiting time is infinite. The open question is, what statistical mechanical theory replaces the canonical Boltzmann-Gibbs theory for such systems? In this paper a nonergodic equilibrium concept is investigated, for a continuous time random walk model in a potential field. In particular we show that in the nonergodic phase the distribution of the occupation time of the particle in a finite region of space approaches U- or W-shaped distributions related to the arcsine law. We show that when conditions of detailed balance are applied, these distributions depend on the partition function of the problem, thus establishing a relation between the nonergodic dynamics and canonical statistical mechanics. In the ergodic phase the distribution function of the occupation times approaches a δ function centered on the value predicted based on standard Boltzmann-Gibbs statistics. The relation of our work to single-molecule experiments is briefly discussed.
Anisotropy Induced Switching Field Distribution in High-Density Patterned Media
NASA Astrophysics Data System (ADS)
Talapatra, A.; Mohanty, J.
We present here a micromagnetic study of the variation of the switching field distribution (SFD) in high-density patterned media as a function of the magnetic anisotropy of the system. We consider the manifold effects of magnetic anisotropy in terms of its magnitude, tilt of the anisotropy axis, and random arrangements of magnetic islands with random anisotropy values. Our calculation shows that a reduction in anisotropy causes a linear decrease in coercivity, because the anisotropy energy tends to align the spins along a preferred crystallographic direction. A tilt of the anisotropy axis reduces the squareness of the hysteresis loop and hence facilitates switching. Finally, experimental challenges such as the lithographic distribution of magnetic islands, their orientation, and the creation of defects motivate treating the anisotropy as randomly distributed, with random repetitions. We show that the range of anisotropy values and the number of bits with different anisotropy play a key role in the SFD, whereas the positions of the bits and their repetitions do not contribute considerably.
NASA Technical Reports Server (NTRS)
Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.
1987-01-01
Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
Effects of Age Expectations on Oncology Social Workers' Clinical Judgment
ERIC Educational Resources Information Center
Conlon, Annemarie; Choi, Namkee G.
2014-01-01
Objective: This study examined the influence of oncology social workers' expectations regarding aging (ERA) and ERA with cancer (ERAC) on their clinical judgment. Methods: Oncology social workers (N = 322) were randomly assigned to one of four vignettes describing a patient with lung cancer. The vignettes were identical except for the patient's age…
Effect of varying light intensity on welfare indices of broiler chickens grown to heavy weights
USDA-ARS?s Scientific Manuscript database
The effects of varying light intensity on ocular, immune, fear, and leg health of broiler chickens grown to heavy weights under environmentally controlled conditions were evaluated. Four identical trials were conducted with two replications per trial. In each trial, 600 Ross 308 chicks were randomly ...
ERIC Educational Resources Information Center
Kaplan, S.; Heiligenstein, J.; West, S.; Busner, J.; Harder, D.; Dittmann, R.; Casat, C.; Wernicke, J. F.
2004-01-01
Objective: To compare the safety and efficacy of atomoxetine, a selective inhibitor of the norepinephrine transporter, versus placebo in Attention-Deficit/Hyperactivity Disorder (ADHD) patients with comorbid Oppositional Defiant Disorder (ODD). Methods: A subset analysis of 98 children from two identical, multi-site, double-blind, randomized,…
Synchronization in oscillator networks with delayed coupling: a stability criterion.
Earl, Matthew G; Strogatz, Steven H
2003-03-01
We derive a stability criterion for the synchronous state in networks of identical phase oscillators with delayed coupling. The criterion applies to any network (whether regular or random, low dimensional or high dimensional, directed or undirected) in which each oscillator receives delayed signals from k others, where k is uniform for all oscillators.
Serendipity in Teaching and Learning: The Importance of Critical Moments
ERIC Educational Resources Information Center
Giordano, Peter J.
2010-01-01
Can professors, through their casual, random remarks to students, alter lives and transform identities? The answer, based on two exploratory studies described in this article, appears to be yes. Drawing from constructive-developmental ideas of student maturation and from features of chaos theory as applied to the complex dynamic system of…
Hero/Heroine Modeling for Puerto Rican Adolescents: A Preventive Mental Health Intervention.
ERIC Educational Resources Information Center
Malgady, Robert G.; And Others
1990-01-01
Developed hero/heroine intervention based on adult Puerto Rican role models to foster ethnic identity, self-concept, and adaptive coping behavior. Screened 90 Puerto Rican eighth and ninth graders for presenting behavior problems in school and randomly assigned them to intervention or control groups. After 19 sessions, intervention significantly…
Personal Homepage Construction as an Expression of Social Development
ERIC Educational Resources Information Center
Schmitt, Kelly L.; Dayanim, Shoshana; Matthias, Stacey
2008-01-01
In 2 studies, the authors explored preadolescent and adolescent use of personal homepages in relation to mastery and identity formation. In Study 1, the authors attempted to determine the prevalence of personal homepage and online journal (blog) construction among a random sample (N = 500) of preadolescents and adolescents. Adolescents were more…
Collaborative Strategic Reading: Replications with Consideration of the Role of Fidelity
ERIC Educational Resources Information Center
Vaughn, Sharon; Roberts, Greg; Reutebuch, Colleen
2013-01-01
Collaborative Strategic Reading (CSR) is a multicomponent reading intervention aimed at improving students' text comprehension. Two 1-year randomized controlled trials were conducted to determine the efficacy of CSR with seventh and eighth grade students. The Year 2 replication study was identical to the original Year 1 study except that the…
High resolution identity testing of inactivated poliovirus vaccines
Mee, Edward T.; Minor, Philip D.; Martin, Javier
2015-01-01
Background Definitive identification of poliovirus strains in vaccines is essential for quality control, particularly where multiple wild-type and Sabin strains are produced in the same facility. Sequence-based identification provides the ultimate in identity testing and would offer several advantages over serological methods. Methods We employed random RT-PCR and high throughput sequencing to recover full-length genome sequences from monovalent and trivalent poliovirus vaccine products at various stages of the manufacturing process. Results All expected strains were detected in previously characterised products and the method permitted identification of strains comprising as little as 0.1% of sequence reads. Highly similar Mahoney and Sabin 1 strains were readily discriminated on the basis of specific variant positions. Analysis of a product known to contain incorrect strains demonstrated that the method correctly identified the contaminants. Conclusion Random RT-PCR and shotgun sequencing provided high resolution identification of vaccine components. In addition to the recovery of full-length genome sequences, the method could also be easily adapted to the characterisation of minor variant frequencies and distinction of closely related products on the basis of distinguishing consensus and low frequency polymorphisms. PMID:26049003
NASA Astrophysics Data System (ADS)
Liland, Kristian Hovde; Snipen, Lars
When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
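The exact probability mass function is not reproduced in the abstract, so the sketch below is purely a simulation companion: it builds the empirical distribution of gaps between non-overlapping successes placed at random on a discrete interval, plus a naive Monte Carlo test for over-representation of short distances. All function names, the gap convention, and the test statistic are our assumptions, not the R package's interface.

```python
import numpy as np

def gap_distribution(length: int, n_success: int, n_sim: int = 20000, seed: int = 0):
    """Empirical distribution of gaps between successes placed uniformly at
    random (without replacement) on positions 0..length-1."""
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(n_sim):
        pos = np.sort(rng.choice(length, size=n_success, replace=False))
        gaps.extend(np.diff(pos) - 1)               # empty slots between successes
    return np.bincount(gaps, minlength=length) / len(gaps)

def short_gap_pvalue(observed_gaps, length, n_success, threshold=2,
                     n_sim=5000, seed=1):
    """Monte Carlo p-value for seeing at least as many short gaps as observed."""
    rng = np.random.default_rng(seed)
    obs = np.sum(np.asarray(observed_gaps) <= threshold)
    count = 0
    for _ in range(n_sim):
        pos = np.sort(rng.choice(length, size=n_success, replace=False))
        if np.sum(np.diff(pos) - 1 <= threshold) >= obs:
            count += 1
    return (count + 1) / (n_sim + 1)

print(gap_distribution(100, 5)[:5], short_gap_pvalue([0, 1, 40, 50], 100, 5))
```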
Decisions with Uncertain Consequences—A Total Ordering on Loss-Distributions
König, Sandra; Schauer, Stefan
2016-01-01
Decisions are often based on imprecise, uncertain or vague information. Likewise, the consequences of an action are often equally unpredictable, thus putting the decision maker into a twofold jeopardy. Assuming that the effects of an action can be modeled by a random variable, then the decision problem boils down to comparing different effects (random variables) by comparing their distribution functions. Although the full space of probability distributions cannot be ordered, a properly restricted subset of distributions can be totally ordered in a practically meaningful way. We call these loss-distributions, since they provide a substitute for the concept of loss-functions in decision theory. This article introduces the theory behind the necessary restrictions and the hereby constructible total ordering on random loss variables, which enables decisions under uncertainty of consequences. Using data obtained from simulations, we demonstrate the practical applicability of our approach. PMID:28030572
Biological monitoring of environmental quality: The use of developmental instability
Freeman, D.C.; Emlen, J.M.; Graham, J.H.; Hough, R. A.; Bannon, T.A.
1994-01-01
Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails.
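For readers who want to see the shape being described, the double Pareto-lognormal distribution has a convenient stochastic representation: exponentiate a normal variate plus the difference of two independent exponentials, which yields a lognormal body with power-law upper and lower tails. The sketch below uses that representation; the parameter values are arbitrary illustrative choices.

```python
import numpy as np

def dpln_samples(alpha, beta, nu, tau, size, seed=0):
    """Double Pareto-lognormal samples via exp(N(nu, tau^2) + E1/alpha - E2/beta),
    i.e. a lognormal body with power-law upper (alpha) and lower (beta) tails."""
    rng = np.random.default_rng(seed)
    return np.exp(rng.normal(nu, tau, size)
                  + rng.exponential(1.0 / alpha, size)
                  - rng.exponential(1.0 / beta, size))

x = dpln_samples(alpha=3.0, beta=2.0, nu=0.0, tau=0.5, size=100_000)
print(np.median(x), np.quantile(x, 0.99))  # heavy upper tail vs. a pure lognormal
```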
Two Universality Properties Associated with the Monkey Model of Zipf's Law
NASA Astrophysics Data System (ADS)
Perline, Richard; Perline, Ron
2016-03-01
The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale, the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
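The first universality property is easy to probe numerically. The sketch below enumerates monkey-typed words up to a length cutoff, assigns letter probabilities from the spacings of a random division of [0,1], and fits a global log-log slope of probability against rank; the alphabet size, cutoff, space-key probability, and crude least-squares fit are our illustrative choices.

```python
import itertools
import numpy as np

def monkey_rank_slope(n_letters=6, max_len=6, p_space=0.2, seed=0):
    """Word probabilities in the monkey model: letter probabilities are the
    spacings of a random division of [0, 1]; a space key ends each word."""
    rng = np.random.default_rng(seed)
    cuts = np.sort(rng.uniform(size=n_letters - 1))
    letter_p = np.diff(np.concatenate(([0.0], cuts, [1.0]))) * (1 - p_space)
    probs = []
    for length in range(1, max_len + 1):
        for word in itertools.product(range(n_letters), repeat=length):
            probs.append(p_space * np.prod(letter_p[list(word)]))
    probs = np.sort(np.asarray(probs))[::-1]
    rank = np.arange(1, len(probs) + 1)
    # global log-log fit of probability vs. rank; the theory says the exponent
    # drifts toward -1 as the alphabet grows
    return np.polyfit(np.log(rank), np.log(probs), 1)[0]

print(monkey_rank_slope())
```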
On the genealogy of branching random walks and of directed polymers
NASA Astrophysics Data System (ADS)
Derrida, Bernard; Mottishaw, Peter
2016-08-01
It is well known that the mean-field theory of directed polymers in a random medium exhibits replica symmetry breaking with a distribution of overlaps which consists of two delta functions. Here we show that the leading finite-size correction to this distribution of overlaps has a universal character which can be computed explicitly. Our results can also be interpreted as genealogical properties of branching Brownian motion or of branching random walks.
All optical mode controllable Er-doped random fiber laser with distributed Bragg gratings.
Zhang, W L; Ma, R; Tang, C H; Rao, Y J; Zeng, X P; Yang, Z J; Wang, Z N; Gong, Y; Wang, Y S
2015-07-01
An all-optical method to control the lasing modes of Er-doped random fiber lasers (RFLs) is proposed and demonstrated. In the RFL, an Er-doped fiber (EDF) recorded with randomly separated fiber Bragg gratings (FBGs) is used as the gain medium and randomly distributed reflectors, as well as the controllable element. By combining random feedback of the FBG array and Fresnel feedback of a cleaved fiber end, multi-mode coherent random lasing is obtained with a threshold of 14 mW and power efficiency of 14.4%. Moreover, a laterally-injected control light is used to induce local gain perturbation, providing additional gain for certain random resonance modes. As a result, active mode selection of the RFL is realized by changing the locations of the laser cavity exposed to the control light.
Random Amplification and Pyrosequencing for Identification of Novel Viral Genome Sequences
Hang, Jun; Forshey, Brett M.; Kochel, Tadeusz J.; Li, Tao; Solórzano, Víctor Fiestas; Halsey, Eric S.; Kuschner, Robert A.
2012-01-01
ssRNA viruses have high levels of genomic divergence, which can lead to difficulty in genomic characterization of new viruses using traditional PCR amplification and sequencing methods. In this study, random reverse transcription, anchored random PCR amplification, and high-throughput pyrosequencing were used to identify orthobunyavirus sequences from total RNA extracted from viral cultures of acute febrile illness specimens. Draft genome sequence for the orthobunyavirus L segment was assembled and sequentially extended using de novo assembly contigs from pyrosequencing reads and orthobunyavirus sequences in GenBank as guidance. Accuracy and continuous coverage were achieved by mapping all reads to the L segment draft sequence. Subsequently, RT-PCR and Sanger sequencing were used to complete the genome sequence. The complete L segment was found to be 6936 bases in length, encoding a 2248-aa putative RNA polymerase. The identified L segment was distinct from previously published South American orthobunyaviruses, sharing 63% and 54% identity at the nucleotide and amino acid level, respectively, with the complete Oropouche virus L segment and 73% and 81% identity at the nucleotide and amino acid level, respectively, with a partial Caraparu virus L segment. The result demonstrated the effectiveness of a sequence-independent amplification and next-generation sequencing approach for obtaining complete viral genomes from total nucleic acid extracts and its use in pathogen discovery. PMID:22468136
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
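The construction matching this description is the standard two-variate transform of independent standard normals; the Python rendering below (rather than the FORTRAN of the original report) and the parameter names are ours.

```python
import numpy as np

def bivariate_normal_pairs(mu1, mu2, sigma1, sigma2, rho, size, rng=None):
    """Generate `size` (X, Y) pairs with the requested means, standard
    deviations, and correlation coefficient rho, from independent N(0,1)s."""
    rng = np.random.default_rng() if rng is None else rng
    z1 = rng.standard_normal(size)
    z2 = rng.standard_normal(size)
    x = mu1 + sigma1 * z1
    y = mu2 + sigma2 * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
    return x, y

x, y = bivariate_normal_pairs(1.0, -2.0, 2.0, 0.5, 0.7, size=100_000,
                              rng=np.random.default_rng(42))
print(np.corrcoef(x, y)[0, 1])   # should be close to the requested 0.7
```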
Transcription, intercellular variability and correlated random walk.
Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar
2008-11-01
We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
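A quick way to see the scaled-Beta claim is to simulate the on/off switching directly and let the (deterministic, normalized) transcript level relax toward 1 when transcription is on and toward 0 when it is off. The Euler step, the rate values, and the moment comparison below are our illustrative choices, not the paper's derivation.

```python
import numpy as np

def telegraph_mrna_samples(k_on=0.5, k_off=1.0, d=1.0, dt=0.01,
                           t_end=2000.0, burn_in=100.0, seed=0):
    """Euler simulation of x' = d*(I_on - x), with I_on switching 0 <-> 1 at
    constant rates k_on (off->on) and k_off (on->off); x is the transcript
    level scaled to [0, 1]. Returns approximately stationary samples of x."""
    rng = np.random.default_rng(seed)
    x, on = 0.0, False
    samples = []
    for step in range(int(t_end / dt)):
        rate = k_off if on else k_on
        if rng.random() < rate * dt:      # switch the promoter state
            on = not on
        x += d * (float(on) - x) * dt     # deterministic relaxation of mRNA
        if step * dt > burn_in:
            samples.append(x)
    return np.asarray(samples)

x = telegraph_mrna_samples()
a, b = 0.5 / 1.0, 1.0 / 1.0               # compare with Beta(k_on/d, k_off/d)
print(x.mean(), a / (a + b))              # first moments should roughly agree
print(x.var(), a * b / ((a + b) ** 2 * (a + b + 1)))
```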
Molecular Survey of Hepatozoon canis in Red Foxes (Vulpes vulpes) from Romania.
Imre, Mirela; Dudu, Andreea; Ilie, Marius S; Morariu, Sorin; Imre, Kálmán; Dărăbuş, Gheorghe
2015-08-01
Blood samples of 119 red foxes, originating from 44 hunting grounds of 3 western counties (Arad, Hunedoara, and Timiş) of Romania, have been examined for the presence of Hepatozoon canis infection using the conventional polymerase chain reaction (PCR) of the fragment of 18S rRNA gene. Overall, 15 (12.6%) samples were found to be PCR-positive. Of the sampled hunting grounds, 29.5% (13/44) were found positive. Positive samples were recorded in all screened counties with the prevalence of 14.8% (9/61) in Arad, 9.8% (5/51) in Timiş, and 14.3% (1/7) in Hunedoara, respectively. No correlation was found (P > 0.05) between H. canis positivity and gender or territorial distribution of the infection. To confirm PCR results, 9 randomly selected amplicons were sequenced. The obtained sequences were identical to each other, confirmed the results of the conventional PCR, and showed 98-100% homology to other H. canis sequences. The results of the current survey support the role of red foxes as sylvatic reservoirs of H. canis in Romania.
Multilevel modeling of single-case data: A comparison of maximum likelihood and Bayesian estimation.
Moeyaert, Mariola; Rindskopf, David; Onghena, Patrick; Van den Noortgate, Wim
2017-12-01
The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). By increasing the number of participants to 5 or 7, the relative bias is close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates for the fixed effects and random effects were obtained, even when only 3 participants were included. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Distribution of Candida albicans genotypes among family members
NASA Technical Reports Server (NTRS)
Mehta, S. K.; Stevens, D. A.; Mishra, S. K.; Feroze, F.; Pierson, D. L.
1999-01-01
Thirty-three families (71 subjects) were screened for the presence of Candida albicans in mouthwash or stool specimens; 12 families (28 subjects) were culture-positive for this yeast. An enrichment procedure provided a twofold increase in the recovery of C. albicans from mouthwash specimens. Nine of the twelve culture-positive families had two positive members each, two families had three positive members each, and one family had four positive members. Genetic profiles were obtained by three methods: pulsed-field gel electrophoresis; restriction endonuclease analysis, and random amplification of polymorphic DNA analysis. DNA fingerprinting of C. albicans isolated from one body site three consecutive times revealed that each of the 12 families carried a distinct genotype. No two families shared the same strain, and two or more members of a family commonly shared the same strain. Intrafamily genotypic identity (i.e., each member within the family harbored the same strain) was demonstrated in six families. Genotypes of isolates from husband and wife differed from one another in five families. All three methods were satisfactory in determining genotypes; however, we concluded that restriction endonuclease analysis provided adequate resolving power.
Organic doping of rotated double layer graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, Lijin; Jaiswal, Manu, E-mail: manu.jaiswal@iitm.ac.in
2016-05-06
Charge transfer techniques have been extensively used as knobs to tune the electronic properties of two-dimensional systems, such as for the modulation of the conductivity/mobility of single-layer graphene and for opening the bandgap in bilayer graphene. The charge injected into the graphene layer shifts the Fermi level away from the minimum density of states point (Dirac point). In this work, we study charge transfer in rotated double-layer graphene achieved by the use of the organic dopant tetracyanoquinodimethane. Naturally occurring bilayer graphene has a well-defined A-B stacking, whereas in a rotated double layer the two graphene layers are randomly stacked with different rotational angles. This rotation is expected to significantly alter the interlayer interaction. Double-layer samples are prepared using layer-by-layer assembly of chemical vapor deposited single-layer graphene and are identified by a characteristic resonance in the Raman spectrum. The charge transfer and the distribution of charges between the two graphene layers are studied using Raman spectroscopy, and the results are compared with those for single-layer and A-B stacked bilayer graphene doped under identical conditions.
Prediction of hourly PM2.5 using a space-time support vector regression model
NASA Astrophysics Data System (ADS)
Yang, Wentao; Deng, Min; Xu, Feng; Wang, Hang
2018-05-01
Real-time air quality prediction has been an active field of research in atmospheric environmental science. The existing methods of machine learning are widely used to predict pollutant concentrations because of their enhanced ability to handle complex non-linear relationships. However, because pollutant concentration data, as typical geospatial data, also exhibit spatial heterogeneity and spatial dependence, they may violate the assumptions of independent and identically distributed random variables in most of the machine learning methods. As a result, a space-time support vector regression model is proposed to predict hourly PM2.5 concentrations. First, to address spatial heterogeneity, spatial clustering is executed to divide the study area into several homogeneous or quasi-homogeneous subareas. To handle spatial dependence, a Gauss vector weight function is then developed to determine spatial autocorrelation variables as part of the input features. Finally, a local support vector regression model with spatial autocorrelation variables is established for each subarea. Experimental data on PM2.5 concentrations in Beijing are used to verify whether the results of the proposed model are superior to those of other methods.
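A minimal sketch of the general recipe (spatial clustering into subareas, a Gaussian-distance-weighted neighbour feature, one SVR per subarea) is given below; the array names, the single-time-slice simplification, and the hyperparameters are assumptions, not the authors' implementation.

```python
# Minimal sketch of the general idea (not the authors' code): cluster stations
# into quasi-homogeneous subareas, add a Gaussian-distance-weighted average of
# the other stations' lagged PM2.5 as a spatial-autocorrelation feature, and
# fit one SVR per subarea. `coords`, `X_time`, `y_lag`, `y` are assumed arrays
# for a single time slice.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

def gauss_weighted_neighbour_feature(coords, y_lag, bandwidth):
    """Gaussian-weighted mean of the other stations' lagged concentrations."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.exp(-(d / bandwidth) ** 2)
    np.fill_diagonal(w, 0.0)
    return (w @ y_lag) / w.sum(axis=1)

def fit_space_time_svr(coords, X_time, y_lag, y, n_subareas=3, bandwidth=10.0):
    labels = KMeans(n_clusters=n_subareas, n_init=10).fit_predict(coords)
    spatial = gauss_weighted_neighbour_feature(coords, y_lag, bandwidth)
    models = {}
    for c in range(n_subareas):
        mask = labels == c
        features = np.column_stack([X_time[mask], spatial[mask]])
        models[c] = SVR(kernel="rbf", C=10.0).fit(features, y[mask])
    return labels, models
```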
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
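A short sketch of the two-step recipe, under assumed choices of target spectrum and amplitude distribution: colour a white Gaussian field to the desired power spectral density in Fourier space, then map it point-wise through the Gaussian CDF and the inverse CDF of the desired amplitude distribution. As the abstract notes, the point-wise mapping slightly distorts the spectrum, so the result is approximate.

```python
# Sketch of the two-step recipe with an assumed Lorentzian-like spectrum and
# exponentially distributed amplitudes (not the authors' exact parameters).
import numpy as np
from scipy import stats

def simulate_field(n, psd_func, target_dist, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal((n, n))
    f = np.fft.fftfreq(n)
    k = np.sqrt(f[:, None] ** 2 + f[None, :] ** 2)
    colored = np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd_func(k))))
    colored = (colored - colored.mean()) / colored.std()   # unit-variance Gaussian field
    u = stats.norm.cdf(colored)                             # Gaussian -> uniform
    return target_dist.ppf(u)                               # uniform -> target amplitudes

field = simulate_field(256, lambda k: 1.0 / (1.0 + (k / 0.05) ** 2),
                       stats.expon(scale=1.0))
```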
Theoretical size distribution of fossil taxa: analysis of a null model
Reed, William J; Hughes, Barry D
2007-01-01
Background This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249
1989-08-01
Random variables from the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -λ ln(U). Random variables from the conditional Weibull distribution are likewise generated by the inverse transform method, using the conditional survival function exp{-[(x+s-γ)/η]^β + [(x-γ)/η]^β}. Normal random variables are generated using a standard normal transformation together with the inverse transform method. (Appendix B: distributions supported by the model.)
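A hedged Python rendering of these inverse-transform samplers is given below; the symbol names (scale lam for the exponential; location gamma_, scale eta, shape beta for the Weibull) are inferred from the garbled source and may not match the original report.

```python
# Hedged reconstruction of the appendix's samplers; parameter names are inferred
# (exponential scale lam; Weibull location gamma_, scale eta, shape beta) and
# may not match the original report. x must satisfy x >= gamma_.
import numpy as np

rng = np.random.default_rng()

def conditional_exponential(lam, size=1):
    """Residual life of an exponential(scale=lam); memoryless, so unconditional."""
    u = rng.random(size)
    return -lam * np.log(1.0 - u)

def conditional_weibull(x, gamma_, eta, beta, size=1):
    """Additional life s given survival to x for a three-parameter Weibull."""
    u = rng.random(size)
    base = ((x - gamma_) / eta) ** beta
    return gamma_ - x + eta * (base - np.log(1.0 - u)) ** (1.0 / beta)
```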
Empirical scaling of the length of the longest increasing subsequences of random walks
NASA Astrophysics Data System (ADS)
Mendonça, J. Ricardo G.
2017-02-01
We provide Monte Carlo estimates of the scaling of the length L_n of the longest increasing subsequences of n-step random walks for several different distributions of step lengths, short and heavy-tailed. Our simulations indicate that, barring possible logarithmic corrections, L_n ∼ n^θ with the leading scaling exponent 0.60 ≲ θ ≲ 0.69 for the heavy-tailed distributions of step lengths examined, with values increasing as the distribution becomes more heavy-tailed, and θ ≃ 0.57 for distributions of finite variance, irrespective of the particular distribution. The results are consistent with existing rigorous bounds for θ, although in a somewhat surprising manner. For random walks with step lengths of finite variance, we conjecture that the correct asymptotic behavior of L_n is given by √n ln n, and we also propose the form of the subleading asymptotics. The distribution of L_n was found to follow a simple scaling form with scaling functions that vary with θ; accordingly, these scaling functions seem to be universal when the step lengths are of finite variance. The nature of this scaling remains unclear, since we lack a working model, microscopic or hydrodynamic, for the behavior of the length of the longest increasing subsequences of random walks.
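The following Monte Carlo sketch illustrates the quantity being estimated: the longest increasing subsequence length of an n-step walk, computed by patience sorting, with an effective exponent read off a log-log fit. Walk counts and sizes are illustrative and far smaller than in the study, so finite-size bias is expected.

```python
# Monte Carlo sketch (illustrative sizes, far smaller than in the study):
# L_n is computed by patience sorting and the effective exponent is read off
# a log-log fit over a short range of n.
import numpy as np
from bisect import bisect_left

rng = np.random.default_rng(1)

def lis_length(seq):
    """Length of the longest strictly increasing subsequence, O(n log n)."""
    tails = []
    for value in seq:
        i = bisect_left(tails, value)
        if i == len(tails):
            tails.append(value)
        else:
            tails[i] = value
    return len(tails)

def mean_lis(n, n_walks=200, heavy_tailed=False):
    total = 0
    for _ in range(n_walks):
        steps = rng.standard_cauchy(n) if heavy_tailed else rng.standard_normal(n)
        total += lis_length(np.cumsum(steps))
    return total / n_walks

ns = np.array([250, 500, 1000, 2000])
L = np.array([mean_lis(n) for n in ns])
theta_eff = np.polyfit(np.log(ns), np.log(L), 1)[0]
# Effective exponent only; the paper's theta ~ 0.57 for finite variance refers
# to much larger n, so some finite-size bias is expected here.
print(theta_eff)
```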
Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M
2011-03-01
The study evaluated the power of the randomized placebo-phase design (RPPD), a new design of randomized clinical trials (RCTs), compared with the traditional parallel-groups design, assuming various response time distributions. In the RPPD, at some point, all subjects receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed across response-time distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel-groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution. The RPPD requires more subjects to achieve a similar power to the parallel-groups design. Copyright © 2011 Elsevier Inc. All rights reserved.
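The sketch below is a simplified stand-in for such a power simulation, not the authors' R program: it uses the quoted median response times (355 d placebo, 42 d drug), exponential response-time distributions, a plain two-arm parallel design, and a Mann-Whitney test in place of the paper's analysis.

```python
# Simplified stand-in for the power simulation (the RPPD design, censoring and
# the paper's analysis are not reproduced): two parallel arms with exponential
# response times, medians 355 d (placebo) and 42 d (drug), power estimated as
# the rejection rate of a one-sided Mann-Whitney test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def power_parallel(n_per_arm, n_sim=2000, alpha=0.05):
    scale_placebo = 355 / np.log(2)       # exponential scale from the median
    scale_drug = 42 / np.log(2)
    rejections = 0
    for _ in range(n_sim):
        t_placebo = rng.exponential(scale_placebo, n_per_arm)
        t_drug = rng.exponential(scale_drug, n_per_arm)
        p = stats.mannwhitneyu(t_drug, t_placebo, alternative="less").pvalue
        rejections += p < alpha
    return rejections / n_sim

print(power_parallel(10))   # estimated power with 10 subjects per arm
```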
Wills, C A; Beaupre, S J
2000-01-01
Most reptiles maintain their body temperatures within normal functional ranges through behavioral thermoregulation. Under some circumstances, thermoregulation may be a time-consuming activity, and thermoregulatory needs may impose significant constraints on the activities of ectotherms. A necessary (but not sufficient) condition for demonstrating thermoregulation is a difference between observed body temperature distributions and available operative temperature distributions. We examined operative and body temperature distributions of the timber rattlesnake (Crotalus horridus) for evidence of thermoregulation. Specifically, we compared the distribution of available operative temperatures in the environment to snake body temperatures during August and September. Operative temperatures were measured using 48 physical models that were randomly deployed in the environment and connected to a Campbell CR-21X data logger. Body temperatures (n=1,803) were recorded from 12 radiotagged snakes using temperature-sensitive telemetry. Separate randomization tests were conducted for each hour of day within each month. Actual body temperature distributions differed significantly from operative temperature distributions at most time points considered. Thus, C. horridus exhibits a necessary (but not sufficient) condition for demonstrating thermoregulation. However, unlike some desert ectotherms, we found no compelling evidence for thermal constraints on surface activity. Randomization may prove to be a powerful technique for drawing inferences about thermoregulation without reliance on studies of laboratory thermal preference.
TAILORING A FRUIT AND VEGETABLE INTERVENTION ON ETHNIC IDENTITY: RESULTS OF A RANDOMIZED STUDY
Resnicow, Ken; Davis, Rachel; Zhang, Nanhua; Saunders, Ed; Strecher, Victor; Tolsma, Dennis; Calvi, Josephine; Alexander, Gwen; Anderson, Julia; Wiese, Cheryl; Cross, William
2009-01-01
Objective Many targeted health interventions have been developed and tested with African American (AA) populations; however, AAs are a highly heterogeneous group. One characteristic that varies across AAs is Ethnic Identity (EI). Despite the recognition that AAs are heterogeneous with regard to EI, little research has been conducted on how to incorporate EI into the design of health messages and programs. Design This randomized trial tested whether tailoring a print-based fruit and vegetable (F & V) intervention based on individual EI would enhance program impact beyond that of social cognitive tailoring alone. AA adults were recruited from two integrated healthcare delivery systems, one based in the Detroit Metro area and the other in the Atlanta Metro area, and then randomized to receive three newsletters focused on F & V behavior change over three months. One set of newsletters was tailored only on demographic, behavioral, and social cognitive variables (control condition) whereas the other (experimental condition) was additionally tailored on EI. Main Outcome Measures The primary outcome for the study was F & V intake, which was assessed at baseline and three months later using the composite of two brief self-report frequency measures. Results A total of 560 eligible participants were enrolled, of which 468 provided complete 3-month follow-up data. The experimental group increased their daily mean F & V intake by 1.1 servings compared to .8 servings in the control group (p = .13). Several variables were found to interact with intervention group. For instance, Afrocentric experimental group participants showed a 1.4 increase in F & V servings per day compared to a .43 servings per day increase among Afrocentric controls (p < .05). Conclusions Although the overall between-group effects were not significant, this study confirms that AAs are a highly diverse population and that tailoring dietary messages on ethnic identity may improve intervention impact for some AA subgroups. PMID:19594262
Synchronization properties of coupled chaotic neurons: The role of random shared input
NASA Astrophysics Data System (ADS)
Kumar, Rupesh; Bilal, Shakir; Ramaswamy, Ram
2016-06-01
Spike-time correlations of neighbouring neurons depend on their intrinsic firing properties as well as on the inputs they share. Studies have shown that periodically firing neurons, when subjected to random shared input, exhibit asynchronicity. Here, we study the effect of random shared input on the synchronization of weakly coupled chaotic neurons. The cases of so-called electrical and chemical coupling are both considered, and we observe a wide range of synchronization behaviour. When subjected to identical shared random input, there is a decrease in the threshold coupling strength needed for chaotic neurons to synchronize in-phase. The system also supports lag-synchronous states, and for these, we find that shared input can cause desynchronization. We carry out a master stability function analysis for a network of such neurons and show agreement with the numerical simulations. The contrasting role of shared random input for complete and lag synchronized neurons is useful in understanding spike-time correlations observed in many areas of the brain.
Effect of Heterogeneous Investments on the Evolution of Cooperation in Spatial Public Goods Game
Huang, Keke; Wang, Tao; Cheng, Yuan; Zheng, Xiaoping
2015-01-01
Understanding the emergence of cooperation in the spatial public goods game remains a grand challenge across disciplines. In most previous studies, it is assumed that the investments of all cooperators are identical, and often equal to 1. However, players in the rapidly developing modern society are diverse and heterogeneous when choosing actions, and researchers have recently shown more interest in the heterogeneity of players. To model heterogeneous players without loss of generality, it is assumed in this work that the investment of a cooperator is a random variable with a uniform distribution whose mean value is equal to 1. The results of extensive numerical simulations convincingly indicate that heterogeneous investments can promote cooperation. Specifically, a large variance of the random variable can effectively decrease the two critical values that govern the outcome of behavioral evolution. Moreover, the larger the variance is, the stronger the promotion effect will be. In addition, this article discusses the impact of heterogeneous investments when the coevolution of both strategy and investment is taken into account. Comparing the promotion effect of the coevolution of strategy and investment with that of strategy imitation only, we conclude that the coevolution of strategy and investment decreases the asymptotic fraction of cooperators by weakening the heterogeneity of investments, which further demonstrates that heterogeneous investments can promote cooperation in the spatial public goods game. PMID:25781345
Bertalan, Tom; Wu, Yan; Laing, Carlo; Gear, C. William; Kevrekidis, Ioannis G.
2017-01-01
Finding accurate reduced descriptions for large, complex, dynamically evolving networks is a crucial enabler to their simulation, analysis, and ultimately design. Here, we propose and illustrate a systematic and powerful approach to obtaining good collective coarse-grained observables—variables successfully summarizing the detailed state of such networks. Finding such variables can naturally lead to successful reduced dynamic models for the networks. The main premise enabling our approach is the assumption that the behavior of a node in the network depends (after a short initial transient) on the node identity: a set of descriptors that quantify the node properties, whether intrinsic (e.g., parameters in the node evolution equations) or structural (imparted to the node by its connectivity in the particular network structure). The approach creates a natural link with modeling and “computational enabling technology” developed in the context of Uncertainty Quantification. In our case, however, we will not focus on ensembles of different realizations of a problem, each with parameters randomly selected from a distribution. We will instead study many coupled heterogeneous units, each characterized by randomly assigned (heterogeneous) parameter value(s). One could then coin the term Heterogeneity Quantification for this approach, which we illustrate through a model dynamic network consisting of coupled oscillators with one intrinsic heterogeneity (oscillator individual frequency) and one structural heterogeneity (oscillator degree in the undirected network). The computational implementation of the approach, its shortcomings and possible extensions are also discussed. PMID:28659781
Low dose rectal inoculation of rhesus macaques by SIV smE660 or SIVmac251 recapitulates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hraber, Peter; Giorgi, Elena E; Keele, Brandon
2008-01-01
We recently developed a novel strategy to identify transmitted HIV-1 genomes in acutely infected humans using single-genome amplification and a model of random virus evolution. Here, we used this approach to determine the molecular features of simian immunodeficiency virus (SIV) transmission in 18 experimentally infected Indian rhesus macaques. Animals were inoculated intrarectally (i.r.) or intravenously (i.v.) with stocks of SIVmac251 or SIVsmE660 that exhibited sequence diversity typical of early-chronic HIV-1 infection. 987 full-length SIV env sequences (median of 48 per animal) were determined from plasma virion RNA 1-5 wk after infection. i.r. inoculation was followed by productive infection by one or a few viruses (median 1; range 1-5) that diversified randomly with near starlike phylogeny and a Poisson distribution of mutations. Consensus viral sequences from ramp-up and peak viremia were identical to viruses found in the inocula or differed from them by only one or a few nucleotides, providing direct evidence that early plasma viral sequences coalesce to transmitted/founder viruses. i.v. infection was >2,000-fold more efficient than i.r. infection, and viruses transmitted by either route represented the full genetic spectra of the inocula. These findings identify key similarities in mucosal transmission and early diversification between SIV and HIV-1, and thus validate the SIV-macaque mucosal infection model for HIV-1 vaccine and microbicide research.
On the minimum of independent geometrically distributed random variables
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David
1994-01-01
The expectations E(X_(1)), E(Z_(1)), and E(Y_(1)) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X_(1))/E(Y_(1)) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in their minima.
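A quick simulation can check the identity stated above; the values of n and p below are illustrative.

```python
# Quick check of the identity stated above (n and p are illustrative): for n
# iid geometric(p) variables, the expected number of ties at the minimum equals
# E[min of geometrics] / E[min of matching exponentials] = n*p / (1 - (1-p)**n).
import numpy as np

rng = np.random.default_rng(3)
n, p, reps = 5, 0.3, 200_000

x = rng.geometric(p, size=(reps, n))            # support {1, 2, ...}, mean 1/p
y = rng.exponential(1.0 / p, size=(reps, n))    # matching mean 1/p

ties = (x == x.min(axis=1, keepdims=True)).sum(axis=1).mean()
ratio = x.min(axis=1).mean() / y.min(axis=1).mean()
print(ties, ratio, n * p / (1 - (1 - p) ** n))
```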
Open quantum random walk in terms of quantum Bernoulli noise
NASA Astrophysics Data System (ADS)
Wang, Caishi; Wang, Ce; Ren, Suling; Tang, Yuling
2018-03-01
In this paper, we introduce an open quantum random walk, which we call the QBN-based open walk, by means of quantum Bernoulli noise, and study its properties from a random walk point of view. We prove that, with the localized ground state as its initial state, the QBN-based open walk has the same limit probability distribution as the classical random walk. We also show that the probability distributions of the QBN-based open walk include those of the unitary quantum walk recently introduced by Wang and Ye (Quantum Inf Process 15:1897-1908, 2016) as a special case.
NASA Astrophysics Data System (ADS)
Sun, Shi-Hai; Liang, Lin-Mei
2012-08-01
Phase randomization is a very important assumption in the BB84 quantum key distribution (QKD) system with a weak coherent source; otherwise, an eavesdropper may spy on the final key. In this Letter, a stable and monitored active phase randomization scheme for one-way and two-way QKD systems is proposed and demonstrated in experiments. Furthermore, our scheme gives Alice an easy way to monitor the degree of randomization in experiments. Therefore, we expect our scheme to become a standard part of future QKD systems owing to its security significance and feasibility.
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential of the neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is important and the conventional one-minute-resolution neutron count is insufficient, the hardware can still serve as an efficient seed generator feeding a faster algorithmic random number generator, or a buffer can be created.
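The detrending recipe can be sketched as follows, with synthetic counts standing in for real neutron monitor data and the smoothing factor chosen by the usual rule of thumb rather than taken from the paper.

```python
# Sketch of the detrending recipe with synthetic counts standing in for real
# neutron monitor data; the smoothing factor follows the usual s ~ m*sigma^2
# rule of thumb rather than anything from the paper.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy import stats

rng = np.random.default_rng(5)
t = np.arange(1440.0)                                   # one day of 1-minute counts
counts = 6000 + 50 * np.sin(2 * np.pi * t / 1440) + rng.normal(0, 25, t.size)

trend = UnivariateSpline(t, counts, s=t.size * 25 ** 2)(t)   # smoothing-spline fit
residual = counts - trend
z = (residual - residual.mean()) / residual.std()       # ~ standard normal variate
u = stats.norm.cdf(z)                                   # probability integral transform
print(stats.kstest(z, "norm").pvalue, stats.kstest(u, "uniform").pvalue)
```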
Reducing financial avalanches by random investments
NASA Astrophysics Data System (ADS)
Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea; Helbing, Dirk
2013-12-01
Building on similarities between earthquakes and extreme financial events, we use a self-organized criticality-generating model to study herding and avalanche dynamics in financial markets. We consider a community of interacting investors, distributed in a small-world network, who bet on the bullish (increasing) or bearish (decreasing) behavior of the market which has been specified according to the S&P 500 historical time series. Remarkably, we find that the size of herding-related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of traders, randomly distributed inside the network, who adopt a random investment strategy. Our findings suggest a promising strategy to limit the size of financial bubbles and crashes. We also obtain that the resulting wealth distribution of all traders corresponds to the well-known Pareto power law, while that of random traders is exponential. In other words, for technical traders, the risk of losses is much greater than the probability of gains compared to those of random traders.
Enhanced backscattering through a deep random phase screen
NASA Astrophysics Data System (ADS)
Jakeman, E.
1988-10-01
The statistical properties of radiation scattered by a system consisting of a plane mirror placed in the Fresnel region behind a smoothly varying deep random-phase screen with off-axis beam illumination are studied. It is found that two mechanisms cause enhanced scattering around the backward direction, depending on the mirror position with respect to the focusing plane of the screen. In all of the plane-mirror geometries considered, the scattered field remains a complex Gaussian process with a spatial coherence function identical to that expected for a single screen, and a speckle size smaller than the width of the backscatter enhancement.
Absorption and scattering of light by nonspherical particles. [in atmosphere
NASA Technical Reports Server (NTRS)
Bohren, C. F.
1986-01-01
Using the example of the polarization of scattered light, it is shown that the scattering matrices for identical, randomly oriented particles and for spherical particles are unequal. The spherical assumptions of Mie theory are therefore inconsistent with the random shapes and sizes of atmospheric particulates. The implications for corrections made to extinction measurements of forward-scattered light are discussed. Several analytical methods are examined as potential bases for developing more accurate models, including Rayleigh theory, Fraunhofer diffraction theory, anomalous diffraction theory, Rayleigh-Gans theory, the separation of variables technique, the Purcell-Pennypacker method, the T-matrix method, and finite difference calculations.
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random-effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random-effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
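As a generic illustration of the sampling overdispersion at issue (not the REML genetic analysis itself), the sketch below draws replicate sample covariance matrices from a population whose true eigenvalues all equal 1 and shows the upward bias of the leading eigenvalue and the downward bias of the smallest.

```python
# Generic illustration of eigenvalue overdispersion by sampling error (not the
# REML genetic analysis): the true covariance is the identity, so every
# population eigenvalue equals 1.
import numpy as np

rng = np.random.default_rng(11)
p, n, reps = 10, 50, 500           # traits, individuals, replicate samples

top, bottom = [], []
for _ in range(reps):
    x = rng.standard_normal((n, p))
    eig = np.linalg.eigvalsh(np.cov(x, rowvar=False))
    top.append(eig.max())          # biased upward
    bottom.append(eig.min())       # biased downward
print(np.mean(top), np.mean(bottom))
```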
Evaluation of the path integral for flow through random porous media
NASA Astrophysics Data System (ADS)
Westbroek, Marise J. E.; Coche, Gil-Arnaud; King, Peter R.; Vvedensky, Dimitri D.
2018-04-01
We present a path integral formulation of Darcy's equation in one dimension with random permeability described by a correlated multivariate lognormal distribution. This path integral is evaluated with the Markov chain Monte Carlo method to obtain pressure distributions, which are shown to agree with the solutions of the corresponding stochastic differential equation for Dirichlet and Neumann boundary conditions. The extension of our approach to flow through random media in two and three dimensions is discussed.
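A plain Monte Carlo stand-in for the pressure statistics is sketched below; it replaces the path-integral/MCMC machinery with direct sampling of correlated lognormal permeability fields and uses the closed-form 1D Darcy solution with Dirichlet boundary conditions. Grid size, correlation length, and variance are assumed values.

```python
# Plain Monte Carlo stand-in (not the paper's path-integral MCMC): correlated
# lognormal permeability on a 1D grid, closed-form Darcy pressure
# p(x) = 1 - I(x)/I(L) with I(x) = integral of dx'/k, p(0) = 1, p(L) = 0.
# Grid size, correlation length and variance are assumed values.
import numpy as np

rng = np.random.default_rng(42)
n, L, corr_len, sigma = 200, 1.0, 0.1, 1.0
x = np.linspace(0.0, L, n)
cov = sigma ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n))

def midpoint_pressure():
    k = np.exp(chol @ rng.standard_normal(n))      # one lognormal permeability field
    resistance = np.cumsum(1.0 / k) * (L / n)      # discretized I(x)
    p = 1.0 - resistance / resistance[-1]
    return p[n // 2]

samples = np.array([midpoint_pressure() for _ in range(5000)])
print(samples.mean(), samples.std())               # pressure statistics at x = L/2
```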
USDA-ARS?s Scientific Manuscript database
Random mating (i.e., panmixis) is a fundamental assumption in quantitative genetics. In outcrossing bee-pollinated perennial forage legume polycrosses, mating is assumed by default to follow theoretical random mating. This assumption informs breeders of expected inbreeding estimates based on polycro...
29 CFR 1926.1413 - Wire rope-inspection.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... Apparent deficiencies in this category are: (A) Visible broken wires, as follows: (1) In running wire ropes: Six randomly distributed broken wires in one rope lay or three broken wires in one strand in one rope... around the rope. (2) In rotation resistant ropes: Two randomly distributed broken wires in six rope...
29 CFR 1926.1413 - Wire rope-inspection.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... Apparent deficiencies in this category are: (A) Visible broken wires, as follows: (1) In running wire ropes: Six randomly distributed broken wires in one rope lay or three broken wires in one strand in one rope... around the rope. (2) In rotation resistant ropes: Two randomly distributed broken wires in six rope...
29 CFR 1926.1413 - Wire rope-inspection.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... Apparent deficiencies in this category are: (A) Visible broken wires, as follows: (1) In running wire ropes: Six randomly distributed broken wires in one rope lay or three broken wires in one strand in one rope... around the rope. (2) In rotation resistant ropes: Two randomly distributed broken wires in six rope...
Amza, Abdou; Yu, Sun N.; Kadri, Boubacar; Nassirou, Baido; Stoller, Nicole E.; Zhou, Zhaoxia; West, Sheila K.; Bailey, Robin L.; Gaynor, Bruce D.; Keenan, Jeremy D.; Porco, Travis C.; Lietman, Thomas M.
2014-01-01
Background Antibiotic use on animals demonstrates improved growth regardless of whether or not there is clinical evidence of infectious disease. Antibiotics used for trachoma control may play an unintended benefit of improving child growth. Methodology In this sub-study of a larger randomized controlled trial, we assess anthropometry of pre-school children in a community-randomized trial of mass oral azithromycin distributions for trachoma in Niger. We measured height, weight, and mid-upper arm circumference (MUAC) in 12 communities randomized to receive annual mass azithromycin treatment of everyone versus 12 communities randomized to receive biannual mass azithromycin treatments for children, 3 years after the initial mass treatment. We collected measurements in 1,034 children aged 6–60 months of age. Principal Findings We found no difference in the prevalence of wasting among children in the 12 annually treated communities that received three mass azithromycin distributions compared to the 12 biannually treated communities that received six mass azithromycin distributions (odds ratio = 0.88, 95% confidence interval = 0.53 to 1.49). Conclusions/Significance We were unable to demonstrate a statistically significant difference in stunting, underweight, and low MUAC of pre-school children in communities randomized to annual mass azithromycin treatment or biannual mass azithromycin treatment. The role of antibiotics on child growth and nutrition remains unclear, but larger studies and longitudinal trials may help determine any association. PMID:25210836
Dependence of triboelectric charging behavior on material microstructure
NASA Astrophysics Data System (ADS)
Wang, Andrew E.; Gil, Phwey S.; Holonga, Moses; Yavuz, Zelal; Baytekin, H. Tarik; Sankaran, R. Mohan; Lacks, Daniel J.
2017-08-01
We demonstrate that differences in the microstructure of chemically identical materials can lead to distinct triboelectric charging behavior. Contact charging experiments are carried out between strained and unstrained polytetrafluoroethylene samples. Whereas charge transfer is random between samples of identical strain, when one of the samples is strained, systematic charge transfer occurs. No significant changes in the molecular-level structure of the polymer are observed by XRD and micro-Raman spectroscopy after deformation. However, the strained surfaces are found to exhibit void and craze formation spanning the nano- to micrometer length scales by molecular dynamics simulations, SEM, UV-vis spectroscopy, and naked-eye observations. This suggests that material microstructure (voids and crazes) can govern the triboelectric charging behavior of materials.
Server-Controlled Identity-Based Authenticated Key Exchange
NASA Astrophysics Data System (ADS)
Guo, Hua; Mu, Yi; Zhang, Xiyong; Li, Zhoujun
We present a threshold identity-based authenticated key exchange protocol that can be applied to an authenticated server-controlled gateway-user key exchange. The objective is to allow a user and a gateway to establish a shared session key with the permission of the back-end servers, while the back-end servers cannot obtain any information about the established session key. Our protocol has potential applications in strong access control of confidential resources. In particular, our protocol possesses semantic security and demonstrates several highly desirable security properties such as key privacy and transparency. We prove the security of the protocol based on the Bilinear Diffie-Hellman assumption in the random oracle model.
The genome sequence of pepper vein yellows virus (family Luteoviridae, genus Polerovirus).
Murakami, Ritsuko; Nakashima, Nobuhiko; Hinomoto, Norihide; Kawano, Shinji; Toyosato, Tetsuya
2011-05-01
The complete genome of pepper vein yellows virus (PeVYV) was sequenced using random amplification of RNA samples isolated from vector insects (Aphis gossypii) that had been given access to PeVYV-infected plants. The PeVYV genome consisted of 6244 nucleotides and had a genomic organization characteristic of members of the genus Polerovirus. PeVYV had highest amino acid sequence identities in ORF0 to ORF3 (75.9 - 91.9%) with tobacco vein distorting polerovirus, with which it was only 25.1% identical in ORF5. These sequence comparisons and previously studied biological properties indicate that PeVYV is a distinctly different virus and belongs to a new species of the genus Polerovirus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weier, Heinz-Ulrich; Arya, Suresh; Grant, Christine
The degree to which an individual organism maintains healthspan and lifespan is a function of complex interactions between genetic inheritance ('nature'), environment, including cultural inheritance (nurture), and stochastic events ('luck' or 'chance'). This task group will focus upon the role of chance because it is so poorly understood and because it appears to be of major importance in the determination of individual variations in healthspan and lifespan within species. The major factor determining variations in healthspan and lifespan between species is genetic inheritance. Broader aspects of cellular and molecular mechanisms of biological aging will also be considered, given their importance for understanding the cellular and molecular basis of successful aging. The task force will consider the cellular and molecular basis for nature, nurture and chance in healthspan and lifespan determination. On the basis of comparisons between identical and non-identical twins, geneticists have estimated that genes control no more than about a quarter of the inter-individual differences in lifespan (Herskind 1996). Twin studies of very old individuals, however, show substantially greater genetic contributions to healthspan (McClearn 2004; Reed 2003). The environment clearly plays an important role in the length and the quality of life. Tobacco smoke, for example, has the potential to impact upon multiple body systems in ways that appear to accelerate the rates at which those systems age (Bernhard 2007). To document the role of chance events on aging, one must rigorously control both the genetic composition of an organism and its environment. This has been done to a remarkable degree in a species of nematodes, Caenorhabditis elegans (Vanfleteren 1998). The results confirm hundreds of previous studies with a wide range of species, especially those with inbred rodents housed under apparently identical but less well controlled environments. One observes wide variations in lifespan in all these studies. For the C. elegans experiments, the distributions of lifespan fit best with two-parameter or three-parameter logistic models and not with the classical Gompertz model nor the Weibull model. Many mutations have been shown to substantially increase lifespan in C. elegans. It is of interest, however, that the ranges of the lifespan variations among such mutant strains overlap with those of wild-type strains (Kirkwood 2002). Many of these long-lived mutant strains exhibit enhanced resistance to a variety of stressors, notably heat shock. It was therefore predicted that variable degrees of response to heat shock stress might form a basis, or a partial basis, for individual variations in longevity. An initial set of experiments demonstrated that this is indeed the case, at least for a transgenic construct that includes the promoter of a small heat shock gene (Rea 2005). There was a very strong correlation between the response to a heat stress and longevity, with good-responding worms living longer. Strikingly, this phenotype was not heritable. The progeny of a worm showing a strong heat stress reaction exhibited the broad distribution of lifespans shown by the starting population. The heat stress reaction was therefore stochastic. The nature of the chance events that determine the reaction remains unknown. They could be related to the intrinsic instability of the transgene, making it important to repeat such experiments utilizing endogenous genes as reporters of the response to heat shock and other stressors.
It could be due to epigenetic drifts in gene expression, perhaps involving random changes in gene promoters or in the state of chemical modifications to histone proteins that coat chromosomes. Such changes have indeed been observed in aging human identical twins (Fraga 2005). While those changes have been interpreted as being driven by the environment, one cannot at present rule out random variations unrelated to environmental influences. Variations in gene expression in genetically identical organisms examined under environmentally identical conditions have also been attributed to intrinsic 'noise' in fundamental molecular processes such as the transcription and translation of genes. Most such observations have been made using microorganisms (Elowitz 2002), but stochastic bursts of transcription have also been noted in mammalian cells (Raj 2006). Moreover, substantial variation in the levels at which genes are transcribed has been shown to occur in mouse tissues, and that variation was shown to increase with age (Bahar 2006). Chance events are also of major significance in the determination of diseases of aging. For the case of cancer, mutations have been shown to be of major importance. A likely key to malignancy, however, is the chance event of suffering a mutation in a gene that, when mutated, greatly enhances the general frequency of mutation.
A Permutation-Randomization Approach to Test the Spatial Distribution of Plant Diseases.
Lione, G; Gonthier, P
2016-01-01
The analysis of the spatial distribution of plant diseases requires the availability of trustworthy geostatistical methods. The mean distance tests (MDT) are here proposed as a series of permutation and randomization tests to assess the spatial distribution of plant diseases when the variable of phytopathological interest is categorical. A user-friendly software to perform the tests is provided. Estimates of power and type I error, obtained with Monte Carlo simulations, showed the reliability of the MDT (power > 0.80; type I error < 0.05). A biological validation on the spatial distribution of spores of two fungal pathogens causing root rot on conifers was successfully performed by verifying the consistency between the MDT responses and previously published data. An application of the MDT was carried out to analyze the relation between the plantation density and the distribution of the infection of Gnomoniopsis castanea, an emerging fungal pathogen causing nut rot on sweet chestnut. Trees carrying nuts infected by the pathogen were randomly distributed in areas with different plantation densities, suggesting that the distribution of G. castanea was not related to the plantation density. The MDT could be used to analyze the spatial distribution of plant diseases both in agricultural and natural ecosystems.
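A minimal permutation test in the spirit of the MDT (not the authors' software) can be written as follows; `coords` and the boolean vector `infected` are assumed inputs, and the statistic here is simply the mean pairwise distance among infected trees, with small values indicating clustering.

```python
# Minimal permutation test in the spirit of the MDT (not the authors' software).
# `coords` is an (n, 2) array of tree positions and `infected` a boolean vector;
# the statistic is the mean pairwise distance among infected trees, and a small
# value relative to the permutation null indicates spatial clustering.
import numpy as np

def mean_distance_test(coords, infected, n_perm=9999, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

    def stat(labels):
        sub = d[np.ix_(labels, labels)]
        return sub[np.triu_indices_from(sub, k=1)].mean()

    observed = stat(infected)
    null = np.array([stat(rng.permutation(infected)) for _ in range(n_perm)])
    p_clustering = (np.sum(null <= observed) + 1) / (n_perm + 1)
    return observed, p_clustering
```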
On the origin of cosmic rays. [gamma rays and supernova remnants
NASA Technical Reports Server (NTRS)
Stecker, F. W.
1975-01-01
Using recent surveys of molecular clouds and gamma rays in the galaxy, it is possible to determine the distribution of 1 to 10 GeV cosmic-ray nucleons in the galaxy. This distribution appears to be identical to the supernova remnant distribution to within experimental error and provides strong support for the hypothesis that supernovae produce most of the observed cosmic rays. This distribution resembles that of OB associations of average age approximately 30 million years, suggesting that cosmic rays are produced by population objects about 30 million years after their birth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
ALAM,TODD M.
Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. By utilizing a discrete bond model, the distribution of next-nearest-neighbor connectivities between phosphate polyhedra for random, alternating, and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported for solid-state two-dimensional exchange and double-quantum NMR experiments on phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedral connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.
Generating and using truly random quantum states in Mathematica
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2012-01-01
The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing.
Program summary:
Program title: TRQS
Catalogue identifier: AEKA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7924
No. of bytes in distributed program, including test data, etc.: 88 651
Distribution format: tar.gz
Programming language: Mathematica, C
Computer: Requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and a system supporting a recent version of Mathematica
Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit)
RAM: Case dependent
Classification: 4.15
Nature of problem: Generation of random density matrices.
Solution method: Use of a physical quantum random number generator.
Running time: Generating 100 random numbers takes about 1 second; generating 1000 random density matrices takes more than a minute.
Unbiased estimators for spatial distribution functions of classical fluids
NASA Astrophysics Data System (ADS)
Adib, Artur B.; Jarzynski, Christopher
2005-01-01
We use a statistical-mechanical identity, closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
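A sketch of this kind of simulation is given below, with illustrative parameters rather than Levine's: an integrate-and-fire unit is driven by noise of equal mean and variance but different shape, and the resulting inter-impulse interval samples are compared directly.

```python
# Sketch with illustrative parameters (not Levine's): an integrate-and-fire
# unit driven by noise of equal mean and variance but different shape; the
# inter-impulse interval samples are then compared with two-sample KS tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mean_drive, sd, threshold, n_steps = 0.1, 0.5, 10.0, 200_000

def inter_impulse_intervals(noise):
    v, last, intervals = 0.0, 0, []
    for t in range(n_steps):
        v += mean_drive + noise[t]      # accumulate input
        if v >= threshold:              # fire and reset
            intervals.append(t - last)
            last, v = t, 0.0
    return np.array(intervals)

noises = {
    "normal": rng.normal(0.0, sd, n_steps),
    "gamma": rng.gamma(1.0, sd, n_steps) - sd,                    # first-order gamma, re-centred
    "uniform": rng.uniform(-np.sqrt(3) * sd, np.sqrt(3) * sd, n_steps),
}
isis = {name: inter_impulse_intervals(x) for name, x in noises.items()}
print(stats.ks_2samp(isis["normal"], isis["gamma"]).pvalue)
print(stats.ks_2samp(isis["normal"], isis["uniform"]).pvalue)
```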
Sugavanam, S; Yan, Z; Kamynin, V; Kurkov, A S; Zhang, L; Churkin, D V
2014-02-10
Multiwavelength lasing in the random distributed feedback fiber laser is demonstrated by employing an all-fiber Lyot filter. Stable multiwavelength generation is obtained, with each line exhibiting a sub-nanometer linewidth. A flat power distribution over multiple lines is obtained, which indicates that the power between lines is redistributed by nonlinear mixing processes. The multiwavelength generation is observed in both the first and second Stokes waves.
Grinang, Jongkar; Ng, Peter K L
2015-04-10
Four new species of semiterrestrial gecarcinucid crabs are described from limestone and sandstone habitats in southwestern Sarawak, Malaysia: Terrathelphusa aglaia n. sp., T. cerina n. sp., T. kundong n. sp., and T. mas n. sp. The taxonomy of T. kuchingensis (Nobili, 1901) is discussed, its precise identity ascertained from fresh material, and its actual distribution determined. This increases the number of Terrathelphusa species in Borneo to eight.
Directed Random Markets: Connectivity Determines Money
NASA Astrophysics Data System (ADS)
Martínez-Martínez, Ismael; López-Ruiz, Ricardo
2013-12-01
The Boltzmann-Gibbs (BG) distribution arises as the statistical equilibrium probability distribution of money among the agents of a closed economic system where random and undirected exchanges are allowed. When considering a model with uniform savings in the exchanges, the final distribution is close to the gamma family. In this paper, we implement these exchange rules on networks and we find that the stationary probability distributions are robust and are not affected by the topology of the underlying network. We introduce a new family of interactions: random but directed ones. In this case, the topology is found to be determinant, and the mean money per economic agent is related to the degree of the node representing the agent in the network. The relation between the mean money per economic agent and its degree is shown to be linear.
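The undirected random-exchange baseline mentioned above can be reproduced with a few lines; the agent count and number of sweeps are illustrative. Pairs pool their money and split it at a random fraction, and the stationary wealth distribution approaches the Boltzmann-Gibbs exponential, whose mean and standard deviation both equal the average money.

```python
# Minimal sketch of the undirected random-exchange model (agent count and
# number of sweeps are illustrative): a random pair pools its money and splits
# it at a random fraction; the stationary wealth distribution approaches the
# Boltzmann-Gibbs exponential with mean equal to the average money.
import numpy as np

rng = np.random.default_rng(9)
n_agents, sweeps, mean_money = 1000, 500, 1.0
money = np.full(n_agents, mean_money)

for _ in range(sweeps * n_agents):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    pot = money[i] + money[j]
    share = rng.random()
    money[i], money[j] = share * pot, (1.0 - share) * pot

# For an exponential with mean 1 the standard deviation is also 1.
print(money.mean(), money.std())
```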
Effects of ignition location models on the burn patterns of simulated wildfires
Bar-Massada, A.; Syphard, A.D.; Hawbaker, T.J.; Stewart, S.I.; Radeloff, V.C.
2011-01-01
Fire simulation studies that use models such as FARSITE often assume that ignition locations are distributed randomly, because spatially explicit information about actual ignition locations is difficult to obtain. However, many studies show that the spatial distribution of ignition locations, whether human-caused or natural, is non-random. Thus, predictions from fire simulations based on random ignitions may be unrealistic. However, the extent to which the assumption of ignition location affects the predictions of fire simulation models has never been systematically explored. Our goal was to assess the difference in fire simulations that are based on random versus non-random ignition location patterns. We conducted four sets of 6000 FARSITE simulations for the Santa Monica Mountains in California to quantify the influence of random and non-random ignition locations and normal and extreme weather conditions on fire size distributions and spatial patterns of burn probability. Under extreme weather conditions, fires were significantly larger for non-random ignitions compared to random ignitions (mean area of 344.5 ha and 230.1 ha, respectively), but burn probability maps were highly correlated (r = 0.83). Under normal weather, random ignitions produced significantly larger fires than non-random ignitions (17.5 ha and 13.3 ha, respectively), and the spatial correlations between burn probability maps were not high (r = 0.54), though the difference in the average burn probability was small. The results of the study suggest that the location of ignitions used in fire simulation models may substantially influence the spatial predictions of fire spread patterns. However, the spatial bias introduced by using a random ignition location model may be minimized if the fire simulations are conducted under extreme weather conditions when fire spread is greatest. © 2010 Elsevier Ltd.
Models for the hotspot distribution
NASA Technical Reports Server (NTRS)
Jurdy, Donna M.; Stefanick, Michael
1990-01-01
Published hotspot catalogs all show a hemispheric concentration beyond what can be expected by chance. Cumulative distributions about the center of concentration are described by a power law with a fractal dimension closer to 1 than 2. Random sets of the corresponding sizes do not show this effect. A simple shift of the random sets away from a point would produce distributions similar to those of hotspot sets. The possible relation of the hotspots to the locations of ridges and subduction zones is tested using large sets of randomly-generated points to estimate areas within given distances of the plate boundaries. The probability of finding the observed number of hotspots within 10 deg of the ridges is about what is expected.
ERIC Educational Resources Information Center
Tabatadze, Shalva; Gorgadze, Natia
2018-01-01
Purpose: The purpose of this paper is to assess the intercultural sensitivity of students in teacher educational programs at higher education institutes (HEIs) in Georgia. Design/methodology/approach: This research explored the intercultural sensitivity among 355 randomly selected students in teacher education programs at higher education…
ERIC Educational Resources Information Center
Kane, Michael
2004-01-01
BSW and MSW students randomly completed one of two vignettes that were identical with the exception of the age of the vignette's subject. Following the vignette, respondents responded to 16 bio-psycho-social assessment and intervention items relating to health, illness, aging, and death. The multivariate analysis of variance was significant…
ERIC Educational Resources Information Center
Karaaslan, Ozcan; Mahoney, Gerald
2015-01-01
Mediational analyses were conducted with data from two small randomized control trials of the Responsive Teaching (RT) parent-mediated developmental intervention which used nearly identical intervention and control procedures. The purpose of these analyses was to determine whether or how the changes in maternal responsiveness and children's…
ERIC Educational Resources Information Center
Rosch, David M.; Collier, Daniel; Thompson, Sara E.
2015-01-01
This exploratory study examined the motivation to lead of a random sample of 1,338 undergraduate students to determine the degree to which motivation to lead can predict leadership behaviors. Results suggested that students' internal self-identity as a leader positively predicted behavior, while their "social normative" motivation to…
40 CFR 799.9538 - TSCA mammalian bone marrow chromosomal aberration test.
Code of Federal Regulations, 2013 CFR
2013-07-01
... be randomly assigned to the control and treatment groups. Cages should be arranged in such a way that... in the control groups should be handled in an identical manner to the animals in the treated groups... of animals. Each treated and control group shall include at least 5 analyzable animals per sex. If at...
40 CFR 799.9538 - TSCA mammalian bone marrow chromosomal aberration test.
Code of Federal Regulations, 2012 CFR
2012-07-01
... be randomly assigned to the control and treatment groups. Cages should be arranged in such a way that... in the control groups should be handled in an identical manner to the animals in the treated groups... of animals. Each treated and control group shall include at least 5 analyzable animals per sex. If at...