DOE Office of Scientific and Technical Information (OSTI.GOV)
Novaes, Marcel
2015-06-15
We consider the statistics of time delay in a chaotic cavity having M open channels, in the absence of time-reversal invariance. In the random matrix theory approach, we compute the average value of polynomial functions of the time delay matrix Q = −iħ S† dS/dE, where S is the scattering matrix. Our results do not assume M to be large. In a companion paper, we develop a semiclassical approximation to S-matrix correlation functions, from which the statistics of Q can also be derived. Together, these papers contribute to establishing the conjectured equivalence between the random matrix and the semiclassical approaches.
Random matrices and the New York City subway system
NASA Astrophysics Data System (ADS)
Jagannath, Aukosh; Trogdon, Thomas
2017-09-01
We analyze subway arrival times in the New York City subway system. We find regimes where the gaps between trains are well modeled by (unitarily invariant) random matrix statistics and Poisson statistics. The departure from random matrix statistics is captured by the value of the Coulomb potential along the subway route. This departure becomes more pronounced as trains make more stops.
Spectral statistics of random geometric graphs
NASA Astrophysics Data System (ADS)
Dettmann, C. P.; Georgiou, O.; Knight, G.
2017-04-01
We use random matrix theory to study the spectrum of random geometric graphs, a fundamental model of spatial networks. Considering ensembles of random geometric graphs, we look at short-range correlations in the level spacings of the spectrum via the nearest-neighbour and next-nearest-neighbour spacing distributions, and long-range correlations via the spectral rigidity Δ3 statistic. These correlations in the level spacings give information about localisation of eigenvectors, the level of community structure and the level of randomness within the networks. We find a parameter-dependent transition between Poisson and Gaussian orthogonal ensemble statistics. That is, the spectral statistics of spatial random geometric graphs fit the universality of random matrix theory found in other models such as Erdős-Rényi, Barabási-Albert and Watts-Strogatz random graphs.
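The Poisson-to-GOE transition described above is commonly quantified by the adjacent-gap ratio, which needs no spectral unfolding. A minimal numpy sketch (illustrative only, not from the paper; the matrix size and the uniform-points stand-in for a Poisson spectrum are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400

# GOE matrix: real symmetric with Gaussian entries
A = rng.normal(size=(N, N))
goe_levels = np.linalg.eigvalsh((A + A.T) / 2)  # returned in ascending order

# Poisson stand-in: sorted i.i.d. uniform points
poisson_levels = np.sort(rng.uniform(0.0, 1.0, size=N))

def mean_gap_ratio(levels):
    """<r> = <min(s_i, s_{i+1}) / max(s_i, s_{i+1})>; unfolding-free."""
    s = np.diff(levels)
    return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

r_goe = mean_gap_ratio(goe_levels)
r_poisson = mean_gap_ratio(poisson_levels)
```

The limiting values ⟨r⟩ ≈ 0.386 (Poisson) and ⟨r⟩ ≈ 0.531 (GOE) are standard benchmarks for locating such transitions.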
A generalization of random matrix theory and its application to statistical physics.
Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H
2017-02-01
To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first determine, analytically and numerically, how auto-correlations affect the eigenvalue distribution of the correlation matrix. We then introduce ARRMT with a detailed procedure for how to implement the method. Finally, we illustrate the method using two examples: inflation rates and air pressure data for 95 US cities.
Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors
NASA Astrophysics Data System (ADS)
Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay
2017-11-01
Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d = b²/N = α²/N, for large matrix dimensionality N. As d increases, there is a transition from Poisson to classical random matrix statistics.
NASA Astrophysics Data System (ADS)
Siegel, Z.; Siegel, Edward Carl-Ludwig
2011-03-01
RANDOMNESS of Numbers cognitive-semantics DEFINITION VIA Cognition QUERY: WHAT???, NOT HOW?) VS. computer-``science" mindLESS number-crunching (Harrel-Sipser-...) algorithmics Goldreich "PSEUDO-randomness"[Not.AMS(02)] mea-culpa is ONLY via MAXWELL-BOLTZMANN CLASSICAL-STATISTICS(NOT FDQS!!!) "hot-plasma" REPULSION VERSUS Newcomb(1881)-Weyl(1914;1916)-Benford(1938) "NeWBe" logarithmic-law digit-CLUMPING/ CLUSTERING NON-Randomness simple Siegel[AMS Joint.Mtg.(02)-Abs. # 973-60-124] algebraic-inversion to THE QUANTUM and ONLY BEQS preferentially SEQUENTIALLY lower-DIGITS CLUMPING/CLUSTERING with d = 0 BEC, is ONLY VIA Siegel-Baez FUZZYICS=CATEGORYICS (SON OF TRIZ)/"Category-Semantics"(C-S), latter intersection/union of Lawvere(1964)-Siegel(1964)] category-theory (matrix: MORPHISMS V FUNCTORS) "+" cognitive-semantics'' (matrix: ANTONYMS V SYNONYMS) yields Siegel-Baez FUZZYICS=CATEGORYICS/C-S tabular list-format matrix truth-table analytics: MBCS RANDOMNESS TRUTH/EMET!!!
Data-driven probability concentration and sampling on manifold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu
2016-09-15
A new methodology is proposed for generating realizations of a random vector with values in a finite-dimensional Euclidean space that are statistically consistent with a dataset of observations of this vector. The probability distribution of this random vector, while a priori not known, is presumed to be concentrated on an unknown subset of the Euclidean space. A random matrix is introduced whose columns are independent copies of the random vector and for which the number of columns is the number of data points in the dataset. The approach is based on the use of (i) the multidimensional kernel-density estimation method for estimating the probability distribution of the random matrix, (ii) an MCMC method for generating realizations for the random matrix, (iii) the diffusion-maps approach for discovering and characterizing the geometry and the structure of the dataset, and (iv) a reduced-order representation of the random matrix, which is constructed using the diffusion-maps vectors associated with the first eigenvalues of the transition matrix relative to the given dataset. The convergence aspects of the proposed methodology are analyzed and a numerical validation is explored through three applications of increasing complexity. The proposed method is found to be robust to noise levels and data complexity as well as to the intrinsic dimension of data and the size of experimental datasets. Both the methodology and the underlying mathematical framework presented in this paper contribute new capabilities and perspectives at the interface of uncertainty quantification, statistical data analysis, stochastic modeling and associated statistical inverse problems.
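Step (i) of the methodology, kernel-density estimation of the column distribution followed by sampling of new realizations, can be sketched with scipy's `gaussian_kde` (a generic stand-in, not the authors' implementation; the noisy-circle toy dataset and all sizes are assumptions):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)

# Toy dataset concentrated near a 1-D manifold (a noisy circle) in R^2
theta = rng.uniform(0.0, 2.0 * np.pi, size=500)
data = np.column_stack([np.cos(theta), np.sin(theta)])
data += 0.05 * rng.normal(size=data.shape)

# KDE of the column distribution (gaussian_kde expects shape (dim, n)),
# then new statistically consistent realizations
kde = gaussian_kde(data.T)
new_samples = kde.resample(1000, seed=6).T

# The new realizations stay concentrated near the unit circle
radii = np.linalg.norm(new_samples, axis=1)
```

Plain KDE resampling blurs the manifold by the kernel bandwidth; the paper's diffusion-maps reduction (steps iii-iv) is precisely what keeps samples concentrated on the discovered subset.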
Intermediate quantum maps for quantum computation
NASA Astrophysics Data System (ADS)
Giraud, O.; Georgeot, B.
2005-10-01
We study quantum maps displaying spectral statistics intermediate between Poisson and Wigner-Dyson. It is shown that they can be simulated on a quantum computer with a small number of gates, and efficiently yield information about fidelity decay or spectral statistics. We study their matrix elements and entanglement production and show that they converge with time to distributions which differ from random matrix predictions. A randomized version of these maps can be implemented even more economically and yields pseudorandom operators with original properties, enabling, for example, one to produce fractal random vectors. These algorithms are within reach of present-day quantum computers.
Statistical analysis of effective singular values in matrix rank determination
NASA Technical Reports Server (NTRS)
Konstantinides, Konstantinos; Yao, Kung
1988-01-01
A major problem in using SVD (singular-value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theory of perturbations of singular values and on statistical significance tests. Threshold bounds for perturbations due to finite precision and to i.i.d. random models are evaluated. In the random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds, and comparisons with other previously known approaches, are given.
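The basic idea, thresholding singular values against what pure noise would produce, can be sketched in numpy. This uses a generic bound on the largest singular value of a Gaussian noise matrix with an ad hoc safety factor, not the paper's confidence regions; all sizes and the noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, true_rank = 60, 40, 5
sigma = 0.01  # noise standard deviation, assumed known

# Low-rank signal observed with i.i.d. Gaussian noise
X = rng.normal(size=(m, true_rank)) @ rng.normal(size=(true_rank, n))
X_noisy = X + sigma * rng.normal(size=(m, n))

s = np.linalg.svd(X_noisy, compute_uv=False)

# Generic threshold: the largest singular value of a pure-noise matrix
# concentrates near sigma * (sqrt(m) + sqrt(n)); add a safety factor
tau = 1.5 * sigma * (np.sqrt(m) + np.sqrt(n))
effective_rank = int(np.sum(s > tau))
```

Singular values above the threshold are attributed to the signal; the paper's contribution is to replace the crude safety factor with a statistically principled level of significance.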
Symmetry Transition Preserving Chirality in QCD: A Versatile Random Matrix Model
NASA Astrophysics Data System (ADS)
Kanazawa, Takuya; Kieburg, Mario
2018-06-01
We consider a random matrix model which interpolates between the chiral Gaussian unitary ensemble and the Gaussian unitary ensemble while preserving chiral symmetry. This ensemble describes flavor symmetry breaking for staggered fermions in 3D QCD as well as in 4D QCD at high temperature or in 3D QCD at a finite isospin chemical potential. Our model is an Osborn-type two-matrix model which is equivalent to the elliptic ensemble but we consider the singular value statistics rather than the complex eigenvalue statistics. We report on exact results for the partition function and the microscopic level density of the Dirac operator in the ɛ regime of QCD. We compare these analytical results with Monte Carlo simulations of the matrix model.
Bayesian statistics and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Koch, K. R.
2018-03-01
The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived in which the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable number of derivatives from having to be computed, and errors of linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
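The error-propagation application described above amounts to pushing samples through the nonlinear transformation and taking sample moments. A minimal numpy sketch (an illustration under assumed inputs, not the book's example):

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples = 100_000

# Measurements: Gaussian random vector with known mean and covariance
mu = np.array([1.0, 2.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
x = rng.multivariate_normal(mu, cov, size=n_samples)

# Nonlinear function of the measurements
y = np.column_stack([x[:, 0] * x[:, 1], x[:, 0] ** 2])

y_mean = y.mean(axis=0)          # Monte Carlo expectation
y_cov = np.cov(y, rowvar=False)  # Monte Carlo covariance matrix
# Exact means for comparison: E[x0*x1] = 1*2 + 0.01 = 2.01,
# E[x0^2] = 1^2 + 0.04 = 1.04
```

No Jacobians are computed, and the bias terms (the covariance contributions to the means) that first-order linearization would miss are captured automatically.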
Semistochastic approach to many electron systems
NASA Astrophysics Data System (ADS)
Grossjean, M. K.; Grossjean, M. F.; Schulten, K.; Tavan, P.
1992-08-01
A Pariser-Parr-Pople (PPP) Hamiltonian of an 8π electron system of the molecule octatetraene, represented in a configuration-interaction basis (CI basis), is analyzed with respect to the statistical properties of its matrix elements. Based on this analysis we develop an effective Hamiltonian, which represents virtual excitations by a Gaussian orthogonal ensemble (GOE). We also examine numerical approaches which replace the original Hamiltonian by a semistochastically generated CI matrix. In that CI matrix, the matrix elements of high-energy excitations are chosen randomly according to distributions reflecting the statistics of the original CI matrix.
Anderson Localization in Quark-Gluon Plasma
NASA Astrophysics Data System (ADS)
Kovács, Tamás G.; Pittler, Ferenc
2010-11-01
At low temperature the low end of the QCD Dirac spectrum is well described by chiral random matrix theory. In contrast, at high temperature there is no similar statistical description of the spectrum. We show that at high temperature the lowest part of the spectrum consists of a band of statistically uncorrelated eigenvalues obeying essentially Poisson statistics and the corresponding eigenvectors are extremely localized. Going up in the spectrum the spectral density rapidly increases and the eigenvectors become more and more delocalized. At the same time the spectral statistics gradually crosses over to the bulk statistics expected from the corresponding random matrix ensemble. This phenomenon is reminiscent of Anderson localization in disordered conductors. Our findings are based on staggered Dirac spectra in quenched lattice simulations with the SU(2) gauge group.
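The localization diagnostic underlying this picture can be illustrated with the standard 1-D Anderson tight-binding model (a textbook stand-in, not the staggered lattice Dirac operator of the paper; the chain length and disorder strength are assumptions). The inverse participation ratio separates extended from localized eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(8)
N, W = 400, 3.0  # chain length, disorder strength

# 1-D Anderson tight-binding model: hopping plus random on-site energies
hop = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
H = hop + np.diag(rng.uniform(-W / 2, W / 2, size=N))
_, vecs = np.linalg.eigh(H)

# Inverse participation ratio per eigenvector: ~1/N for extended
# states, O(1) for localized ones
ipr_disordered = np.sum(vecs ** 4, axis=0).mean()

# Clean chain for comparison: all eigenstates extended
_, vecs0 = np.linalg.eigh(hop)
ipr_clean = np.sum(vecs0 ** 4, axis=0).mean()
```

Localized eigenvectors concentrate on a few sites, so their IPR stays finite as N grows, exactly the signature the authors report for the low Dirac eigenmodes at high temperature.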
Quantifying economic fluctuations by adapting methods of statistical physics
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki
2001-09-01
The first focus of this thesis is the investigation of cross-correlations between the price fluctuations of different stocks using the conceptual framework of random matrix theory (RMT), developed in physics to describe the statistical properties of energy-level spectra of complex nuclei. RMT makes predictions for the statistical properties of matrices that are universal, i.e., do not depend on the interactions between the elements comprising the system. In physical systems, deviations from the predictions of RMT provide clues regarding the mechanisms controlling the dynamics of a given system so this framework is of potential value if applied to economic systems. This thesis compares the statistics of cross-correlation matrix
Tensor Minkowski Functionals for random fields on the sphere
NASA Astrophysics Data System (ADS)
Chingangbam, Pravabati; Yogendran, K. P.; Joby, P. K.; Ganesan, Vidhya; Appleby, Stephen; Park, Changbom
2017-12-01
We generalize the translation invariant tensor-valued Minkowski Functionals which are defined on two-dimensional flat space to the unit sphere. We apply them to level sets of random fields. The contours enclosing boundaries of level sets of random fields give a spatial distribution of random smooth closed curves. We outline a method to compute the tensor-valued Minkowski Functionals numerically for any random field on the sphere. Then we obtain analytic expressions for the ensemble expectation values of the matrix elements for isotropic Gaussian and Rayleigh fields. The results hold on flat as well as any curved space with affine connection. We elucidate the way in which the matrix elements encode information about the Gaussian nature and statistical isotropy (or departure from isotropy) of the field. Finally, we apply the method to maps of the Galactic foreground emissions from the 2015 PLANCK data and demonstrate their high level of statistical anisotropy and departure from Gaussianity.
NASA Astrophysics Data System (ADS)
Méndez-Bermúdez, J. A.; Gopar, Victor A.; Varga, Imre
2010-09-01
We study numerically scattering and transport statistical properties of the one-dimensional Anderson model at the metal-insulator transition described by the power-law banded random matrix (PBRM) model at criticality. Within a scattering approach to electronic transport, we concentrate on the case of a small number of single-channel attached leads. We observe a smooth crossover from localized to delocalized behavior in the average-scattering matrix elements, the conductance probability distribution, the variance of the conductance, and the shot noise power by varying b (the effective bandwidth of the PBRM model) from small (b ≪ 1) to large (b > 1) values. We contrast our results with analytic random matrix theory predictions which are expected to be recovered in the limit b → ∞. We also compare our results for the PBRM model with those for the three-dimensional (3D) Anderson model at criticality, finding that the PBRM model with b ∈ [0.2, 0.4] reproduces well the scattering and transport properties of the 3D Anderson model.
Universality in chaos: Lyapunov spectrum and random matrix theory.
Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki
2018-02-01
We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.
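The final sentence's object, the Lyapunov spectrum of a product of random matrices, is routinely computed with repeated QR re-orthonormalization. A minimal numpy sketch (generic algorithm, not the authors' matrix model; the size and number of factors are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 4, 2000  # matrix size, number of factors in the product

# Lyapunov spectrum of a product of i.i.d. Gaussian matrices,
# computed stably via repeated QR re-orthonormalization
Q = np.eye(N)
log_r = np.zeros(N)
for _ in range(T):
    M = rng.normal(size=(N, N))
    Q, R = np.linalg.qr(M @ Q)
    log_r += np.log(np.abs(np.diag(R)))

lyapunov = np.sort(log_r / T)[::-1]  # exponents, largest first
```

Accumulating the logs of the R diagonals avoids the overflow that naive multiplication of T matrices would cause, while the orthonormal Q tracks the expanding directions.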
Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices
NASA Astrophysics Data System (ADS)
Passemier, Damien; McKay, Matthew R.; Chen, Yang
2015-07-01
Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.
Free Fermions and the Classical Compact Groups
NASA Astrophysics Data System (ADS)
Cunden, Fabio Deelan; Mezzadri, Francesco; O'Connell, Neil
2018-06-01
There is a close connection between the ground state of non-interacting fermions in a box with classical (absorbing, reflecting, and periodic) boundary conditions and the eigenvalue statistics of the classical compact groups. The associated determinantal point processes can be extended in two natural directions: (i) we consider the full family of admissible quantum boundary conditions (i.e., self-adjoint extensions) for the Laplacian on a bounded interval, and the corresponding projection correlation kernels; (ii) we construct the grand canonical extensions at finite temperature of the projection kernels, interpolating from Poisson to random matrix eigenvalue statistics. The scaling limits in the bulk and at the edges are studied in a unified framework, and the question of universality is addressed. Whether the finite temperature determinantal processes correspond to the eigenvalue statistics of some matrix models is, a priori, not obvious. We complete the picture by constructing a finite temperature extension of the Haar measure on the classical compact groups. The eigenvalue statistics of the resulting grand canonical matrix models (of random size) corresponds exactly to the grand canonical measure of free fermions with classical boundary conditions.
A random matrix approach to credit risk.
Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas
2014-01-01
We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
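The diversification-limiting effect of correlations can be demonstrated with a one-factor Gaussian toy model (an illustration of the general mechanism, not the paper's structural credit risk model; all parameters are assumptions). The variance of an equally weighted portfolio tends to ρ rather than to zero as the number of obligors grows:

```python
import numpy as np

rng = np.random.default_rng(7)
K, n, rho = 200, 10_000, 0.3  # obligors, scenarios, pairwise correlation

# One-factor Gaussian model: every obligor's risk driver shares a
# common factor z, giving pairwise correlation rho
z = rng.normal(size=(n, 1))
eps = rng.normal(size=(n, K))
x = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps

portfolio = x.mean(axis=1)  # equally weighted portfolio
var_port = portfolio.var()  # ~ rho + (1 - rho)/K, not ~ 1/K
```

Only the idiosyncratic part of the risk is diversified away; the common-factor part, of size ρ, persists for any portfolio size, which is the mechanism behind the fattened loss tails the abstract describes.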
Diestelkamp, Wiebke S; Krane, Carissa M; Pinnell, Margaret F
2011-05-20
Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance.
Wigner surmises and the two-dimensional homogeneous Poisson point process.
Sakhr, Jamal; Nieminen, John M
2006-04-01
We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2 × 2 complex non-Hermitian random matrices.
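The Wigner surmise itself comes from the 2 × 2 GOE, whose eigenvalue spacing can be sampled in closed form. A short numpy check that the sampled spacing distribution matches the surmise CDF F(s) = 1 − exp(−πs²/4) (a standard textbook computation, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# 2x2 GOE matrices [[a, c], [c, b]]: diagonal ~ N(0,1), off-diagonal ~ N(0,1/2)
a = rng.normal(size=n)
b = rng.normal(size=n)
c = rng.normal(scale=np.sqrt(0.5), size=n)

# Eigenvalue spacing, normalized to unit mean
s = np.sqrt((a - b) ** 2 + 4 * c ** 2)
s /= s.mean()

# Wigner surmise CDF: F(s) = 1 - exp(-pi s^2 / 4)
empirical = np.mean(s < 1.0)
surmise = 1.0 - np.exp(-np.pi / 4.0)
```

The spacing 2√(((a−b)/2)² + c²) is Rayleigh-distributed, which after rescaling to unit mean is exactly the Wigner surmise density (πs/2)exp(−πs²/4).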
Semiclassical matrix model for quantum chaotic transport with time-reversal symmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novaes, Marcel, E-mail: marcel.novaes@gmail.com
2015-10-15
We show that the semiclassical approach to chaotic quantum transport in the presence of time-reversal symmetry can be described by a matrix model. In other words, we construct a matrix integral whose perturbative expansion satisfies the semiclassical diagrammatic rules for the calculation of transport statistics. One of the virtues of this approach is that it leads very naturally to the semiclassical derivation of universal predictions from random matrix theory.
NASA Astrophysics Data System (ADS)
Antoniuk, Oleg; Sprik, Rudolf
2010-03-01
We developed a random matrix model to describe the statistics of resonances in an acoustic cavity with broken time-reversal invariance. Time-reversal invariance breaking is achieved by connecting an amplified feedback loop between two transducers on the surface of the cavity. The model is based on the approach of Ref. [1], which describes the time-reversal properties of the cavity without a feedback loop. Statistics of the eigenvalues (nearest-neighbor resonance spacing distributions and spectral rigidity) have been calculated and compared to the statistics obtained from our experimental data. Experiments have been performed on an aluminum block of chaotic shape confining ultrasound waves. [1] Carsten Draeger and Mathias Fink, One-channel time-reversal in chaotic cavities: Theoretical limits, Journal of the Acoustical Society of America, vol. 105, no. 2, pp. 611-617 (1999)
The supersymmetric method in random matrix theory and applications to QCD
NASA Astrophysics Data System (ADS)
Verbaarschot, Jacobus
2004-12-01
The supersymmetric method is a powerful method for the nonperturbative evaluation of quenched averages in disordered systems. Among others, this method has been applied to the statistical theory of S-matrix fluctuations, the theory of universal conductance fluctuations and the microscopic spectral density of the QCD Dirac operator. We start this series of lectures with a general review of Random Matrix Theory and the statistical theory of spectra. An elementary introduction of the supersymmetric method in Random Matrix Theory is given in the second and third lecture. We will show that a Random Matrix Theory can be rewritten as an integral over a supermanifold. This integral will be worked out in detail for the Gaussian Unitary Ensemble that describes level correlations in systems with broken time-reversal invariance. We especially emphasize the role of symmetries. As a second example of the application of the supersymmetric method we discuss the calculation of the microscopic spectral density of the QCD Dirac operator. This is the eigenvalue density near zero on the scale of the average level spacing which is known to be given by chiral Random Matrix Theory. Also in this case we use symmetry considerations to rewrite the generating function for the resolvent as an integral over a supermanifold. The main topic of the second last lecture is the recent developments on the relation between the supersymmetric partition function and integrable hierarchies (in our case the Toda lattice hierarchy). We will show that this relation is an efficient way to calculate superintegrals. Several examples that were given in previous lectures will be worked out by means of this new method. Finally, we will discuss the quenched QCD Dirac spectrum at nonzero chemical potential. Because of the nonhermiticity of the Dirac operator the usual supersymmetric method has not been successful in this case. 
However, we will show that the supersymmetric partition function can be evaluated by means of the replica limit of the Toda lattice equation.
Spectral statistics of the acoustic stadium
NASA Astrophysics Data System (ADS)
Méndez-Sánchez, R. A.; Báez, G.; Leyvraz, F.; Seligman, T. H.
2014-01-01
We calculate the normal-mode frequencies and wave amplitudes of the two-dimensional acoustical stadium. We also obtain the statistical properties of the acoustical spectrum and show that they agree with the results given by random matrix theory. Some normal-mode wave amplitudes showing scarring are presented.
NASA Astrophysics Data System (ADS)
Gros, J.-B.; Kuhl, U.; Legrand, O.; Mortessagne, F.
2016-03-01
The effective Hamiltonian formalism is extended to vectorial electromagnetic waves in order to describe statistical properties of the field in reverberation chambers. The latter are commonly used in electromagnetic compatibility tests. As a first step, the distribution of wave intensities in chaotic systems with varying opening in the weak coupling limit for scalar quantum waves is derived by means of random matrix theory. In this limit the only parameters are the modal overlap and the number of open channels. Using the extended effective Hamiltonian, we describe the intensity statistics of the vectorial electromagnetic eigenmodes of lossy reverberation chambers. Finally, the typical quantity of interest in such chambers, namely, the distribution of the electromagnetic response, is discussed. By determining the distribution of the phase rigidity, describing the coupling to the environment, using random matrix numerical data, we find good agreement between the theoretical prediction and numerical calculations of the response.
NASA Astrophysics Data System (ADS)
Ma, Ning; Zhao, Juan; Hanson, Steen G.; Takeda, Mitsuo; Wang, Wei
2016-10-01
The basic properties and applications of laser speckle have been studied extensively. In the majority of research on speckle phenomena, the random optical field has been treated as a scalar optical field, and the main interest has been concentrated on the statistical properties and applications of its intensity distribution. Recently, statistical properties of random electric vector fields, referred to as polarization speckle, have come to attract new interest because of their importance in a variety of areas with practical applications, such as biomedical optics and optical metrology. Statistical phenomena of random electric vector fields have close relevance to the theories of speckle, polarization and coherence. In this paper, we investigate the correlation tensor for stochastic electromagnetic fields modulated by a depolarizer consisting of a rough-surfaced retardation plate. Under the assumption that the microstructure of the scattering surface on the depolarizer is so fine as to be unresolvable in our observation region, we have derived a relationship between the polarization matrix/coherency matrix for the modulated electric fields behind the rough-surfaced retardation plate and the coherence matrix under the free-space geometry. This relation is regarded as entirely analogous to the van Cittert-Zernike theorem of classical coherence theory. Within the paraxial approximation as represented by the ABCD-matrix formalism, the three-dimensional structure of the generated polarization speckle is investigated based on the correlation tensor, indicating a typical carrot structure with a much longer axial dimension than the extent in its transverse dimension.
2011-01-01
Background Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. Methods The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. Results The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. Conclusions The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. 
This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance. PMID:21599963
Constructing acoustic timefronts using random matrix theory.
Hegewisch, Katherine C; Tomsovic, Steven
2013-10-01
In a recent letter [Hegewisch and Tomsovic, Europhys. Lett. 97, 34002 (2012)], random matrix theory is introduced for long-range acoustic propagation in the ocean. The theory is expressed in terms of unitary propagation matrices that represent the scattering between acoustic modes due to sound speed fluctuations induced by the ocean's internal waves. The scattering exhibits a power-law decay as a function of the differences in mode numbers thereby generating a power-law, banded, random unitary matrix ensemble. This work gives a more complete account of that approach and extends the methods to the construction of an ensemble of acoustic timefronts. The result is a very efficient method for studying the statistical properties of timefronts at various propagation ranges that agrees well with propagation based on the parabolic equation. It helps identify which information about the ocean environment can be deduced from the timefronts and how to connect features of the data to that environmental information. It also makes direct connections to methods used in other disordered waveguide contexts where the use of random matrix theory has a multi-decade history.
Kaye, T.N.; Pyke, David A.
2003-01-01
Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5-10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust.
Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
Kang, James; An, Howard; Hilibrand, Alan; Yoon, S Tim; Kavanagh, Eoin; Boden, Scott
2012-05-20
Prospective multicenter randomized clinical trial. The goal of our 2-year prospective study was to perform a randomized clinical trial comparing the outcomes of Grafton demineralized bone matrix (DBM) with local bone against those of iliac crest bone graft (ICBG) in single-level instrumented posterior lumbar fusion. There has been extensive research and development in identifying a suitable substitute for autologous ICBG, which is associated with known morbidities. DBMs are a class of commercially available grafting agents prepared from allograft bone. Many such products are commercially available for clinical use; however, their efficacy for spine fusion has mostly been based on anecdotal evidence rather than randomized controlled clinical trials. Forty-six patients were randomly assigned (2:1) to receive Grafton DBM with local bone (30 patients) or autologous ICBG (16 patients). The mean age was 64 (females [F] = 21, males [M] = 9) in the DBM group and 65 (F = 9, M = 5) in the ICBG group. An independent radiologist evaluated plain radiographs and computed tomographic scans at 6-month, 1-year, and 2-year time points. Clinical outcomes were measured using the Oswestry Disability Index (ODI) and the Medical Outcomes Study 36-Item Short Form Health Survey. Forty-one patients (DBM = 28 and ICBG = 13) completed the 2-year follow-up. Final fusion rates were 86% (Grafton) versus 92% (ICBG) (P = 1.0, not significant). The Grafton group showed slightly better improvement in ODI score than the ICBG group at the final 2-year follow-up (Grafton [16.2] and ICBG [22.7]); however, the difference was not statistically significant (P = 0.2346 at 24 mo). Grafton showed consistently higher physical function scores at 24 months; however, the differences were not statistically significant (P = 0.0823). Similar improvements in the physical component summary scores were seen in both the Grafton and ICBG groups.
There was a statistically significant greater mean intraoperative blood loss in the ICBG group than in the Grafton group (P < 0.0031). At 2-year follow-up, subjects who were randomized to Grafton Matrix and local bone achieved an 86% overall fusion rate and improvements in clinical outcomes that were comparable with those in the ICBG group.
An information hidden model holding cover distributions
NASA Astrophysics Data System (ADS)
Fu, Min; Cai, Chao; Dai, Zuxu
2018-03-01
The goal of steganography is to embed secret data into a cover so that no one apart from the sender and intended recipients can find the secret data. Usually, the way the cover is changed is decided by a hiding function, and no existing model could be used to find an optimal function that greatly reduces the distortion suffered by the cover. This paper models the cover carrying the secret message as a Markov chain, exploits the deterministic relation between the initial distribution and the transition matrix of the Markov chain, and uses the transition matrix as a constraint to decrease the statistical distortion the cover suffers in the process of information hiding. Furthermore, a hiding function is designed, and the transition matrix from the original cover to the stego cover is presented. Experimental results show that the new model preserves consistent statistical characterizations of the original and stego covers.
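The abstract's central idea, that constraining the stego chain's transition matrix preserves the cover's first-order statistics, can be illustrated with a minimal sketch (the 3-symbol source and its transition probabilities below are illustrative assumptions, not the paper's model):

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to a probability vector."""
    w, V = np.linalg.eig(P.T)
    i = np.argmin(np.abs(w - 1.0))
    pi = np.real(V[:, i])
    return pi / pi.sum()

# Hypothetical 3-symbol cover source (each row sums to 1).
P_cover = np.array([[0.6, 0.3, 0.1],
                    [0.2, 0.5, 0.3],
                    [0.3, 0.3, 0.4]])

pi = stationary_distribution(P_cover)

# A stego chain constrained to share the cover's transition matrix leaves
# the long-run symbol frequencies (first-order statistics) unchanged.
assert np.allclose(pi @ P_cover, pi)
```

The stationarity check `pi @ P_cover == pi` is exactly the invariance that a transition-matrix constraint buys: any embedding that respects `P_cover` cannot shift the symbol histogram in the long run.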
On Fluctuations of Eigenvalues of Random Band Matrices
NASA Astrophysics Data System (ADS)
Shcherbina, M.
2015-10-01
We consider the fluctuations of linear eigenvalue statistics of random band matrices whose entries have the form with i.i.d. possessing the th moment, where the function u has a finite support , so that M has only nonzero diagonals. The parameter b (called the bandwidth) is assumed to grow with n in a way such that . Without any additional assumptions on the growth of b we prove a CLT for linear eigenvalue statistics for a rather wide class of test functions. Thus we improve and generalize the results of the previous papers (Jana et al., arXiv:1412.2445; Li et al., Random Matrices 2:04, 2013), where the CLT was proven under the assumption . Moreover, we develop a method which allows one to prove automatically the CLT for linear eigenvalue statistics of smooth test functions for almost all classical models of random matrix theory: deformed Wigner and sample covariance matrices, sparse matrices, diluted random matrices, matrices with heavy tails, etc.
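The objects in this abstract can be sketched numerically: a symmetric random band matrix with bandwidth b, and a linear eigenvalue statistic Tr f(M). This is a minimal illustration assuming a simple 1/sqrt(b) scaling and a hard cutoff at |j - k| = b, not the paper's exact ensemble:

```python
import numpy as np

def random_band_matrix(n, b, rng):
    """Symmetric n x n matrix whose entries vanish when |j - k| > b."""
    W = rng.standard_normal((n, n))
    M = (W + W.T) / np.sqrt(2 * b)          # symmetrize; 1/sqrt(b) sets the scale
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) <= b
    return M * mask

rng = np.random.default_rng(0)
n, b = 200, 20
M = random_band_matrix(n, b, rng)
eigs = np.linalg.eigvalsh(M)

# A linear eigenvalue statistic for the test function f(x) = x^2,
# i.e. Tr f(M) = sum_k f(lambda_k); its fluctuations over the ensemble
# are what the CLT in the abstract is about.
S = np.sum(eigs**2)
```

Repeating the construction over many draws and histogramming S would show the (approximately Gaussian) fluctuations the CLT describes.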
Detecting Seismic Activity with a Covariance Matrix Analysis of Data Recorded on Seismic Arrays
NASA Astrophysics Data System (ADS)
Seydoux, L.; Shapiro, N.; de Rosny, J.; Brenguier, F.
2014-12-01
Modern seismic networks are recording the ground motion continuously all around the world, with very broadband and high-sensitivity sensors. The aim of our study is to apply statistical array-based approaches to the processing of these records. We use methods drawn mainly from random matrix theory to give a statistical description of seismic wavefields recorded at the Earth's surface. We estimate the array covariance matrix and explore the distribution of its eigenvalues, which contains information about the coherency of the sources that generated the studied wavefields. With this approach, we can distinguish between signals generated by isolated deterministic sources and the "random" ambient noise. We design an algorithm that uses the distribution of the array covariance matrix eigenvalues to detect signals corresponding to coherent seismic events. We investigate the detection capacity of our method at different scales and in different frequency ranges by applying it to the records of two networks: (1) the seismic monitoring network operating on the Piton de la Fournaise volcano at La Réunion island, composed of 21 receivers and with an aperture of ~15 km, and (2) the transportable component of the USArray, composed of ~400 receivers with ~70 km inter-station spacing.
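The detection principle, that a coherent source concentrates covariance energy in one dominant eigenvalue while noise gives a flat spectrum, can be sketched on synthetic array data (the station count, sample length, signal model, and the simple largest-over-mean eigenvalue detector here are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sta, n_t = 20, 2000                       # stations x time samples

def eig_ratio(X):
    """Largest covariance eigenvalue over the mean eigenvalue: a simple
    coherence detector (flat spectrum ~ noise, one dominant value ~ source)."""
    C = np.cov(X)                           # array covariance matrix
    w = np.linalg.eigvalsh(C)               # ascending eigenvalues
    return w[-1] / w.mean()

noise = rng.standard_normal((n_sta, n_t))   # incoherent ambient noise
signal = np.sin(2 * np.pi * 0.01 * np.arange(n_t))   # common wavefield
coherent = noise + 3.0 * signal             # same signal broadcast to all stations

# The coherent record produces a far larger eigenvalue-spread statistic.
assert eig_ratio(coherent) > eig_ratio(noise)
```

Thresholding such a ratio over sliding time windows is one way to turn the eigenvalue distribution into an event detector.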
Statistical properties of the stock and credit market: RMT and network topology
NASA Astrophysics Data System (ADS)
Lim, Kyuseong; Kim, Min Jae; Kim, Sehyun; Kim, Soo Yong
We analyzed the dependence structure of the credit and stock market using random matrix theory and network topology. The dynamics of both markets have been spotlighted throughout the subprime crisis. In this study, we compared these two markets in view of the market-wide effect from random matrix theory and eigenvalue analysis. We found that the largest eigenvalue of the credit market as a whole preceded that of the stock market at the beginning of the financial crisis, and that the two markets tended to be synchronized after the crisis. The correlation between the companies of both markets became considerably stronger after the crisis as well.
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
Subcritical Multiplicative Chaos for Regularized Counting Statistics from Random Matrix Theory
NASA Astrophysics Data System (ADS)
Lambert, Gaultier; Ostrovsky, Dmitry; Simm, Nick
2018-05-01
For an N × N Haar distributed random unitary matrix U_N, we consider the random field defined by counting the number of eigenvalues of U_N in a mesoscopic arc centered at the point u on the unit circle. We prove that after regularizing at a small scale ε_N > 0, the renormalized exponential of this field converges as N → ∞ to a Gaussian multiplicative chaos measure in the whole subcritical phase. We discuss implications of this result for obtaining a lower bound on the maximum of the field. We also show that the moments of the total mass converge to a Selberg-like integral and, by taking a further limit as the size of the arc diverges, we establish part of the conjectures in Ostrovsky (Nonlinearity 29(2):426-464, 2016). By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. Our approach to the L¹-phase is based on a generalization of the construction in Berestycki (Electron Commun Probab 22(27):12, 2017) to random fields which are only asymptotically Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.
Schweiner, Frank; Laturner, Jeanine; Main, Jörg; Wunner, Günter
2017-11-01
Until now, analytical formulas for the level spacing distribution function have been derived within random matrix theory only for specific crossovers between Poissonian statistics (P), the statistics of a Gaussian orthogonal ensemble (GOE), and the statistics of a Gaussian unitary ensemble (GUE). We investigate arbitrary crossovers in the triangle between all three statistics. To this aim we propose a corresponding formula for the level spacing distribution function depending on two parameters. Comparing the behavior of our formula for the special cases of P→GUE, P→GOE, and GOE→GUE with the results from random matrix theory, we show that these crossovers are described reasonably well. Recent investigations by F. Schweiner et al. [Phys. Rev. E 95, 062205 (2017)] have shown that the Hamiltonian of magnetoexcitons in cubic semiconductors can exhibit all three statistics depending on the system parameters. Evaluating the numerical results for magnetoexcitons in dependence on the excitation energy and on a parameter connected with the cubic valence band structure, and comparing the results with the proposed formula, allows us to distinguish between regular and chaotic behavior as well as between existent or broken antiunitary symmetries. Increasing one of the two parameters, transitions between different crossovers, e.g., from the P→GOE to the P→GUE crossover, are observed and discussed.
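The endpoints of these crossovers can be sketched numerically without any unfolding by using the consecutive gap-ratio statistic rather than the spacing distribution itself (a standard substitute; the matrix size and thresholds below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500

def mean_gap_ratio(levels):
    """Mean of r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}) over consecutive
    spacings s_n; approximately 0.386 for Poisson and 0.600 for GUE,
    independent of the local level density (so no unfolding is needed)."""
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

# Poisson statistics: independent uniform levels.
poisson_levels = rng.uniform(0, 1, N)

# GUE statistics: eigenvalues of a complex Hermitian Gaussian matrix.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (A + A.conj().T) / 2
gue_levels = np.linalg.eigvalsh(H)

r_p, r_gue = mean_gap_ratio(poisson_levels), mean_gap_ratio(gue_levels)
assert r_p < 0.47 < r_gue        # level repulsion pushes the GUE ratio up
```

Sweeping a system parameter and tracking where the mean ratio sits between the Poisson and GUE (or GOE) values is a quick diagnostic for the crossovers the abstract parametrizes.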
Finding a Hadamard matrix by simulated annealing of spin vectors
NASA Astrophysics Data System (ADS)
Bayu Suksmono, Andriyan
2017-05-01
Reformulation of a combinatorial problem as the optimization of a statistical-mechanics system enables finding a better solution using heuristics derived from a physical process, such as simulated annealing (SA). In this paper, we present a Hadamard matrix (H-matrix) searching method based on SA on an Ising model. By equivalence, an H-matrix can be converted into a seminormalized Hadamard (SH) matrix, whose first column is a unit vector and whose remaining columns, called SH vectors, have equal numbers of -1 and +1 entries. We define SH spin vectors as representations of the SH vectors, which play a role similar to that of the spins in an Ising model. The topology of the lattice is generalized into a graph whose edges represent orthogonality relationships among the SH spin vectors. Starting from a randomly generated quasi H-matrix Q, which is a matrix similar to the SH-matrix but without imposing orthogonality, we perform the SA. The transitions of Q are conducted by random exchanges of {+, -} spin pairs within the SH spin vectors, following the Metropolis update rule. Upon transition toward zero energy, the Q-matrix evolves along a Markov chain toward an orthogonal matrix, at which point the H-matrix is said to be found. We demonstrate the capability of the proposed method to find some low-order H-matrices, including ones that cannot trivially be constructed by the Sylvester method.
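The procedure described above can be sketched for the smallest nontrivial case, n = 4. This is a minimal illustration under assumed details (the quadratic energy function, cooling schedule, and restart policy are my choices, not necessarily the paper's):

```python
import numpy as np

def energy(H):
    """Sum of squared off-diagonal Gram entries; zero iff all columns orthogonal."""
    n = H.shape[0]
    G = H.T @ H
    return np.sum((G - n * np.eye(n, dtype=int)) ** 2)

def anneal(rng, n=4, steps=20000, T0=2.0, cooling=0.999):
    # Quasi H-matrix: first column all +1, other columns balanced in +1/-1.
    H = np.ones((n, n), dtype=int)
    for j in range(1, n):
        H[rng.choice(n, n // 2, replace=False), j] = -1
    T = T0
    for _ in range(steps):
        if energy(H) == 0:
            return H                            # Hadamard matrix found
        j = rng.integers(1, n)                  # pick an SH spin vector
        plus = np.flatnonzero(H[:, j] == 1)
        minus = np.flatnonzero(H[:, j] == -1)
        a, b = rng.choice(plus), rng.choice(minus)
        H2 = H.copy()
        H2[a, j], H2[b, j] = -1, 1              # exchange a {+,-} spin pair
        dE = energy(H2) - energy(H)
        if dE <= 0 or rng.random() < np.exp(-dE / T):   # Metropolis rule
            H = H2
        T *= cooling                            # cool slowly
    return None

rng = np.random.default_rng(3)
H = None
while H is None:                                # restart if a run stalls
    H = anneal(rng)

assert np.array_equal(H.T @ H, 4 * np.eye(4, dtype=int))
```

The spin-pair exchange keeps every column balanced, so orthogonality to the all-ones first column is preserved by construction and the energy only has to enforce orthogonality among the SH spin vectors.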
Nonlinear wave chaos: statistics of second harmonic fields.
Zhou, Min; Ott, Edward; Antonsen, Thomas M; Anlage, Steven M
2017-10-01
Concepts from the field of wave chaos have been shown to successfully predict the statistical properties of linear electromagnetic fields in electrically large enclosures. The Random Coupling Model (RCM) describes these properties by incorporating both universal features described by Random Matrix Theory and the system-specific features of particular system realizations. In an effort to extend this approach to the nonlinear domain, we add an active nonlinear frequency-doubling circuit to an otherwise linear wave chaotic system, and we measure the statistical properties of the resulting second harmonic fields. We develop an RCM-based model of this system as two linear chaotic cavities coupled by means of a nonlinear transfer function. The harmonic field strengths are predicted to be the product of two statistical quantities and the nonlinearity characteristics. Statistical results from measurement-based calculation, RCM-based simulation, and direct experimental measurements are compared and show good agreement over many decades of power.
Statistical properties of cross-correlation in the Korean stock market
NASA Astrophysics Data System (ADS)
Oh, G.; Eom, C.; Wang, F.; Jung, W.-S.; Stanley, H. E.; Kim, S.
2011-01-01
We investigate the statistical properties of the cross-correlation matrix between individual stocks traded in the Korean stock market using random matrix theory (RMT) and observe how these affect the portfolio weights in the Markowitz portfolio theory. We find that the distribution of the cross-correlation matrix is positively skewed and changes over time. We find that the eigenvalue distribution of the original cross-correlation matrix deviates from the eigenvalues predicted by the RMT, and the largest eigenvalue is 52 times larger than the maximum value among the eigenvalues predicted by the RMT. The β_{473} coefficient, which reflects the largest eigenvalue property, is 0.8, while one of the eigenvalues in the RMT is approximately zero. Notably, we show that the entropy function E(σ) with the portfolio risk σ for the original and filtered cross-correlation matrices is consistent with a power-law function, E(σ) ∼ σ^{-γ}, with exponent γ ≈ 2.92, and that the values for the Asian currency crisis decrease significantly.
Spectral statistics and scattering resonances of complex primes arrays
NASA Astrophysics Data System (ADS)
Wang, Ren; Pinheiro, Felipe A.; Dal Negro, Luca
2018-01-01
We introduce a class of aperiodic arrays of electric dipoles generated from the distribution of prime numbers in complex quadratic fields (Eisenstein and Gaussian primes) as well as quaternion primes (Hurwitz and Lifschitz primes), and study the nature of their scattering resonances using the vectorial Green's matrix method. In these systems we demonstrate several distinctive spectral properties, such as the absence of level repulsion in the strongly scattering regime, critical statistics of level spacings, and the existence of critical modes, which are extended fractal modes with long lifetimes not supported by either random or periodic systems. Moreover, we show that one can predict important physical properties, such as the existence of spectral gaps, by analyzing the eigenvalue distribution of the Green's matrix of the arrays in the complex plane. Our results unveil the importance of aperiodic correlations in prime number arrays for the engineering of gapped photonic media that support far richer mode localization and spectral properties compared with usual periodic and random media.
Path statistics, memory, and coarse-graining of continuous-time random walks on networks
Kion-Crosby, Willow; Morozov, Alexandre V.
2015-01-01
Continuous-time random walks (CTRWs) on discrete state spaces, ranging from regular lattices to complex networks, are ubiquitous across physics, chemistry, and biology. Models with coarse-grained states (for example, those employed in studies of molecular kinetics) or spatial disorder can give rise to memory and non-exponential distributions of waiting times and first-passage statistics. However, existing methods for analyzing CTRWs on complex energy landscapes do not address these effects. Here we use statistical mechanics of the nonequilibrium path ensemble to characterize first-passage CTRWs on networks with arbitrary connectivity, energy landscape, and waiting time distributions. Our approach can be applied to calculating higher moments (beyond the mean) of path length, time, and action, as well as statistics of any conservative or non-conservative force along a path. For homogeneous networks, we derive exact relations between length and time moments, quantifying the validity of approximating a continuous-time process with its discrete-time projection. For more general models, we obtain recursion relations, reminiscent of transfer matrix and exact enumeration techniques, to efficiently calculate path statistics numerically. We have implemented our algorithm in PathMAN (Path Matrix Algorithm for Networks), a Python script that users can apply to their model of choice. We demonstrate the algorithm on a few representative examples which underscore the importance of non-exponential distributions, memory, and coarse-graining in CTRWs. PMID:26646868
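The recursion-style calculation of first-passage statistics can be sketched, in its simplest form, for a CTRW with exponential waiting times, where the mean first-passage times solve one linear equation per transient state (the 4-node chain, unit rates, and absorbing target here are illustrative assumptions, not the paper's example):

```python
import numpy as np

# Generator (rate matrix) restricted to the transient states 0, 1, 2 of a
# CTRW on the chain 0-1-2-3 with unit hopping rates; node 3 is absorbing.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  1.0, -2.0]])

# Mean first-passage times t to node 3 satisfy Q t = -1 (one equation per
# transient state): the continuous-time analogue of the discrete recursion.
t = np.linalg.solve(Q, -np.ones(3))   # t = [6., 5., 3.]
```

Higher moments of the passage time and path-length statistics obey analogous linear recursions, which is what PathMAN automates for general waiting-time distributions.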
Quantifying fluctuations in economic systems by adapting methods of statistical physics
NASA Astrophysics Data System (ADS)
Stanley, H. E.; Gopikrishnan, P.; Plerou, V.; Amaral, L. A. N.
2000-12-01
The emerging subfield of econophysics explores the degree to which certain concepts and methods from statistical physics can be appropriately modified and adapted to provide new insights into questions that have been the focus of interest in the economics community. Here we give a brief overview of two examples of research topics that are receiving recent attention. A first topic is the characterization of the dynamics of stock price fluctuations. For example, we investigate the relation between trading activity - measured by the number of transactions N_Δt - and the price change G_Δt for a given stock, over a time interval [t, t + Δt]. We relate the time-dependent standard deviation of price fluctuations - volatility - to two microscopic quantities: the number of transactions N_Δt in Δt and the variance W_Δt² of the price changes for all transactions in Δt. Our work indicates that while the pronounced tails in the distribution of price fluctuations arise from W_Δt, the long-range correlations found in |G_Δt| are largely due to N_Δt. We also investigate the relation between price fluctuations and the number of shares Q_Δt traded in Δt. We find that the distribution of Q_Δt is consistent with a stable Lévy distribution, suggesting a Lévy scaling relationship between Q_Δt and N_Δt, which would provide one explanation for volume-volatility co-movement. A second topic concerns cross-correlations between the price fluctuations of different stocks. We adapt a conceptual framework, random matrix theory (RMT), first used in physics to interpret statistical properties of nuclear energy spectra. RMT makes predictions for the statistical properties of matrices that are universal, that is, do not depend on the interactions between the elements comprising the system.
In physical systems, deviations from the predictions of RMT provide clues regarding the mechanisms controlling the dynamics of a given system, so this framework can be of potential value if applied to economic systems. We discuss a systematic comparison between the statistics of the cross-correlation matrix C - whose elements C_ij are the correlation coefficients between the returns of stocks i and j - and those of a random matrix having the same symmetry properties. Our work suggests that RMT can be used to distinguish the random and non-random parts of C; the non-random part of C, which deviates from RMT results, provides information regarding genuine cross-correlations between stocks.
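The comparison between C and its random-matrix null can be sketched directly: for N uncorrelated return series of length T, RMT (the Marchenko-Pastur law) confines the eigenvalues of the correlation matrix to a known band, and deviations above it signal genuine correlations (the dimensions and tolerance below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 50, 500                      # "stocks" x observations, Q = T/N = 10

# Correlation matrix of i.i.d. returns: the null model with no true correlations.
R = rng.standard_normal((N, T))
R = (R - R.mean(1, keepdims=True)) / R.std(1, keepdims=True)
C = R @ R.T / T
eigs = np.linalg.eigvalsh(C)

# Marchenko-Pastur bounds for the eigenvalues of a random correlation matrix.
Q = T / N
lam_minus, lam_plus = (1 - Q**-0.5) ** 2, (1 + Q**-0.5) ** 2

# For pure noise, (almost) all eigenvalues fall inside [lam_minus, lam_plus];
# for real market data, eigenvalues above lam_plus (notably the "market mode")
# carry the genuine cross-correlations.
inside = np.mean((eigs > lam_minus) & (eigs < lam_plus))
assert inside > 0.9
```

Filtering C by keeping only the eigenmodes outside the RMT band is the standard way this comparison is turned into a de-noising procedure.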
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawano, Toshihiko
2015-11-10
This theoretical treatment of low-energy compound nucleus reactions begins with the Bohr hypothesis, with corrections, and various statistical theories. The author investigates the statistical properties of the scattering matrix containing a Gaussian Orthogonal Ensemble (GOE) Hamiltonian in the propagator. The following conclusions are reached: For all parameter values studied, the numerical average of MC-generated cross sections coincides with the result of the Verbaarschot, Weidenmueller, Zirnbauer triple-integral formula. Energy average and ensemble average agree reasonably well when the width Γ is one or two orders of magnitude larger than the average resonance spacing d. In the strong-absorption limit, the channel degree-of-freedom ν_a is 2. The direct reaction increases the inelastic cross sections while the elastic cross section is reduced.
Comparison between two surgical techniques for root coverage with an acellular dermal matrix graft.
Andrade, Patrícia F; Felipe, Maria Emília M C; Novaes, Arthur B; Souza, Sérgio L S; Taba, Mário; Palioto, Daniela B; Grisi, Márcio F M
2008-03-01
The aim of this randomized, controlled, clinical study was to compare two surgical techniques with the acellular dermal matrix graft (ADMG) to evaluate which technique could provide better root coverage. Fifteen patients with bilateral Miller Class I gingival recession areas were selected. In each patient, one recession area was randomly assigned to the control group, while the contra-lateral recession area was assigned to the test group. The ADMG was used in both groups. The control group was treated with a broader flap and vertical-releasing incisions, and the test group was treated with the proposed surgical technique, without releasing incisions. The clinical parameters evaluated before the surgeries and after 12 months were: gingival recession height, probing depth, relative clinical attachment level and the width and thickness of keratinized tissue. There were no statistically significant differences between the groups for all parameters at baseline. After 12 months, there was a statistically significant reduction in recession height in both groups, and there was no statistically significant difference between the techniques with regard to root coverage. Both surgical techniques provided significant reduction in gingival recession height after 12 months, and similar results in relation to root coverage.
NASA Astrophysics Data System (ADS)
Wang, Gang-Jin; Xie, Chi; Chen, Shou; Yang, Jiao-Jiao; Yang, Ming-Yan
2013-09-01
In this study, we first build two empirical cross-correlation matrices in the US stock market by two different methods, namely the Pearson's correlation coefficient and the detrended cross-correlation coefficient (DCCA coefficient). Then, combining the two matrices with the method of random matrix theory (RMT), we mainly investigate the statistical properties of cross-correlations in the US stock market. We choose the daily closing prices of 462 constituent stocks of the S&P 500 index as the research objects and select the sample data from January 3, 2005 to August 31, 2012. In the empirical analysis, we examine the statistical properties of the cross-correlation coefficients, the distribution of eigenvalues, the distribution of eigenvector components, and the inverse participation ratio. From the two methods, we find some new results on the cross-correlations in the US stock market, which differ from the conclusions reached by previous studies. The empirical cross-correlation matrices constructed by the DCCA coefficient show several interesting properties at different time scales in the US stock market, which are useful for risk management and optimal portfolio selection, especially for the diversity of the asset portfolio. It would be an interesting and meaningful task to find the theoretical eigenvalue distribution of a completely random matrix R for the DCCA coefficient, because it does not obey the Marčenko-Pastur distribution.
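The DCCA coefficient used above can be sketched in a minimal variant: integrate the demeaned series, detrend both profiles linearly in windows of size s, and form the ratio of the detrended covariance to the detrended standard deviations. (Non-overlapping windows and linear detrending are simplifying assumptions; published implementations often use overlapping boxes.)

```python
import numpy as np

def dcca_coefficient(x, y, s):
    """Detrended cross-correlation (DCCA) coefficient at window size s:
    rho = F2_xy / (F_x * F_y), with linear detrending of the profiles."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    covs, varxs, varys = [], [], []
    for i in range(0, len(X) - s + 1, s):           # non-overlapping windows
        w = np.arange(s)
        rx = X[i:i+s] - np.polyval(np.polyfit(w, X[i:i+s], 1), w)
        ry = Y[i:i+s] - np.polyval(np.polyfit(w, Y[i:i+s], 1), w)
        covs.append(np.mean(rx * ry))
        varxs.append(np.mean(rx * rx))
        varys.append(np.mean(ry * ry))
    return np.mean(covs) / np.sqrt(np.mean(varxs) * np.mean(varys))

rng = np.random.default_rng(5)
a = rng.standard_normal(1000)
b = 0.5 * a + rng.standard_normal(1000)             # partially correlated series

rho = dcca_coefficient(a, b, s=20)
assert abs(dcca_coefficient(a, a, s=20) - 1.0) < 1e-12   # self-correlation is 1
assert 0.0 < rho < 1.0
```

Computing rho pairwise over all stocks at a given scale s yields exactly the scale-dependent correlation matrices whose spectra the study analyzes.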
Computer programs and documentation
NASA Technical Reports Server (NTRS)
Speed, F. M.; Broadwater, S. L.
1971-01-01
Various statistical tests that were used to check out random number generators are described. A total of twelve different tests were considered, and from these, six were chosen to be used. The frequency test, max t test, run test, lag product test, gap test, and the matrix test are included.
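Two of the listed checks, the frequency test and the run test, can be sketched in Python (a modern stand-in for the original implementation; the bin count and the worked sequences are illustrative assumptions):

```python
def frequency_test(samples, bins=10):
    """Chi-square statistic for uniformity of samples in [0, 1)."""
    counts = [0] * bins
    for u in samples:
        counts[min(int(u * bins), bins - 1)] += 1
    expected = len(samples) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

def run_count(samples):
    """Number of runs above/below 0.5 (the core statistic of the run test)."""
    bits = [u >= 0.5 for u in samples]
    return 1 + sum(b != a for a, b in zip(bits, bits[1:]))

# A perfectly stratified sequence passes the frequency test exactly:
uniform = [i / 1000 for i in range(1000)]
assert frequency_test(uniform) == 0.0

# An alternating sequence maximizes the run count, a sign of anti-correlation:
alternating = [0.25, 0.75] * 500
assert run_count(alternating) == 1000
```

In practice the chi-square statistic is compared with a critical value (e.g. about 16.9 for 9 degrees of freedom at the 5% level), and the run count with its expected mean and variance under randomness.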
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forrester, Peter J., E-mail: p.forrester@ms.unimelb.edu.au; Thompson, Colin J.
The Golden-Thompson inequality, Tr(e^{A+B}) ≤ Tr(e^A e^B) for A, B Hermitian matrices, appeared in independent works by Golden and Thompson published in 1965. Both of these were motivated by considerations in statistical mechanics. In recent years the Golden-Thompson inequality has found applications to random matrix theory. In this article, we detail some historical aspects relating to Thompson's work, giving in particular a hitherto unpublished proof due to Dyson, and correspondence with Pólya. We show too how the 2 × 2 case relates to hyperbolic geometry, and how the original inequality holds true with the trace operation replaced by any unitarily invariant norm. In relation to the random matrix applications, we review its use in the derivation of concentration-type lemmas for sums of random matrices due to Ahlswede-Winter, and Oliveira, generalizing various classical results.
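The inequality is easy to check numerically for random Hermitian matrices; a minimal sketch (matrix size, number of trials, and the eigendecomposition-based exponential are illustrative choices):

```python
import numpy as np

def expm_hermitian(A):
    """Matrix exponential of a Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.conj().T      # V diag(e^w) V^dagger

def random_hermitian(n, rng):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X + X.conj().T) / 2

rng = np.random.default_rng(6)
for _ in range(100):
    A, B = random_hermitian(4, rng), random_hermitian(4, rng)
    lhs = np.trace(expm_hermitian(A + B)).real
    rhs = np.trace(expm_hermitian(A) @ expm_hermitian(B)).real
    assert lhs <= rhs * (1 + 1e-12)          # Tr e^{A+B} <= Tr(e^A e^B)
```

Both traces are real because A + B is Hermitian and Tr(e^A e^B) is the trace of a product of positive-definite matrices; equality holds exactly when A and B commute.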
The difference between two random mixed quantum states: exact and asymptotic spectral analysis
NASA Astrophysics Data System (ADS)
Mejía, José; Zapata, Camilo; Botero, Alonso
2017-01-01
We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
Levy Matrices and Financial Covariances
NASA Astrophysics Data System (ADS)
Burda, Zdzislaw; Jurkiewicz, Jerzy; Nowak, Maciej A.; Papp, Gabor; Zahed, Ismail
2003-10-01
In a given market, financial covariances capture the intra-stock correlations and can be used to address statistically the bulk nature of the market as a complex system. We provide a statistical analysis of three SP500 covariances with evidence for raw tail distributions. We study the stability of these tails against reshuffling for the SP500 data and show that the covariance with the strongest tails is robust, with a spectral density in remarkable agreement with random Lévy matrix theory. We study the inverse participation ratio for the three covariances. The strong localization observed at both ends of the spectral density is analogous to the localization exhibited in the random Lévy matrix ensemble. We discuss two competitive mechanisms responsible for the occurrence of an extensive and delocalized eigenvalue at the edge of the spectrum: (a) the Lévy character of the entries of the correlation matrix and (b) a sort of off-diagonal order induced by underlying inter-stock correlations. (b) can be destroyed by reshuffling, while (a) cannot. We show that the stocks with the largest scattering are the least susceptible to correlations, and likely candidates for the localized states. We introduce a simple model for price fluctuations which captures the behavior of the SP500 covariances. It may be of importance for asset diversification.
Yi, Ju Won; Kim, Jae Kwang
2015-03-01
The purpose of this study was to evaluate the clinical outcomes of cografting of acellular dermal matrix with autologous split-thickness skin versus autologous split-thickness skin graft alone for full-thickness skin defects on the extremities. In this prospective randomized study, 19 consecutive patients with full-thickness skin defects on the extremities following trauma underwent grafting using either cograft of acellular dermal matrix with autologous split-thickness skin graft (nine patients, group A) or autologous split-thickness skin graft alone (10 patients, group B) from June of 2011 to December of 2012. The postoperative evaluations included observation of complications (including graft necrosis, graft detachment, or seroma formation) and Vancouver Scar Scale score. No statistically significant difference was found regarding complications, including graft necrosis, graft detachment, or seroma formation. At week 8, significantly lower Vancouver Scar Scale scores for vascularity, pliability, height, and total score were found in group A compared with group B. At week 12, lower scores for pliability and height and total scores were identified in group A compared with group B. For cases with traumatic full-thickness skin defects on the extremities, a statistically significantly better result was achieved with cograft of acellular dermal matrix with autologous split-thickness skin graft than with autologous split-thickness skin graft alone in terms of Vancouver Scar Scale score. Level of Evidence: Therapeutic, II.
Financial time series: A physics perspective
NASA Astrophysics Data System (ADS)
Gopikrishnan, Parameswaran; Plerou, Vasiliki; Amaral, Luis A. N.; Rosenow, Bernd; Stanley, H. Eugene
2000-06-01
Physicists in the last few years have started applying concepts and methods of statistical physics to understand economic phenomena. The word ``econophysics'' is sometimes used to refer to this work. One reason for this interest is that economic systems such as financial markets are examples of complex interacting systems for which a huge amount of data exists, and economic problems viewed from a different perspective might yield new results. This article reviews the results of a few recent phenomenological studies focused on understanding the distinctive statistical properties of financial time series. We discuss three recent results. (i) The probability distribution of stock price fluctuations: Stock price fluctuations occur in all magnitudes, in analogy to earthquakes: from tiny fluctuations to very drastic events such as market crashes, e.g., the crash of October 19, 1987, sometimes referred to as ``Black Monday''. The distribution of price fluctuations decays with a power-law tail well outside the Lévy stable regime and describes fluctuations that differ by as much as 8 orders of magnitude. In addition, this distribution preserves its functional form for fluctuations on time scales that differ by 3 orders of magnitude, from 1 min up to approximately 10 days. (ii) Correlations in financial time series: While price fluctuations themselves have rapidly decaying correlations, the magnitude of fluctuations measured by either the absolute value or the square of the price fluctuations has correlations that decay as a power-law and persist for several months. (iii) Correlations among different companies: The third result bears on the application of random matrix theory to understand the correlations among price fluctuations of any two different stocks.
From a study of the eigenvalue statistics of the cross-correlation matrix constructed from price fluctuations of the leading 1000 stocks, we find that the largest 5-10% of the eigenvalues and the corresponding eigenvectors show systematic deviations from the predictions for a random matrix, whereas the rest of the eigenvalues conform to random matrix behavior, suggesting that these 5-10% of the eigenvalues contain system-specific information about correlated behavior.
NASA Astrophysics Data System (ADS)
Wirtz, Tim; Kieburg, Mario; Guhr, Thomas
2017-06-01
The correlated Wishart model provides the standard benchmark when analyzing time series of any kind. Unfortunately, the real case, which is the most relevant one in applications, poses serious challenges for analytical calculations. Often these challenges are due to square root singularities which cannot be handled using common random matrix techniques. We present a new way to tackle this issue. Using supersymmetry, we carry out an analytical study which we support by numerical simulations. For large but finite matrix dimensions, we show that statistical properties of the fully correlated real Wishart model generically approach those of a correlated real Wishart model with doubled matrix dimensions and doubly degenerate empirical eigenvalues. This holds for the local and global spectral statistics. With Monte Carlo simulations we show that this is even approximately true for small matrix dimensions. We explicitly investigate the k-point correlation function as well as the distribution of the largest eigenvalue, for which we find a surprisingly compact formula in the doubly degenerate case. Moreover, we show that on the local scale the k-point correlation function exhibits the sine and the Airy kernel in the bulk and at the soft edges, respectively. We also address the positions and the fluctuations of the possible outliers in the data.
Statistics of partially-polarized fields: beyond the Stokes vector and coherence matrix
NASA Astrophysics Data System (ADS)
Charnotskii, Mikhail
2017-08-01
Traditionally, partially polarized light is characterized by the four Stokes parameters. An equivalent description is provided by the correlation tensor of the optical field. These statistics specify only the second moments of the complex amplitudes of the narrow-band two-dimensional electric field of the optical wave. The electric field vector of a random quasi-monochromatic wave is a nonstationary, oscillating, two-dimensional real random variable. We introduce a novel statistical description of these partially polarized waves: the Period-Averaged Probability Density Function (PA-PDF) of the field. The PA-PDF contains more information on the polarization state of the field than the Stokes vector. In particular, in addition to the conventional distinction between the polarized and depolarized components of the field, the PA-PDF allows one to separate the coherent and fluctuating components of the field. We present several model examples of fields with identical Stokes vectors and very distinct shapes of the PA-PDF. In the simplest case of a nonstationary, oscillating, normal 2-D probability distribution of the real electric field and a stationary 4-D probability distribution of the complex amplitudes, the newly introduced PA-PDF is determined by 13 parameters that include the first moments and the covariance matrix of the quadrature components of the oscillating vector field.
Deformation effect on spectral statistics of nuclei
NASA Astrophysics Data System (ADS)
Sabri, H.; Jalili Majarshin, A.
2018-02-01
In this study, we sought significant relations between the spectral statistics of atomic nuclei and their different degrees of deformation. To this end, the empirical energy levels of 109 even-even nuclei in the 22 ≤ A ≤ 196 mass region are classified according to their experimental and calculated quadrupole, octupole, hexadecapole and hexacontatetrapole deformation values and analyzed by random matrix theory. Our results show an obvious relation between the regularity of nuclei and strong quadrupole, hexadecapole and hexacontatetrapole deformations, but for nuclei whose octupole deformations are nonzero we observe GOE-like statistics.
Generation of physical random numbers by using homodyne detection
NASA Astrophysics Data System (ADS)
Hirakawa, Kodai; Oya, Shota; Oguri, Yusuke; Ichikawa, Tsubasa; Eto, Yujiro; Hirano, Takuya; Tsurumaru, Toyohiro
2016-10-01
Physical random numbers generated by quantum measurements are, in principle, impossible to predict. We have demonstrated the generation of physical random numbers by using a high-speed balanced photodetector to measure the quadrature amplitudes of vacuum states. Using this method, random numbers were generated at 500 Mbps, which is more than one order of magnitude faster than previously reported [Gabriel et al., Nature Photonics 4, 711-715 (2010)]. The Crush test battery of the TestU01 suite consists of 31 tests in 144 variations, and we used them to statistically analyze these numbers. The generated random numbers passed 14 of the 31 tests. To improve the randomness, we performed a hash operation, in which each random number was multiplied by a random Toeplitz matrix; the resulting numbers passed all of the tests in the TestU01 Crush battery.
Random Matrix Theory and Econophysics
NASA Astrophysics Data System (ADS)
Rosenow, Bernd
2000-03-01
Random Matrix Theory (RMT) [1] is used in many branches of physics as a ``zero information hypothesis''. It describes generic behavior of different classes of systems, while deviations from its universal predictions allow one to identify system-specific properties. We use methods of RMT to analyze the cross-correlation matrix C of stock price changes [2] of the largest 1000 US companies. In addition to its scientific interest, the study of correlations between the returns of different stocks is also of practical relevance in quantifying the risk of a given stock portfolio. We find [3,4] that the statistics of most of the eigenvalues of the spectrum of C agree with the predictions of RMT, while there are deviations for some of the largest eigenvalues. We interpret these deviations as a system-specific property, i.e., containing genuine information about correlations in the stock market. We demonstrate that C shares universal properties with the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large ratios at both edges of the eigenvalue spectrum, a situation reminiscent of localization theory results. This work was done in collaboration with V. Plerou, P. Gopikrishnan, T. Guhr, L.A.N. Amaral, and H.E. Stanley and is related to recent work of Laloux et al. 1. T. Guhr, A. Müller-Groeling, and H.A. Weidenmüller, ``Random Matrix Theories in Quantum Physics: Common Concepts'', Phys. Rep. 299, 190 (1998). 2. See, e.g. R.N. Mantegna and H.E. Stanley, Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, England, 1999). 3. V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Universal and Nonuniversal Properties of Cross Correlations in Financial Time Series'', Phys. Rev. Lett. 83, 1471 (1999). 4. V. Plerou, P. Gopikrishnan, T. Guhr, B. Rosenow, L.A.N. Amaral, and H.E.
Stanley, ``Random Matrix Theory Analysis of Diffusion in Stock Price Dynamics,'' preprint.
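The inverse participation ratio used in this analysis admits a very short implementation. The sketch below (illustrative names, synthetic vectors, not the authors' code) shows its limiting values for a fully extended and a fully localized eigenvector:

```python
import numpy as np

def inverse_participation_ratio(vec):
    """IPR = sum_k u_k^4 for a normalized eigenvector u.
    About 1/N for an extended (delocalized) vector, about 1 for a localized one."""
    u = np.asarray(vec, dtype=float)
    u = u / np.linalg.norm(u)
    return float(np.sum(u ** 4))

N = 100
extended = np.ones(N)                  # equal weight on every component
localized = np.zeros(N)
localized[0] = 1.0                     # all weight on a single component
ipr_ext = inverse_participation_ratio(extended)   # equals 1/N
ipr_loc = inverse_participation_ratio(localized)  # equals 1
```

Eigenvectors of an empirical correlation matrix with IPR far above 1/N therefore involve only a few stocks, the "localized states" discussed above.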
Conditional random matrix ensembles and the stability of dynamical systems
NASA Astrophysics Data System (ADS)
Kirk, Paul; Rolando, Delphine M. Y.; MacLean, Adam L.; Stumpf, Michael P. H.
2015-08-01
Random matrix theory (RMT) has found applications throughout physics and applied mathematics, in subject areas as diverse as communications networks, population dynamics, neuroscience, and models of the banking system. Many of these analyses exploit elegant analytical results, particularly the circular law and its extensions. In order to apply these results, assumptions must be made about the distribution of matrix elements. Here we demonstrate that the choice of matrix distribution is crucial. In particular, adopting an unrealistic matrix distribution for the sake of analytical tractability is liable to lead to misleading conclusions. We focus on the application of RMT to the long-standing, and at times fractious, ‘diversity-stability debate’, which is concerned with establishing whether large complex systems are likely to be stable. Early work (and subsequent elaborations) brought RMT to bear on the debate by modelling the entries of a system’s Jacobian matrix as independent and identically distributed (i.i.d.) random variables. These analyses were successful in yielding general results that were not tied to any specific system, but relied upon a restrictive i.i.d. assumption. Other studies took an opposing approach, seeking to elucidate general principles of stability through the analysis of specific systems. Here we develop a statistical framework that reconciles these two contrasting approaches. We use a range of illustrative dynamical systems examples to demonstrate that: (i) stability probability cannot be summarily deduced from any single property of the system (e.g. its diversity); and (ii) our assessment of stability depends on adequately capturing the details of the systems analysed. Failing to condition on the structure of dynamical systems will skew our analysis and can, even for very small systems, result in an unnecessarily pessimistic diagnosis of their stability.
Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.
Saccenti, Edoardo; Timmerman, Marieke E
2017-03-01
Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
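A simplified sketch of Horn's parallel analysis as discussed above, on synthetic data with two planted components. All sizes, the percentile choice, and the noise model are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def parallel_analysis(data, n_sims=200, quantile=95, seed=0):
    """Simplified Horn's parallel analysis on a covariance matrix: retain
    components whose eigenvalues exceed the given percentile of eigenvalues
    obtained from uncorrelated Gaussian data of the same shape and scale."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.cov(data, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        noise = rng.standard_normal((n, p)) * data.std(axis=0, ddof=1)
        sims[i] = np.sort(np.linalg.eigvalsh(np.cov(noise, rowvar=False)))[::-1]
    thresh = np.percentile(sims, quantile, axis=0)
    return int(np.sum(obs > thresh))

rng = np.random.default_rng(1)
n, p, k = 500, 20, 2
scores = rng.standard_normal((n, k)) * np.array([6.0, 4.0])   # two strong factors
loadings = rng.standard_normal((k, p))
data = scores @ loadings + rng.standard_normal((n, p))        # plus unit noise
n_components = parallel_analysis(data)   # the two planted components
```

As the abstract notes, this thresholding is only well grounded for the leading eigenvalue; a Tracy-Widom test is preferable for higher-order components.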
NASA Astrophysics Data System (ADS)
Warchoł, Piotr
2018-06-01
The public transportation system of Cuernavaca, Mexico, exhibits random matrix theory statistics. In particular, the fluctuation of the times between bus arrivals at a given stop follows the Wigner surmise for the Gaussian unitary ensemble. To model this, we propose an agent-based approach in which each bus driver tries to optimize his arrival time at the next stop with respect to an estimated arrival time of his predecessor. We choose a particular form of the associated utility function and recover the appropriate distribution in numerical experiments for a certain value of the only parameter of the model. We then investigate whether this value of the parameter is otherwise distinguished within an information-theoretic approach and give numerical evidence that it is indeed associated with a minimum of averaged pairwise mutual information.
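The Wigner surmise for the GUE invoked above can be checked numerically. This sketch (matrix size and sample count are illustrative) samples small GUE matrices and verifies the suppression of small spacings, i.e., level repulsion:

```python
import numpy as np

def wigner_surmise_gue(s):
    """Wigner surmise for the GUE: p(s) = (32/pi^2) s^2 exp(-4 s^2 / pi)."""
    return (32.0 / np.pi ** 2) * s ** 2 * np.exp(-4.0 * s ** 2 / np.pi)

def gue_spacings(n, n_samples, seed=0):
    """Central nearest-neighbour spacings of random GUE matrices,
    normalized to unit mean spacing."""
    rng = np.random.default_rng(seed)
    spacings = []
    for _ in range(n_samples):
        a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        h = (a + a.conj().T) / 2.0            # Hermitian (GUE up to scale)
        ev = np.sort(np.linalg.eigvalsh(h))
        spacings.append(ev[n // 2] - ev[n // 2 - 1])   # spacing at spectrum centre
    s = np.array(spacings)
    return s / s.mean()

s = gue_spacings(20, 400)
# Level repulsion: p(s) ~ s^2 near zero, so tiny spacings are rare.
frac_tiny = float(np.mean(s < 0.1))
```

A histogram of `s` against `wigner_surmise_gue` would show the agreement; uncorrelated (Poisson) arrivals would instead pile up at small spacings.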
Random matrix approach to cross correlations in financial data
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene
2002-06-01
We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novaes, Marcel
2015-06-15
We consider S-matrix correlation functions for a chaotic cavity having M open channels, in the absence of time-reversal invariance. Relying on a semiclassical approximation, we compute the average over E of the quantities Tr[S{sup †}(E − ϵ) S(E + ϵ)]{sup n}, for general positive integer n. Our result is an infinite series in ϵ, whose coefficients are rational functions of M. From this, we extract moments of the time delay matrix Q = − iħS{sup †}dS/dE and check that the first 8 of them agree with the random matrix theory prediction from our previous paper [M. Novaes, J. Math. Phys. 56, 062110 (2015)].
Dimension from covariance matrices.
Carroll, T L; Byers, J M
2017-02-01
We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
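The core comparison is easy to sketch (this is not the authors' algorithm and omits their statistical validity test): a noisy sine wave, delay-embedded, yields two dominant covariance eigenvalues, reflecting its two-dimensional embedding. Names, the embedding dimension, and the dominance threshold are illustrative:

```python
import numpy as np

def delay_embed(x, dim):
    """Time-delay embedding of a scalar series into `dim` lagged columns."""
    n = len(x) - dim + 1
    return np.column_stack([x[i:i + n] for i in range(dim)])

def covariance_eigenvalues(mat):
    """Descending eigenvalues of the column covariance matrix."""
    return np.sort(np.linalg.eigvalsh(np.cov(mat, rowvar=False)))[::-1]

rng = np.random.default_rng(2)
t = np.arange(2000) * 0.05
signal = np.sin(t) + 0.01 * rng.standard_normal(t.size)   # ~2-D closed orbit
eigs = covariance_eigenvalues(delay_embed(signal, 10))
# Count eigenvalues well above the noise floor (threshold is illustrative).
dominant = int(np.sum(eigs > 0.005 * eigs[0]))            # two for a sine wave
```

The paper's contribution is to replace such an ad hoc threshold with a statistical test against the Gaussian-process eigenvalue distribution.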
Weak scattering of scalar and electromagnetic random fields
NASA Astrophysics Data System (ADS)
Tong, Zhisong
This dissertation encompasses several studies relating to the theory of weak potential scattering of scalar and electromagnetic random, wide-sense statistically stationary fields from various types of deterministic or random linear media. The proposed theory is largely based on the first Born approximation for potential scattering and on the angular spectrum representation of fields. The main focus of the scalar counterpart of the theory is the calculation of the second-order statistics of scattered light fields in cases when the scattering medium consists of several types of discrete particles with deterministic or random potentials. It is shown that knowledge of the correlation properties for the particles of the same and different types, described with the newly introduced pair-scattering matrix, is crucial for determining the spectral and coherence states of the scattered radiation. The approach based on the pair-scattering matrix is then used for solving an inverse problem of determining the location of an "alien" particle within the scattering collection of "normal" particles, from several measurements of the spectral density of scattered light. Weak scalar scattering of light from a particulate medium in the presence of optical turbulence existing between the scattering centers is then approached using a combination of Born's theory for treating the light interaction with discrete particles and Rytov's theory for light propagation in an extended turbulent medium. It is demonstrated how the statistics of scattered radiation depend on the scattering potentials of the particles and the power spectra of the refractive-index fluctuations of the turbulence. This theory is of utmost importance for applications involving atmospheric and oceanic light transmission.
The second part of the dissertation includes the theoretical procedure developed for predicting the second-order statistics of the electromagnetic random fields, such as polarization and linear momentum, scattered from static media. The spatial distribution of these properties of scattered fields is shown to be substantially dependent on the correlation and polarization properties of incident fields and on the statistics of the refractive index distribution within the scatterers. Further, an example is considered which illustrates the usefulness of the electromagnetic scattering theory of random fields in the case when the scattering medium is a thin bio-tissue layer with the prescribed power spectrum of the refractive index fluctuations. The polarization state of the scattered light is shown to be influenced by correlation and polarization states of the illumination as well as by the particle size distribution of the tissue slice.
High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole
NASA Astrophysics Data System (ADS)
Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei
2018-01-01
Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst-case scenario in which the quantum adversary launches the most powerful attacks. After considering statistical fluctuations and applying an 80 Gb × 45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10^-5. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.
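Toeplitz-matrix hashing of the kind used here can be sketched at vastly smaller scale; the bit lengths are illustrative, and the seed bits stand in for the trusted random seed that defines the matrix:

```python
import random

def toeplitz_hash(bits, first_col, first_row):
    """Multiply a raw bit string by a binary Toeplitz matrix over GF(2).
    Output length = len(first_col); input length = len(first_row)."""
    n_out, n_in = len(first_col), len(first_row)
    assert n_in == len(bits)
    out = []
    for i in range(n_out):
        acc = 0
        for j in range(n_in):
            # A Toeplitz matrix entry T[i][j] depends only on i - j.
            t = first_col[i - j] if i >= j else first_row[j - i]
            acc ^= t & bits[j]
        out.append(acc)
    return out

random.seed(7)
raw = [random.getrandbits(1) for _ in range(256)]    # raw (possibly biased) bits
col = [random.getrandbits(1) for _ in range(128)]    # seed: first matrix column
row = [random.getrandbits(1) for _ in range(256)]    # seed: first matrix row
extracted = toeplitz_hash(raw, col, row)             # 128 extracted bits
```

The map is linear over GF(2), which is what makes Toeplitz hashing a two-universal family suitable for randomness extraction; the 256-to-128 compression ratio here is purely illustrative.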
Reino, Danilo Maeda; Maia, Luciana Prado; Fernandes, Patrícia Garani; Souza, Sergio Luis Scombatti de; Taba Junior, Mario; Palioto, Daniela Bazan; Grisi, Marcio Fermandes de Moraes; Novaes, Arthur Belém
2015-10-01
The aim of this randomized controlled clinical study was to compare the extended flap technique (EFT) with the coronally advanced flap technique (CAF) using a porcine collagen matrix (PCM) for root coverage. Twenty patients with two bilateral gingival recessions, Miller class I or II on non-molar teeth, were treated with CAF+PCM (control group) or EFT+PCM (test group). Clinical measurements of probing pocket depth (PPD), clinical attachment level (CAL), recession height (RH), keratinized tissue height (KTH), and keratinized mucosa thickness (KMT) were determined at baseline, 3 and 6 months post-surgery. At 6 months, the mean root coverage for the test group was 81.89%, and for the control group it was 62.80% (p<0.01). The change of recession depth from baseline was statistically significant between test and control groups, with a mean of 2.21 mm gained at the control sites and 2.84 mm gained at the test sites (p=0.02). There were no statistically significant differences for KTH, PPD or CAL comparing the two therapies. The extended flap technique presented better root coverage than the coronally advanced flap technique when PCM was used.
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks
NASA Astrophysics Data System (ADS)
Frahm, Klaus M.; Shepelyansky, Dima L.
2014-04-01
We use the methods of quantum chaos and Random Matrix Theory for analysis of statistical fluctuations of PageRank probabilities in directed networks. In this approach the effective energy levels are given by a logarithm of PageRank probability at a given node. After the standard energy level unfolding procedure we establish that the nearest spacing distribution of PageRank probabilities is described by the Poisson law typical for integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French and German. We argue that due to absence of level repulsion the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that the nearby PageRank probabilities fluctuate as random independent variables.
A random matrix approach to language acquisition
NASA Astrophysics Data System (ADS)
Nicolaidis, A.; Kosmidis, Kosmas; Argyrakis, Panos
2009-12-01
Since language is tied to cognition, we expect the linguistic structures to reflect patterns that we encounter in nature and are analyzed by physics. Within this realm we investigate the process of lexicon acquisition, using analytical and tractable methods developed within physics. A lexicon is a mapping between sounds and referents of the perceived world. This mapping is represented by a matrix and the linguistic interaction among individuals is described by a random matrix model. There are two essential parameters in our approach. The strength of the linguistic interaction β, which is considered as a genetically determined ability, and the number N of sounds employed (the lexicon size). Our model of linguistic interaction is analytically studied using methods of statistical physics and simulated by Monte Carlo techniques. The analysis reveals an intricate relationship between the innate propensity for language acquisition β and the lexicon size N, N~exp(β). Thus a small increase of the genetically determined β may lead to an incredible lexical explosion. Our approximate scheme offers an explanation for the biological affinity of different species and their simultaneous linguistic disparity.
Implementation of a quantum random number generator based on the optimal clustering of photocounts
NASA Astrophysics Data System (ADS)
Balygin, K. A.; Zaitsev, V. I.; Klimov, A. N.; Kulik, S. P.; Molotkov, S. N.
2017-10-01
To implement quantum random number generators, it is fundamentally important to have a mathematically provable and experimentally testable process of measurements of a system from which an initial random sequence is generated. This ensures that the randomness indeed has a quantum nature. A quantum random number generator has been implemented with the use of the detection of quasi-single-photon radiation by a silicon photomultiplier (SiPM) matrix, which makes it possible to reliably reach the Poisson statistics of photocounts. The choice and use of the optimal clustering of photocounts for the initial sequence of photodetection events, together with a method of extraction of a random sequence of 0's and 1's that is polynomial in the length of the sequence, have made it possible to reach an output rate of 64 Mbit/s for the certified random sequence.
Harris, Randall J
2004-05-01
Obtaining predictable and esthetic root coverage has become important. Unfortunately, there is only a limited amount of information available on the long-term results of root coverage procedures. The goal of this study was to evaluate the short-term and long-term root coverage results obtained with an acellular dermal matrix and a subepithelial graft. An a priori power analysis was done to determine that 25 was an adequate sample size for each group in this study. Twenty-five patients treated with either an acellular dermal matrix or a subepithelial graft for root coverage were included in this study. The short-term (mean 12.3 to 13.2 weeks) and long-term (mean 48.1 to 49.2 months) results were compared. Additionally, various factors were evaluated to determine whether they could affect the results. This study was a retrospective study of patients in a fee-for-service private periodontal practice. The patients were not randomly assigned to treatment groups. The mean root coverages for the short-term acellular dermal matrix (93.4%), short-term subepithelial graft (96.6%), and long-term subepithelial graft (97.0%) were statistically similar. All three were statistically greater than the long-term acellular dermal matrix mean root coverage (65.8%). Similar results were noted in the change in recession. There were smaller probing reductions and less of an increase in keratinized tissue with the acellular dermal matrix than the subepithelial graft. None of the factors evaluated resulted in the acellular dermal graft having a statistically significantly better result than the subepithelial graft. However, in long-term cases where multiple defects were treated with an acellular dermal matrix, the mean root coverage (70.8%) was greater than the mean root coverage in long-term cases where a single defect was treated with an acellular dermal matrix (50.0%).
The mean results with the subepithelial graft held up with time better than the mean results with an acellular dermal matrix. However, the results were not universal. In 32.0% of the cases treated with an acellular dermal matrix, the results improved or remained stable with time.
Bayesian estimation of Karhunen–Loève expansions: A random subspace approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhary, Kenny; Najm, Habib N.
2016-04-13
One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process: the goal is to find an orthogonal transformation of the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., minimizes the mean square error. In practice, this orthogonal transformation is determined by performing a Singular Value Decomposition (SVD) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE, and it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure that yields a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
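The SVD-based empirical KLE described above can be sketched as follows (a minimal numpy illustration; the Brownian-motion-like process and all sizes are arbitrary choices, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample paths of a Brownian-motion-like process on [0, 1] (rows = samples).
n_samples, n_points = 200, 64
dt = 1.0 / n_points
X = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_points)), axis=1)

# Empirical KLE/PCA: center the data, then take the SVD of the data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigvals = s**2 / (n_samples - 1)   # eigenvalues of the sample covariance
modes = Vt                          # orthonormal basis functions (rows)

# Truncated KLE: project onto the k leading modes.
k = 5
coeffs = Xc @ modes[:k].T
X_approx = X.mean(axis=0) + coeffs @ modes[:k]
rel_err = np.linalg.norm(X - X_approx) / np.linalg.norm(X)
```

With only 200 samples the estimated modes carry exactly the kind of sampling error that the Bayesian procedure is designed to quantify.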
Random pure states: Quantifying bipartite entanglement beyond the linear statistics.
Vivo, Pierpaolo; Pato, Mauricio P; Oshanin, Gleb
2016-05-01
We analyze the properties of entangled random pure states of a quantum system partitioned into two smaller subsystems of dimensions N and M. Framing the problem in terms of random matrices with a fixed-trace constraint, we establish, for arbitrary N ≤ M, a general relation between the n-point densities and the cross moments of the eigenvalues of the reduced density matrix, i.e., the so-called Schmidt eigenvalues, and the analogous functionals of the eigenvalues of the Wishart-Laguerre ensemble of random matrix theory. This allows us to derive explicit expressions for two-level densities, as well as an exact expression for the variance of the von Neumann entropy at finite N, M. We then focus on the moments E[K^a] of the Schmidt number K, the reciprocal of the purity. This is a random variable supported on [1, N] which quantifies the number of degrees of freedom effectively contributing to the entanglement. We derive a wealth of analytical results for E[K^a] for N = 2 and 3 and arbitrary M, and also for square N = M systems, by spotting for the latter a connection with the probability P(x_min^GUE ≥ √(2N)ξ) that the smallest eigenvalue x_min^GUE of an N×N matrix belonging to the Gaussian unitary ensemble is larger than √(2N)ξ. As a by-product, we present an exact asymptotic expansion of P(x_min^GUE ≥ √(2N)ξ) for finite N as ξ → ∞. Our results are corroborated by numerical simulations wherever possible, with excellent agreement.
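The basic objects above are easy to sample numerically: the Schmidt eigenvalues of a random pure state are the eigenvalues of a fixed-trace Wishart matrix, and the Schmidt number K = 1/purity lies in [1, N]. This sketch assumes the standard Gaussian (Hilbert-Schmidt) construction of random pure states:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 3, 8  # subsystem dimensions, N <= M

def schmidt_eigenvalues(rng, N, M):
    """Eigenvalues of the reduced density matrix of the smaller subsystem
    for a random pure state on C^N (x) C^M (fixed-trace Wishart matrix)."""
    G = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))
    W = G @ G.conj().T
    rho = W / np.trace(W).real   # fixed-trace constraint: tr(rho) = 1
    return np.sort(np.linalg.eigvalsh(rho))

lams = schmidt_eigenvalues(rng, N, M)
purity = float(np.sum(lams**2))
K = 1.0 / purity                 # Schmidt number, supported on [1, N]
```

Averaging K and its powers over many draws gives Monte Carlo estimates of the moments E[K^a] studied analytically in the paper.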
Bootstrapping on Undirected Binary Networks Via Statistical Mechanics
NASA Astrophysics Data System (ADS)
Fushing, Hsieh; Chen, Chen; Liu, Shan-Yu; Koehl, Patrice
2014-09-01
We propose a new method inspired by statistical mechanics for extracting geometric information from undirected binary networks and generating random networks that conform to this geometry. In this method an undirected binary network is perceived as a thermodynamic system with a collection of permuted adjacency matrices as its states. The task of extracting information from the network is then reformulated as a discrete combinatorial optimization problem of searching for its ground state. To solve this problem, we apply multiple ensembles of temperature-regulated Markov chains to establish an ultrametric geometry on the network. This geometry is equipped with a tree hierarchy that captures the multiscale community structure of the network. We translate this geometry into a Parisi adjacency matrix, which has a relatively low energy level and is in the vicinity of the ground state. The Parisi adjacency matrix is then further optimized by making block permutations subject to the ultrametric geometry. The optimal matrix corresponds to the macrostate of the original network. An ensemble of random networks is then generated such that each of these networks conforms to this macrostate; the corresponding algorithm also provides an estimate of the size of this ensemble. By repeating this procedure at different scales of the ultrametric geometry of the network, it is possible to compute its evolution entropy, i.e., to estimate the evolution of its complexity as we move from a coarse to a fine description of its geometric structure. We demonstrate the performance of this method on simulated as well as real data networks.
Asymmetric correlation matrices: an analysis of financial data
NASA Astrophysics Data System (ADS)
Livan, G.; Rebecchi, L.
2012-06-01
We analyse the spectral properties of correlation matrices between distinct statistical systems. Such matrices are intrinsically non-symmetric, and lend themselves to extending the spectral analyses usually performed on standard Pearson correlation matrices to the realm of complex eigenvalues. We employ some recent random matrix theory results on the average eigenvalue density of this type of matrix to distinguish between noise and non-trivial correlation structures, and we focus on financial data as a case study. Namely, we employ daily prices of stocks belonging to the American and British stock exchanges, and look for the emergence of correlations between the two markets in the eigenvalue spectrum of their non-symmetric correlation matrix. We find several non-trivial results when considering time-lagged correlations over short lags, and we corroborate our findings by additionally studying the asymmetric correlation matrix of the principal components of our datasets.
Embedded random matrix ensembles from nuclear structure and their recent applications
NASA Astrophysics Data System (ADS)
Kota, V. K. B.; Chavda, N. D.
Embedded random matrix ensembles generated by random interactions (of low body rank, usually two-body) in the presence of a one-body mean field, introduced in nuclear structure physics, are now established to be indispensable in describing statistical properties of a large number of isolated finite quantum many-particle systems. Lie algebra symmetries of the interactions, as identified from the nuclear shell model and the interacting boson model, led to the introduction of a variety of embedded ensembles (EEs). These ensembles, with a mean field and a chaos-generating two-body interaction, generate, in three successive stages, delocalization of wave functions in the Fock space of the mean-field basis states. The last stage corresponds to what one may call thermalization, and complex nuclei, as seen from many shell model calculations, lie in this region. After briefly describing these ensembles, we present their recent applications to nuclear structure: (i) nuclear level densities with interactions; (ii) orbit occupancies; (iii) neutrinoless double beta decay nuclear transition matrix elements as transition strengths. We also briefly present applications beyond nuclear structure: (i) fidelity, decoherence, entanglement and thermalization in isolated finite quantum systems with interactions; (ii) quantum transport in disordered networks connected by many-body interactions with centrosymmetry; (iii) the semicircle-to-Gaussian transition in eigenvalue densities with k-body random interactions and its relation to the Sachdev-Ye-Kitaev (SYK) model for Majorana fermions.
To cut or not to cut? Assessing the modular structure of brain networks.
Chang, Yu-Teng; Pantazis, Dimitrios; Leahy, Richard M
2014-05-01
A wealth of methods has been developed to identify natural divisions of brain networks into groups or modules, with one of the most prominent being modularity. Compared with the popularity of methods to detect community structure, only a few methods exist to statistically control for spurious modules, relying almost exclusively on resampling techniques. It is well known that even random networks can exhibit high modularity because of incidental concentration of edges, even though they have no underlying organizational structure. Consequently, interpretation of community structure is confounded by the lack of principled and computationally tractable approaches to statistically control for spurious modules. In this paper we show that the modularity of random networks follows a transformed version of the Tracy-Widom distribution, providing for the first time a link between module detection and random matrix theory. We compute parametric formulas for the distribution of modularity for random networks as a function of network size and edge variance, and show that we can efficiently control for false positives in brain and other real-world networks. Copyright © 2014 Elsevier Inc. All rights reserved.
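The paper's central object, the modularity of a purely random network, can be sketched with the standard Newman spectral formulation (a generic illustration with an Erdős-Rényi graph, not the authors' Tracy-Widom fit):

```python
import numpy as np

rng = np.random.default_rng(2)

# Erdős-Rényi random graph: no planted community structure at all.
n, p = 60, 0.2
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T

k = A.sum(axis=1)
two_m = k.sum()
B = A - np.outer(k, k) / two_m   # Newman's modularity matrix

# Leading-eigenvector bipartition and its modularity Q; even a pure
# random graph typically yields Q > 0, the "spurious module" effect.
w, V = np.linalg.eigh(B)
s = np.where(V[:, -1] >= 0, 1.0, -1.0)
Q = (s @ B @ s) / (2 * two_m)
```

Repeating this over many realizations gives the empirical modularity distribution of random networks whose parametric (Tracy-Widom-based) form the paper derives.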
Alves, Luciana B; Costa, Priscila P; Scombatti de Souza, Sérgio Luís; de Moraes Grisi, Márcio F; Palioto, Daniela B; Taba, Mario; Novaes, Arthur B
2012-04-01
The aim of this randomized controlled clinical study was to compare the use of an acellular dermal matrix graft (ADMG) with or without the enamel matrix derivative (EMD) in smokers, to evaluate which procedure would provide better root coverage. Nineteen smokers with bilateral Miller Class I or II gingival recessions ≥3 mm were selected. The test group was treated with an association of ADMG and EMD, and the control group with ADMG alone. Probing depth, relative clinical attachment level, gingival recession height, gingival recession width, keratinized tissue width and keratinized tissue thickness were evaluated before the surgeries and after 6 months. The Wilcoxon test was used for the statistical analysis at a significance level of 5%. No significant differences were found between groups in any parameter at baseline. The mean gain in recession height between baseline and 6 months and the complete root coverage favored the test group (p = 0.042 and p = 0.019, respectively). Smoking may negatively affect the results achieved through periodontal plastic procedures; however, the association of ADMG and EMD is beneficial in the root coverage of gingival recessions in smokers, 6 months after the surgery. © 2012 John Wiley & Sons A/S.
Work distributions for random sudden quantum quenches
NASA Astrophysics Data System (ADS)
Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter
2017-05-01
The statistics of work performed on a system by a sudden random quench is investigated. Considering systems with finite-dimensional Hilbert spaces, we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
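The two-level, infinite-temperature case described above is simple to simulate with the two-point measurement definition of work (a sketch under the stated assumptions; the Hamiltonian normalization and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)

def gue(n, rng):
    """Draw an n x n GUE matrix (Hermitian, Gaussian entries)."""
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (G + G.conj().T) / 2

# Sudden quench H0 -> H for a two-level system at infinite temperature:
# measure the initial energy, quench, then measure the final energy,
# with transition probabilities given by eigenvector overlaps.
H0 = np.diag([-0.5, 0.5])        # sharply given initial Hamiltonian
e0, v0 = np.linalg.eigh(H0)

works = []
for _ in range(20_000):
    e1, v1 = np.linalg.eigh(gue(2, rng))
    i = rng.integers(2)          # infinite temperature: uniform initial state
    p = np.abs(v1.conj().T @ v0[:, i])**2
    f = rng.choice(2, p=p / p.sum())
    works.append(e1[f] - e0[i])

works = np.asarray(works)
mean_work = float(works.mean())  # zero on average, by GUE sign symmetry
```

A histogram of `works` is a Monte Carlo estimate of the work pdf derived in closed form in the paper.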
High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole.
Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei
2018-01-05
Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst-case scenario in which the adversary launches the most powerful quantum attacks. After considering statistical fluctuations and applying an 80 Gb×45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10^{-5}. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.
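Toeplitz matrix hashing, the extraction step used above, multiplies the raw bit string by a random binary Toeplitz matrix over GF(2). A small-scale sketch (the 256-bit and 64-bit sizes are illustrative; the experiment uses an 80 Gb×45.6 Mb matrix):

```python
import numpy as np

rng = np.random.default_rng(4)

def toeplitz_hash(raw_bits, seed_bits, m):
    """Extract m nearly uniform bits from n raw bits using an m x n binary
    Toeplitz matrix defined by n + m - 1 seed bits (arithmetic over GF(2))."""
    n = len(raw_bits)
    # T[i, j] = seed_bits[(m - 1) + (j - i)]: constant along each diagonal.
    T = np.array([seed_bits[m - 1 - i:m - 1 - i + n] for i in range(m)])
    return (T @ raw_bits) % 2

n, m = 256, 64
raw = rng.integers(0, 2, size=n)
seed = rng.integers(0, 2, size=n + m - 1)
out = toeplitz_hash(raw, seed, m)
```

Because the matrix is determined by only n + m - 1 seed bits rather than n·m, the family is two-universal and, in practice, can be applied at high rates with FFT-based multiplication.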
NASA Astrophysics Data System (ADS)
Ebrahimi, R.; Zohren, S.
2018-03-01
In this paper we extend the orthogonal polynomials approach for extreme value calculations of Hermitian random matrices, developed by Nadal and Majumdar (J. Stat. Mech. P04001 arXiv:1102.0738), to normal random matrices and 2D Coulomb gases in general. First, we show that this approach provides an alternative derivation of results in the literature. More precisely, we show convergence of the rescaled eigenvalue with largest modulus of a normal Gaussian ensemble to a Gumbel distribution, as well as universality for an arbitrary radially symmetric potential. Second, we show that this approach can be generalised to obtain convergence of the eigenvalue with smallest modulus and its universality for ring distributions. Most interestingly, the techniques presented here are used to compute all slowly varying finite-N corrections to the above distributions, which is important for practical applications, given the slow convergence. Another interesting aspect of this work is that we can use standard techniques from Hermitian random matrices to obtain the extreme value statistics of non-Hermitian random matrices, resembling the large-N expansion used in the context of the double scaling limit of Hermitian matrix models in string theory.
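The convergence discussed above is easy to probe numerically, since the eigenvalues of the Gaussian normal-matrix ensemble share their distribution with those of the complex Ginibre ensemble (a sketch; matrix size, trial count, and normalization are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)

# With the 1/sqrt(N) normalization the eigenvalues fill the unit disk
# (circular law); the largest modulus concentrates near 1 and its
# rescaled fluctuations approach a Gumbel law only slowly in N.
N, trials = 100, 200
max_mod = []
for _ in range(trials):
    G = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)
    ev = np.linalg.eigvals(G)
    max_mod.append(np.abs(ev).max())

mean_edge = float(np.mean(max_mod))
```

The slow drift of `mean_edge` toward 1 as N grows is exactly the kind of finite-N correction the paper computes analytically.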
Photon Localization and Dicke Superradiance in Atomic Gases
NASA Astrophysics Data System (ADS)
Akkermans, E.; Gero, A.; Kaiser, R.
2008-09-01
Photon propagation in a gas of N atoms is studied using an effective Hamiltonian describing photon-mediated atomic dipolar interactions. The density P(Γ) of photon escape rates is determined from the spectrum of the N×N random matrix Γij=sin(xij)/xij, where xij is the dimensionless random distance between any two atoms. Varying disorder and system size, a scaling behavior is observed for the escape rates. It is explained using microscopic calculations and a stochastic model which emphasizes the role of cooperative effects in photon localization and provides an interesting relation with statistical properties of “small world networks.”
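The N×N Euclidean random matrix above is straightforward to build and diagonalize (a sketch; the atom number, box size, and use of dimensionless distances are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)

# N atoms at random positions in a cube; dimensionless pairwise distances
# x_ij define Gamma_ij = sin(x_ij)/x_ij, with Gamma_ii = 1 on the diagonal.
N, L = 200, 20.0
pos = rng.uniform(0, L, size=(N, 3))
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
Gamma = np.where(d > 0, np.sin(d) / np.where(d == 0, 1.0, d), 1.0)

# The sinc kernel is positive definite in 3D, so the spectrum (which sets
# the photon escape rates) is nonnegative, and its trace equals N.
rates = np.linalg.eigvalsh(Gamma)
```

Histogramming `rates` over many position realizations and system sizes gives the escape-rate density P(Γ) whose scaling behavior the paper studies.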
Brittberg, Mats; Recker, David; Ilgenfritz, John; Saris, Daniel B F
2018-05-01
Matrix-based cell therapy improves surgical handling, increases patient comfort, and allows for expanded indications with better reliability within the knee joint. Five-year efficacy and safety of autologous cultured chondrocytes on porcine collagen membrane (MACI) versus microfracture for treating cartilage defects have not yet been reported from any randomized controlled clinical trial. To examine the clinical efficacy and safety results at 5 years after treatment with MACI and compare these with the efficacy and safety of microfracture treatment for symptomatic cartilage defects of the knee. Randomized controlled trial; Level of evidence, 1. This article describes the 5-year follow-up of the SUMMIT (Superiority of MACI Implant Versus Microfracture Treatment) clinical trial conducted at 14 study sites in Europe. All 144 patients who participated in SUMMIT were eligible to enroll; analyses of the 5-year data were performed with data from patients who signed informed consent and continued in the Extension study. Of the 144 patients randomized in the SUMMIT trial, 128 signed informed consent and continued observation in the Extension study: 65 MACI (90.3%) and 63 microfracture (87.5%). The improvements in Knee injury and Osteoarthritis Outcome Score (KOOS) Pain and Function domains previously described were maintained over the 5-year follow-up. Five years after treatment, the improvement of MACI over microfracture in the co-primary endpoint of KOOS pain and function was maintained and was clinically and statistically significant (P = .022). Improvements in activities of daily living remained statistically significantly better (P = .007) in MACI patients, with quality of life and other symptoms remaining numerically higher in MACI patients but losing statistical significance relative to the results of the SUMMIT 2-year analysis. Magnetic resonance imaging (MRI) evaluation of structural repair was performed in 120 patients at year 5.
As in the 2-year SUMMIT (MACI00206) results, the MRI evaluation showed improvement in defect filling for both treatments; however, no statistically significant differences were noted between treatment groups. Symptomatic cartilage knee defects 3 cm² or larger treated with MACI were clinically and statistically significantly improved at 5 years compared with microfracture treatment. No remarkable adverse events or safety issues were noted in this heterogeneous patient population.
Felipe, Maria Emília M C; Andrade, Patrícia F; Grisi, Marcio F M; Souza, Sérgio L S; Taba, Mário; Palioto, Daniela B; Novaes, Arthur B
2007-07-01
The aim of this randomized, controlled, clinical investigation was to compare two surgical techniques for root coverage with the acellular dermal matrix graft to evaluate which technique provided better root coverage, a better esthetic result, and less postoperative discomfort. Fifteen patients with bilateral Miller Class I or II gingival recessions were selected. Fifteen pairs of recessions were treated and assigned randomly to the test group, and the contralateral recessions were assigned to the control group. The control group was treated with a broader flap and vertical releasing incisions; the test group was treated with the proposed surgical technique, without vertical releasing incisions. The clinical parameters evaluated were probing depth, relative clinical attachment level, gingival recession (GR), width of keratinized tissue, thickness of keratinized tissue, esthetic result, and pain evaluation. The measurements were taken before the surgeries and after 6 months. At baseline, all parameters were similar for both groups. At 6 months, a statistically significant greater reduction in GR favored the control group. The percentage of root coverage was 68.98% and 84.81% for the test and control groups, respectively. The esthetic result was equivalent between the groups, and all patients tolerated both procedures well. Both techniques provided significant root coverage, good esthetic results, and similar levels of postoperative discomfort. However, the control technique had statistically significantly better results for root coverage of localized gingival recessions.
Fractal planetary rings: Energy inequalities and random field model
NASA Astrophysics Data System (ADS)
Malyarenko, Anatoliy; Ostoja-Starzewski, Martin
2017-12-01
This study is motivated by a recent observation, based on photographs from the Cassini mission, that Saturn’s rings have a fractal structure in the radial direction. Accordingly, two questions are considered: (1) What Newtonian mechanics argument in support of such a fractal structure of planetary rings is possible? (2) What kinematics model of such fractal rings can be formulated? Both challenges are based on taking the planetary rings’ spatial structure as being statistically stationary in time and statistically isotropic in space, but statistically nonstationary in space. An answer to the first challenge is given through an energy analysis of circular rings having a self-generated, noninteger-dimensional mass distribution [V. E. Tarasov, Int. J. Mod Phys. B 19, 4103 (2005)]. The second issue is approached by taking the random field of the angular velocity vector of a rotating particle of the ring as a random section of a special vector bundle. Using the theory of group representations, we prove that such a field is completely determined by a sequence of continuous positive-definite matrix-valued functions defined on the Cartesian square F² of the radial cross-section F of the rings, where F is a fat fractal.
Morozov, Andrey K; Colosi, John A
2017-09-01
Underwater sound scattering by a rough sea surface, ice, or a rough elastic bottom is studied. The study includes both the scattering from the rough boundary and the elastic effects in the solid layer. The coupled mode matrix is approximated by a linear function of a single random perturbation parameter, such as the ice thickness or a perturbation of the surface position. A full two-way coupled mode solution is used to derive the stochastic differential equation for the second-order statistics in a Markov approximation.
NASA Astrophysics Data System (ADS)
Abid, Najmul; Mirkhalaf, Mohammad; Barthelat, Francois
2018-03-01
Natural materials such as nacre, collagen, and spider silk are composed of staggered stiff and strong inclusions in a softer matrix. This type of hybrid microstructure results in remarkable combinations of stiffness, strength, and toughness and it now inspires novel classes of high-performance composites. However, the analytical and numerical approaches used to predict and optimize the mechanics of staggered composites often neglect statistical variations and inhomogeneities, which may have significant impacts on modulus, strength, and toughness. Here we present an analysis of localization using small representative volume elements (RVEs) and large scale statistical volume elements (SVEs) based on the discrete element method (DEM). DEM is an efficient numerical method which enabled the evaluation of more than 10,000 microstructures in this study, each including about 5,000 inclusions. The models explore the combined effects of statistics, inclusion arrangement, and interface properties. We find that statistical variations have a negative effect on all properties, in particular on the ductility and energy absorption because randomness precipitates the localization of deformations. However, the results also show that the negative effects of random microstructures can be offset by interfaces with large strain at failure accompanied by strain hardening. More specifically, this quantitative study reveals an optimal range of interface properties where the interfaces are the most effective at delaying localization. These findings show how carefully designed interfaces in bioinspired staggered composites can offset the negative effects of microstructural randomness, which is inherent to most current fabrication methods.
Objective assessment of image quality. IV. Application to adaptive optics
Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, Christopher
2008-01-01
The methodology of objective assessment, which defines image quality in terms of the performance of specific observers on specific tasks of interest, is extended to temporal sequences of images with random point spread functions and applied to adaptive imaging in astronomy. The tasks considered include both detection and estimation, and the observers are the optimal linear discriminant (Hotelling observer) and the optimal linear estimator (Wiener). A general theory of first- and second-order spatiotemporal statistics in adaptive optics is developed. It is shown that the covariance matrix can be rigorously decomposed into three terms representing the effect of measurement noise, random point spread function, and random nature of the astronomical scene. Figures of merit are developed, and computational methods are discussed.
Gasbarra, Dario; Pajevic, Sinisa; Basser, Peter J
2017-01-01
Tensor-valued and matrix-valued measurements of different physical properties are increasingly available in material sciences and medical imaging applications. The eigenvalues and eigenvectors of such multivariate data provide novel and unique information, but at the cost of requiring a more complex statistical analysis. In this work we derive the distributions of eigenvalues and eigenvectors in the special but important case of m×m symmetric random matrices, D , observed with isotropic matrix-variate Gaussian noise. The properties of these distributions depend strongly on the symmetries of the mean tensor/matrix, D̄ . When D̄ has repeated eigenvalues, the eigenvalues of D are not asymptotically Gaussian, and repulsion is observed between the eigenvalues corresponding to the same D̄ eigenspaces. We apply these results to diffusion tensor imaging (DTI), with m = 3, addressing an important problem of detecting the symmetries of the diffusion tensor, and seeking an experimental design that could potentially yield an isotropic Gaussian distribution. In the 3-dimensional case, when the mean tensor is spherically symmetric and the noise is Gaussian and isotropic, the asymptotic distribution of the first three eigenvalue central moment statistics is simple and can be used to test for isotropy. In order to apply such tests, we use quadrature rules of order t ≥ 4 with constant weights on the unit sphere to design a DTI-experiment with the property that isotropy of the underlying true tensor implies isotropy of the Fisher information. We also explain the potential implications of the methods using simulated DTI data with a Rician noise model.
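The eigenvalue repulsion described above, for a mean tensor with repeated eigenvalues under isotropic Gaussian noise, can be observed directly in simulation (a sketch with m = 3 and an arbitrary noise level; an illustration, not the paper's exact noise model):

```python
import numpy as np

rng = np.random.default_rng(7)

def noisy_eigvals(D_mean, sigma, rng):
    """Sorted eigenvalues of D = D_mean + symmetric Gaussian noise."""
    m = D_mean.shape[0]
    G = rng.normal(0.0, sigma, size=(m, m))
    return np.sort(np.linalg.eigvalsh(D_mean + (G + G.T) / np.sqrt(2)))

# Spherically symmetric mean tensor: a triply repeated eigenvalue.
D_iso = np.eye(3)
gaps = np.array([np.diff(noisy_eigvals(D_iso, 0.05, rng))
                 for _ in range(5000)])
min_gap_mean = float(gaps.min(axis=1).mean())
```

The sampled gaps stay strictly positive and their typical size is set by the noise scale: the noise lifts the degeneracy and the eigenvalues repel rather than fluctuating as independent Gaussians around the common mean.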
How random is a random vector?
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2015-12-01
Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation", the square root of the generalized variance, is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index", a derivative of the Wilks standard deviation, is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams": tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
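The Wilks standard deviation defined above is one line of numpy; for a vector with independent components it reduces to the product of the component standard deviations (a sketch; the sample size and component scales are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(8)

def wilks_std(samples):
    """Wilks standard deviation: the square root of the generalized
    variance, i.e. of det(covariance matrix) of the sample rows."""
    cov = np.cov(samples, rowvar=False)
    return float(np.sqrt(np.linalg.det(cov)))

# Independent components with stds 1 and 2: det(cov) is close to 1 * 4,
# so the Wilks standard deviation should come out near 1 * 2 = 2.
n = 50_000
X = rng.normal(0.0, [1.0, 2.0], size=(n, 2))
s = wilks_std(X)
```

Correlation between components shrinks det(cov), so s decreases toward 0 as the components become fully correlated; this is what makes it a natural basis for an overall-correlation index.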
The fast algorithm of spark in compressive sensing
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
Compressed Sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the reconstruction condition for a signal is an important theoretical problem, and the spark is a useful index for studying it; however, computing the spark is NP-hard. In this paper, we study the problem of computing the spark. For some special matrices, for example Gaussian random matrices and 0-1 random matrices, we obtain several conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For a general matrix, two methods are given to compute the spark: direct search and dual-tree search. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we measured the computation time of these two methods. Numerical results showed that the dual-tree search method was more efficient than direct search, especially for matrices with nearly as many rows as columns.
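The direct-search method can be written down in a few lines: scan subset sizes k = 1, 2, ... and return the first k at which some k columns are linearly dependent (a sketch; exponential-time in general, and the rank tolerance is an arbitrary choice):

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A
    (n + 1 if all n columns are linearly independent)."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1

# Gaussian random matrix with fewer rows than columns: any m columns are
# independent with probability 1, so the spark is m + 1, as the paper proves.
rng = np.random.default_rng(9)
A = rng.normal(size=(4, 6))
s = spark(A)   # expected: m + 1 = 5
```

Any matrix with a repeated (or zero) column has a small spark, which the same search finds immediately at k = 2 (or k = 1).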
Xu, Zhi-ling; Wang, Qiang; Liu, Tian-lin; Guo, Li-ying; Jing, Feng-qiu; Liu, Hui
2006-04-01
To investigate the changes of bone sialoprotein (BSP) in developing dental tissues of rats exposed to fluoride. Twenty rats were randomly divided into two groups, one was with distilled water (control group), the other was with distilled water treated by fluoride (experimental group). When the fluorosis model was established, the changes of the expression of BSP were investigated and compared between the two groups. HE staining was used to observe the morphology of the cell, and immunohistochemisty assay was used to determine the expression of BSP in rat incisor. Student's t test was used for statistical analysis. The ameloblasts had normal morphology and arranged orderly. Immunoreactivitis of BSP was present in matured ameloblasts, dentinoblasts, cementoblasts, and the matrix in the control group. But in the experimental group the ameloblasts arranged in multiple layers, the enamel matrix was confused and the expression of BSP was significantly lower than that of the control group. Statistical analysis showed significant differences between the two groups (P<0.01). Fluoride can inhibit the expression of BSP in developing dental tissues of rats, and then inhibit differentiation of the tooth epithelial cells and secretion of matrix. This is a probable intracellular mechanism of dental fluorosis.
Fluctuation-dissipation theory of input-output interindustrial relations
NASA Astrophysics Data System (ADS)
Iyetomi, Hiroshi; Nakayama, Yasuhiro; Aoyama, Hideaki; Fujiwara, Yoshi; Ikeda, Yuichi; Souma, Wataru
2011-01-01
In this study, the fluctuation-dissipation theory is invoked to shed light on input-output interindustrial relations at a macroscopic level by its application to indices of industrial production (IIP) data for Japan. Statistical noise arising from finiteness of the time series data is carefully removed by making use of the random matrix theory in an eigenvalue analysis of the correlation matrix; as a result, two dominant eigenmodes are detected. Our previous study successfully used these two modes to demonstrate the existence of intrinsic business cycles. Here a correlation matrix constructed from the two modes describes genuine interindustrial correlations in a statistically meaningful way. Furthermore, it enables us to quantitatively discuss the relationship between shipments of final demand goods and production of intermediate goods in a linear response framework. We also investigate distinctive external stimuli for the Japanese economy exerted by the current global economic crisis. These stimuli are derived from residuals of moving-average fluctuations of the IIP remaining after subtracting the long-period components arising from inherent business cycles. The observation reveals that the fluctuation-dissipation theory is applicable to an economic system that is supposed to be far from physical equilibrium.
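The step of building a correlation matrix from a few dominant eigenmodes can be sketched as generic eigenmode filtering; the paper's specific IIP preprocessing and noise-removal thresholds are not reproduced, and k = 2 is used as for the two dominant modes it reports.

```python
import numpy as np

def dominant_mode_correlation(C, k=2):
    """Reconstruct a 'genuine' correlation matrix from the k dominant
    eigenmodes of C, discarding the statistical-noise bulk identified by
    random matrix theory (a minimal sketch of the mode-filtering idea)."""
    lam, V = np.linalg.eigh(C)            # eigenvalues in ascending order
    top = slice(-k, None)
    # sum of the k leading rank-one eigenmode contributions
    return (V[:, top] * lam[top]) @ V[:, top].T

# Illustration on a sample correlation matrix of 20 synthetic series
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 20))
C = np.corrcoef(X, rowvar=False)
C2 = dominant_mode_correlation(C, k=2)
print(np.linalg.matrix_rank(C2))  # 2
```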
A Robust Statistics Approach to Minimum Variance Portfolio Optimization
NASA Astrophysics Data System (ADS)
Yang, Liusha; Couillet, Romain; McKay, Matthew R.
2015-12-01
We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
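The hybrid Tyler-plus-shrinkage estimator of the paper is more involved; the sketch below shows only the Ledoit-Wolf shrinkage ingredient, under the assumption of shrinkage toward a scaled identity, with the intensity estimated from the data rather than optimized online as in the paper.

```python
import numpy as np

def ledoit_wolf_shrinkage(X):
    """Ledoit-Wolf shrinkage of the sample covariance toward a scaled
    identity (a minimal sketch of the 2004 estimator; the robust Tyler
    step of the paper is omitted). Rows of X are observations."""
    n, p = X.shape
    X = X - X.mean(axis=0)
    S = X.T @ X / n
    mu = np.trace(S) / p                            # target scale
    d2 = np.linalg.norm(S - mu * np.eye(p)) ** 2 / p
    # average squared distance of the rank-one terms x_k x_k^T from S
    b2_bar = sum(np.linalg.norm(np.outer(x, x) - S) ** 2 for x in X) / (n ** 2 * p)
    b2 = min(b2_bar, d2)
    rho = b2 / d2                                   # shrinkage intensity in [0, 1]
    return (1 - rho) * S + rho * mu * np.eye(p), rho

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 10))                   # 60 observations of 10 assets
C_lw, rho = ledoit_wolf_shrinkage(X)
print(rho)  # shrinkage intensity between 0 and 1
```

The shrunk matrix stays positive definite even when the number of observations is comparable to the number of assets, which is the regime the abstract targets.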
Influences of system uncertainties on the numerical transfer path analysis of engine systems
NASA Astrophysics Data System (ADS)
Acri, A.; Nijman, E.; Acri, A.; Offner, G.
2017-10-01
Practical mechanical systems operate with some degree of uncertainty. In numerical models, uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs, or from rapidly changing forcing that is best described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In this paper, Wishart random matrix theory is applied to a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool widely used during the design of new engines. In this paper the influence of model parameter variability on the results obtained from the multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and assessment of the different engine vibration sources. The effects of different levels of uncertainty are illustrated using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and the statistical distribution of results. The derived statistical information can be used to advance the knowledge gained from the multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.
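The Wishart randomization of a nominal system matrix can be sketched as follows. The 3-DOF stiffness matrix and the dispersion parameter are illustrative assumptions; the paper's multi-body matrices are far larger and its parametrization may differ.

```python
import numpy as np
from scipy.stats import wishart

def randomize_spd(K0, nu, rng=None):
    """Draw a random symmetric positive definite matrix with mean K0 from
    a Wishart ensemble, sketching the nonparametric randomization in the
    abstract. Larger nu gives smaller dispersion around the nominal K0."""
    p = K0.shape[0]
    assert nu >= p, "degrees of freedom must be at least the matrix size"
    # E[Wishart(df=nu, scale=K0/nu)] = nu * (K0 / nu) = K0
    return wishart(df=nu, scale=K0 / nu).rvs(random_state=rng)

# Hypothetical 3-DOF nominal stiffness matrix (illustrative values only)
K0 = np.array([[ 2.0, -1.0,  0.0],
               [-1.0,  2.0, -1.0],
               [ 0.0, -1.0,  2.0]])
rng = np.random.default_rng(0)
samples = [randomize_spd(K0, nu=100, rng=rng) for _ in range(500)]
print(np.mean(samples, axis=0))  # close to K0 for large nu
```

Each sample remains symmetric positive definite, so it can be used directly as a randomized stiffness (or mass) matrix in a simulation loop.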
A multi-assets artificial stock market with zero-intelligence traders
NASA Astrophysics Data System (ADS)
Ponta, L.; Raberto, M.; Cincotti, S.
2011-01-01
In this paper, a multi-assets artificial financial market populated by zero-intelligence traders with finite financial resources is presented. The market is characterized by different types of stocks representing firms operating in different sectors of the economy. Zero-intelligence traders follow a random allocation strategy which is constrained by finite resources, past market volatility and the allocation universe. Within this framework, stock price processes exhibit volatility clustering, fat-tailed distribution of returns and reversion to the mean. Moreover, the cross-correlations between returns of different stocks are studied using methods of random matrix theory. The probability distribution of eigenvalues of the cross-correlation matrix shows the presence of outliers, similar to those recently observed on real data for business sectors. It is worth noting that business sectors have been recovered in our framework, without dividends, as the only consequence of random restrictions on the allocation universe of zero-intelligence traders. Furthermore, in the presence of dividend-paying stocks and in the case of cash inflow added to the market, the artificial stock market exhibits the same structural results as the simulation without dividends. These results suggest a significant structural influence on the statistical properties of multi-assets stock markets.
Local Geostatistical Models and Big Data in Hydrological and Ecological Applications
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios
2015-04-01
The advent of the big data era creates new opportunities for environmental and ecological modelling but also presents significant challenges. The availability of remote sensing images and low-cost wireless sensor networks means that spatiotemporal environmental data now cover larger spatial domains at higher spatial and temporal resolution for longer time windows. Handling such voluminous data presents several technical and scientific challenges. In particular, the geostatistical methods used to process spatiotemporal data need to overcome the dimensionality curse associated with the need to store and invert large covariance matrices. There are various mathematical approaches for addressing the dimensionality problem, including change of basis, dimensionality reduction, hierarchical schemes, and local approximations. We present a Stochastic Local Interaction (SLI) model that can be used to model local correlations in spatial data. SLI is a random field model suitable for data on discrete supports (i.e., regular lattices or irregular sampling grids). The degree of localization is determined by means of kernel functions and appropriate bandwidths, and the strength of the correlations is determined by means of coefficients. In the "plain vanilla" version the parameter set involves scale and rigidity coefficients as well as a characteristic length; the latter, together with the rigidity coefficient, determines the correlation length of the random field. The SLI model is based on statistical field theory and extends previous research on Spartan spatial random fields [2,3] from continuum spaces to explicitly discrete supports. The SLI kernel functions employ adaptive bandwidths learned from the sampling spatial distribution [1]. The SLI precision matrix is expressed explicitly in terms of the model parameters and the kernel function. Hence, covariance matrix inversion is not necessary for parameter inference based on leave-one-out cross validation.
This property helps to overcome a significant computational bottleneck of geostatistical models due to the poor scaling of matrix inversion [4,5]. We present applications to real and simulated data sets, including the Walker Lake data, and we investigate the SLI performance using various statistical cross validation measures. References: [1] T. Hofmann, B. Schölkopf, A. J. Smola, Annals of Statistics, 36, 1171-1220 (2008). [2] D. T. Hristopulos, SIAM Journal on Scientific Computing, 24(6): 2125-2162 (2003). [3] D. T. Hristopulos and S. N. Elogne, IEEE Transactions on Signal Processing, 57(9): 3475-3487 (2009). [4] G. Jona Lasinio, G. Mastrantonio, and A. Pollice, Statistical Methods and Applications, 22(1): 97-112 (2013). [5] Y. Sun, B. Li, and M. G. Genton, Geostatistics for large datasets. In: Advances and Challenges in Space-time Modelling of Natural Events, Lecture Notes in Statistics, pp. 55-77, Springer, Berlin-Heidelberg (2012).
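The claim that covariance inversion is unnecessary for leave-one-out inference can be illustrated with the standard Gaussian Markov random field identity. The SLI precision matrix itself is model-specific and not reproduced here; a simple tridiagonal precision matrix stands in for it.

```python
import numpy as np

def loo_cv_errors(Q, x):
    """Leave-one-out prediction errors for a zero-mean Gaussian field with
    precision matrix Q, without inverting Q: the conditional mean of x_i
    given all other sites is x_i - (Q x)_i / Q_ii, so the LOO error is
    (Q x)_i / Q_ii. One matrix-vector product suffices."""
    return (Q @ x) / np.diag(Q)

# Illustration on a simple tridiagonal (locally interacting) precision matrix
n = 200
Q = 2.2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
rng = np.random.default_rng(0)
# sample from N(0, Q^{-1}) via a Cholesky factor of Q (small n, demo only)
L = np.linalg.cholesky(Q)
x = np.linalg.solve(L.T, rng.standard_normal(n))
errors = loo_cv_errors(Q, x)
print(np.mean(errors ** 2))  # average squared LOO error
```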
Random matrix theory for transition strengths: Applications and open questions
NASA Astrophysics Data System (ADS)
Kota, V. K. B.
2017-12-01
Embedded random matrix ensembles are generic models for describing statistical properties of finite isolated interacting quantum many-particle systems. Induced by a transition operator, a finite quantum system makes transitions from its states to the states of the same system or to those of another system. Examples are electromagnetic transitions (where the initial and final systems are the same), nuclear beta and double beta decay (where the initial and final systems are different), and so on. Using embedded ensembles (EE), there are efforts to derive a good statistical theory for transition strengths. With m fermions (or bosons) in N mean-field single particle levels and interacting via two-body forces, we have, with GOE embedding, the so-called EGOE(1+2). The transition strength density (transition strength multiplied by the density of states at the initial and final energies) is then a convolution of the density generated by the mean-field one-body part with a bivariate spreading function due to the two-body interaction. Using the embedding U(N) algebra, it is established, for a variety of transition operators, that the spreading function, for sufficiently strong interactions, is close to a bivariate Gaussian. Also, as the interaction strength increases, the spreading function exhibits a transition from a bivariate Breit-Wigner to a bivariate Gaussian form. In appropriate limits, this EE theory reduces to the polynomial theory of Draayer, French and Wong on one hand and to the theory due to Flambaum and Izrailev for one-body transition operators on the other. Using spin-cutoff factors for projecting angular momentum, the theory is applied to nuclear matrix elements for neutrinoless double beta decay (NDBD). In this paper we describe: (i) various developments in the EE theory for transition strengths; (ii) results for nuclear matrix elements for 130Te and 136Xe NDBD; (iii) important open questions in the current form of the EE theory.
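The convolution structure of the transition strength density can be sketched numerically on a toy grid. The uncorrelated bivariate Gaussian below is an assumption for illustration; the EE spreading function generally carries a correlation coefficient between initial and final energies.

```python
import numpy as np
from scipy.signal import fftconvolve

def spread_strength(rho0, sigma, dE):
    """Convolve a mean-field transition strength density rho0(E_i, E_f),
    sampled on a grid with spacing dE, with a bivariate Gaussian spreading
    function of width sigma (zero correlation here for simplicity)."""
    half = int(4 * sigma / dE)
    e = np.arange(-half, half + 1) * dE
    g = np.exp(-0.5 * (e[:, None] ** 2 + e[None, :] ** 2) / sigma ** 2)
    g /= g.sum()                              # normalised spreading kernel
    return fftconvolve(rho0, g, mode="same")

# toy mean-field density: a single sharp transition strength at the centre
rho0 = np.zeros((101, 101))
rho0[50, 50] = 1.0
rho = spread_strength(rho0, sigma=0.5, dE=0.1)
print(abs(rho.sum() - 1.0) < 1e-6)  # total strength is conserved -> True
```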
NASA Astrophysics Data System (ADS)
Duan, Xueyang
The objective of this dissertation is to develop forward scattering models for active microwave remote sensing of natural features represented by layered media with rough interfaces. In particular, soil profiles are considered, for which a model of electromagnetic scattering from multilayer rough surfaces with or without buried random media is constructed. Starting from a single rough surface, radar scattering is modeled using the stabilized extended boundary condition method (SEBCM). This method solves the long-standing instability issue of the classical EBCM, and gives three-dimensional full wave solutions over large ranges of surface roughnesses with higher computational efficiency than pure numerical solutions, e.g., method of moments (MoM). Based on this single surface solution, multilayer rough surface scattering is modeled using the scattering matrix approach and the model is used for a comprehensive sensitivity analysis of the total ground scattering as a function of layer separation, subsurface statistics, and sublayer dielectric properties. The buried inhomogeneities such as rocks and vegetation roots are considered for the first time in the forward scattering model. Radar scattering from buried random media is modeled by the aggregate transition matrix using either the recursive transition matrix approach for spherical or short-length cylindrical scatterers, or the generalized iterative extended boundary condition method we developed for long cylinders or root-like cylindrical clusters. These approaches take the field interactions among scatterers into account with high computational efficiency. The aggregate transition matrix is transformed to a scattering matrix for the full solution to the layered-medium problem. This step is based on the near-to-far field transformation of the numerical plane wave expansion of the spherical harmonics and the multipole expansion of plane waves. 
This transformation consolidates volume scattering from the buried random medium with the scattering from layered structure in general. Combined with scattering from multilayer rough surfaces, scattering contributions from subsurfaces and vegetation roots can be then simulated. Solutions of both the rough surface scattering and random media scattering are validated numerically, experimentally, or both. The experimental validations have been carried out using a laboratory-based transmit-receive system for scattering from random media and a new bistatic tower-mounted radar system for field-based surface scattering measurements.
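The scattering matrix approach to stacking layer responses can be sketched with the Redheffer star product. Scalar per-port blocks are used for illustration; the dissertation's blocks are fully polarimetric matrices, for which the same formulas hold with matrix inverses in place of the scalar division.

```python
def cascade(S1, S2):
    """Combine two scattering matrices S = (S11, S12, S21, S22) of adjacent
    layers into one via the Redheffer star product, the standard way to
    stack layer responses in a multilayer scattering-matrix formulation."""
    S11a, S12a, S21a, S22a = S1
    S11b, S12b, S21b, S22b = S2
    d = 1.0 / (1.0 - S22a * S11b)   # resums the multiple reflections
    return (S11a + S12a * S11b * d * S21a,
            S12a * d * S12b,
            S21b * d * S21a,
            S22b + S21b * S22a * d * S12b)

S_air = (0.0, 1.0, 1.0, 0.0)    # transparent layer: identity element
S_soil = (0.3, 0.9, 0.9, 0.2)   # hypothetical interface response
print(cascade(S_air, S_soil))   # unchanged: (0.3, 0.9, 0.9, 0.2)
```

Cascading is associative, so an arbitrary stack of interfaces and layers can be folded together pairwise, which is what makes the approach attractive for multilayer rough-surface problems.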
Accounting for crustal magnetization in models of the core magnetic field
NASA Technical Reports Server (NTRS)
Jackson, Andrew
1990-01-01
The problem of determining the magnetic field originating in the earth's core in the presence of remanent and induced magnetization is considered. The effect of remanent magnetization in the crust on satellite measurements of the core magnetic field is investigated. The crust is modelled as a zero-mean stationary Gaussian random process using an idea proposed by Parker (1988). It is shown that the matrix of second-order statistics is proportional to the Gram matrix, which depends only on the inner products of the appropriate Green's functions, and that at a typical satellite altitude of 400 km the data are correlated out to an angular separation of approximately 15 deg. Accurate and efficient means of calculating the matrix elements are given. It is shown that the variance of measurements of the radial component of a magnetic field due to the crust is expected to be approximately twice that in the horizontal components.
D'Agostino, M F; Sanz, J; Martínez-Castro, I; Giuffrè, A M; Sicari, V; Soria, A C
2014-07-01
Statistical analysis has been used for the first time to evaluate the dispersion of quantitative data in the solid-phase microextraction (SPME) followed by gas chromatography-mass spectrometry (GC-MS) analysis of blackberry (Rubus ulmifolius Schott) volatiles, with the aim of improving their precision. Experimental and randomly simulated data were compared using different statistical parameters (correlation coefficients, Principal Component Analysis loadings and eigenvalues). Non-random factors were shown to contribute significantly to total dispersion; groups of volatile compounds could be associated with these factors. A significant improvement in precision was achieved when considering percent concentration ratios, rather than percent values, among those blackberry volatiles with a similar dispersion behavior. As a novelty over previous references, and to complement this main objective, the presence of non-random dispersion trends in data from simple blackberry model systems was evidenced. Although the influence of the type of matrix on data precision was proved, a better understanding of the dispersion patterns in real samples could not be obtained from the model systems. The approach used here was validated for the first time through the multicomponent characterization of Italian blackberries from different harvest years.
NASA Astrophysics Data System (ADS)
Batchelor, Murray T.; Wille, Luc T.
The Table of Contents for the book is as follows: * Preface * Modelling the Immune System - An Example of the Simulation of Complex Biological Systems * Brief Overview of Quantum Computation * Quantal Information in Statistical Physics * Modeling Economic Randomness: Statistical Mechanics of Market Phenomena * Essentially Singular Solutions of Feigenbaum- Type Functional Equations * Spatiotemporal Chaotic Dynamics in Coupled Map Lattices * Approach to Equilibrium of Chaotic Systems * From Level to Level in Brain and Behavior * Linear and Entropic Transformations of the Hydrophobic Free Energy Sequence Help Characterize a Novel Brain Polyprotein: CART's Protein * Dynamical Systems Response to Pulsed High-Frequency Fields * Bose-Einstein Condensates in the Light of Nonlinear Physics * Markov Superposition Expansion for the Entropy and Correlation Functions in Two and Three Dimensions * Calculation of Wave Center Deflection and Multifractal Analysis of Directed Waves Through the Study of su(1,1)Ferromagnets * Spectral Properties and Phases in Hierarchical Master Equations * Universality of the Distribution Functions of Random Matrix Theory * The Universal Chiral Partition Function for Exclusion Statistics * Continuous Space-Time Symmetries in a Lattice Field Theory * Quelques Cas Limites du Problème à N Corps Unidimensionnel * Integrable Models of Correlated Electrons * On the Riemann Surface of the Three-State Chiral Potts Model * Two Exactly Soluble Lattice Models in Three Dimensions * Competition of Ferromagnetic and Antiferromagnetic Order in the Spin-l/2 XXZ Chain at Finite Temperature * Extended Vertex Operator Algebras and Monomial Bases * Parity and Charge Conjugation Symmetries and S Matrix of the XXZ Chain * An Exactly Solvable Constrained XXZ Chain * Integrable Mixed Vertex Models Ftom the Braid-Monoid Algebra * From Yang-Baxter Equations to Dynamical Zeta Functions for Birational Tlansformations * Hexagonal Lattice Directed Site Animals * Direction in the 
Star-Triangle Relations * A Self-Avoiding Walk Through Exactly Solved Lattice Models in Statistical Mechanics
The Effect of General Statistical Fiber Misalignment on Predicted Damage Initiation in Composites
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.
2014-01-01
A micromechanical method is employed for predicting the behavior of unidirectional composites in which the fiber orientation can possess various statistical misalignment distributions. The method relies on the probability-weighted averaging of the appropriate concentration tensor, which is established by the micromechanical procedure. This approach provides access to the local field quantities throughout the constituents, from which initiation of damage in the composite can be predicted. In contrast, a typical macromechanical procedure can determine the effective composite elastic properties in the presence of statistical fiber misalignment, but cannot provide the local fields. Fully random fiber distribution is presented as a special case using the proposed micromechanical method. Results are given that illustrate the effects of various amounts of fiber misalignment in terms of the standard deviations of in-plane and out-of-plane misalignment angles, where normal distributions have been employed. Damage initiation envelopes, local fields, effective moduli, and strengths are predicted for polymer and ceramic matrix composites with given normal distributions of misalignment angles, as well as fully random fiber orientation.
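The probability-weighted averaging step can be sketched for a scalar orientation-dependent property. The cos⁴ mixing rule and all numeric values below are illustrative assumptions; the paper averages full concentration tensors, not scalars.

```python
import numpy as np

def misalignment_average(prop, sigma_deg, n=201):
    """Probability-weighted average of an orientation-dependent property
    prop(theta) over a normal distribution of the misalignment angle
    (zero mean, standard deviation sigma_deg, in degrees). Only the
    averaging step of the method is sketched here."""
    sigma = np.radians(sigma_deg)
    theta = np.linspace(-4 * sigma, 4 * sigma, n)
    w = np.exp(-0.5 * (theta / sigma) ** 2)
    w /= w.sum()                       # discrete normal weights
    return np.sum(w * prop(theta))

# Illustrative property: longitudinal stiffness via a cos^4 mixing rule
# (hypothetical moduli in GPa, chosen only for the demonstration)
E_axial = lambda th: 150.0 * np.cos(th) ** 4 + 10.0 * np.sin(th) ** 4
print(misalignment_average(E_axial, sigma_deg=5.0))  # slightly below the aligned 150 GPa
```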
Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model
NASA Astrophysics Data System (ADS)
Margarint, Vlad
2018-06-01
We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_{xy}, indexed by x, y \in Λ \subset Z^d, are independent, uniformly distributed random variables if |x-y| is less than the band width W, and zero otherwise. We strengthen previous results on the convergence of quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.
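Sampling from this ensemble is straightforward; the sketch below uses real symmetric matrices for simplicity (the complex Hermitian case is analogous), with uniform entries on [-1, 1] as an illustrative choice of the uniform distribution.

```python
import numpy as np

def band_matrix(n, W, rng=None):
    """Sample a 1d random band matrix: symmetric, with independent
    uniformly distributed entries H_xy for |x - y| < W and zeros outside
    the band, as in the ensemble described in the abstract."""
    rng = np.random.default_rng(rng)
    U = rng.uniform(-1.0, 1.0, size=(n, n))
    x, y = np.indices((n, n))
    U[np.abs(x - y) >= W] = 0.0           # enforce the band structure
    return np.triu(U) + np.triu(U, 1).T   # symmetrise

H = band_matrix(8, W=3, rng=0)
print(np.count_nonzero(H[0]))  # 3: row 0 touches only columns with |0 - y| < 3
```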
Universality for 1d Random Band Matrices: Sigma-Model Approximation
NASA Astrophysics Data System (ADS)
Shcherbina, Mariya; Shcherbina, Tatyana
2018-02-01
The paper continues the development of the rigorous supersymmetric transfer matrix approach to random band matrices started in (J Stat Phys 164:1233-1260, 2016; Commun Math Phys 351:1009-1044, 2017). We consider random Hermitian block band matrices consisting of W × W random Gaussian blocks (parametrized by j,k \in Λ = [1,n]^d \cap Z^d) with fixed entry variance J_{jk} = δ_{j,k}W^{-1} + β Δ_{j,k}W^{-2}, β > 0, in each block. Taking the limit W → ∞ with fixed n and β, we derive the sigma-model approximation of the second correlation function, similar to Efetov's. Then, considering the limit β, n → ∞, we prove that in dimension d = 1 the behaviour of the sigma-model approximation in the bulk of the spectrum, for β ≫ n, is determined by the classical Wigner-Dyson statistics.
Statistical Field Estimation and Scale Estimation for Complex Coastal Regions and Archipelagos
2009-05-01
instruments applied to mode-73. Deep-Sea Research, 23:559-582. Brown, R. G. and Hwang, P. Y. C. (1997). Introduction to Random Signals and Applied Kalman ... the covariance matrix becomes negative due to numerical issues (Brown and Hwang, 1997). Some useful techniques to counter these divergence problems ... equations (Brown and Hwang, 1997). If the number of observations is large, divergence problems can arise under certain conditions due to truncation errors
Thomas, Libby John; Emmadi, Pamela; Thyagarajan, Ramakrishnan; Namasivayam, Ambalavanan
2013-01-01
Aims: The purpose of this study was to compare the clinical efficacy of subepithelial connective tissue graft and acellular dermal matrix graft associated with coronally repositioned flap in the treatment of Miller's class I and II gingival recession, 6 months postoperatively. Settings and Design: Ten patients with bilateral Miller's class I or class II gingival recession were randomly divided into two groups using a split-mouth study design. Materials and Methods: Group I (10 sites) was treated with subepithelial connective tissue graft along with coronally repositioned flap and Group II (10 sites) was treated with acellular dermal matrix graft along with coronally repositioned flap. Clinical parameters such as recession height and width, probing pocket depth, clinical attachment level (CAL), and width of keratinized gingiva were evaluated at baseline, 90th day, and 180th day for both groups. The percentage of root coverage was calculated based on the comparison of the recession height from 0 to 180th day in both Groups I and II. Statistical Analysis Used: Intragroup parameters at different time points were measured using the Wilcoxon signed rank test, and the Mann–Whitney U test was employed to analyze the differences between test and control groups. Results: There was no statistically significant difference in recession height and width, gain in CAL, and increase in the width of keratinized gingiva between the two groups on the 180th day. Both procedures showed clinically and statistically significant root coverage (Group I 96%, Group II 89.1%) on the 180th day. Conclusions: The results indicate that coverage of denuded roots with both subepithelial connective tissue autograft and acellular dermal matrix allograft are very predictable procedures, which were stable for 6 months postoperatively. PMID:24174728
Increased entropy of signal transduction in the cancer metastasis phenotype.
Teschendorff, Andrew E; Severini, Simone
2010-07-30
The statistical study of biological networks has led to important novel biological insights, such as the presence of hubs and hierarchical modularity. There is also a growing interest in studying the statistical properties of networks in the context of cancer genomics. However, relatively little is known as to what network features differ between the cancer and normal cell physiologies, or between different cancer cell phenotypes. Based on the observation that frequent genomic alterations underlie a more aggressive cancer phenotype, we asked if such an effect could be detectable as an increase in the randomness of local gene expression patterns. Using a breast cancer gene expression data set and a model network of protein interactions we derive constrained weighted networks defined by a stochastic information flux matrix reflecting expression correlations between interacting proteins. Based on this stochastic matrix we propose and compute an entropy measure that quantifies the degree of randomness in the local pattern of information flux around single genes. By comparing the local entropies in the non-metastatic versus metastatic breast cancer networks, we here show that breast cancers that metastasize are characterised by a small yet significant increase in the degree of randomness of local expression patterns. We validate this result in three additional breast cancer expression data sets and demonstrate that local entropy better characterises the metastatic phenotype than other non-entropy based measures. We show that increases in entropy can be used to identify genes and signalling pathways implicated in breast cancer metastasis and provide examples of de-novo discoveries of gene modules with known roles in apoptosis, immune-mediated tumour suppression, cell-cycle and tumour invasion. Importantly, we also identify a novel gene module within the insulin growth factor signalling pathway, alteration of which may predispose the tumour to metastasize. 
These results demonstrate that a metastatic cancer phenotype is characterised by an increase in the randomness of the local information flux patterns. Measures of local randomness in integrated protein interaction mRNA expression networks may therefore be useful for identifying genes and signalling pathways disrupted in one phenotype relative to another. Further exploration of the statistical properties of such integrated cancer expression and protein interaction networks will be a fruitful endeavour.
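The local entropy construction can be sketched as follows: row-normalise an edge-weight matrix into a stochastic matrix and compute a normalised Shannon entropy per node. The normalisation by log of the degree is an assumption chosen so the measure lies in [0, 1]; the paper's information flux weighting is not reproduced.

```python
import numpy as np

def local_entropies(W):
    """Local signalling entropy per node from a weighted adjacency matrix W:
    row-normalise W into a stochastic matrix p_ij, then compute
    S_i = -sum_j p_ij log p_ij, normalised by log k_i so that
    0 <= S_i <= 1 for nodes of degree k_i > 1."""
    W = np.asarray(W, dtype=float)
    S = np.zeros(W.shape[0])
    for i, row in enumerate(W):
        nz = row[row > 0]
        k = len(nz)
        if k > 1:
            p = nz / nz.sum()
            S[i] = -(p * np.log(p)).sum() / np.log(k)
    return S

# toy weighted network of three genes
W = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0]])
print(local_entropies(W))  # S[0] ≈ 1 (uniform flux), S[1] = 0 (single neighbour)
```

Comparing such entropies between two phenotype-specific networks (here, non-metastatic versus metastatic) is the kind of differential analysis the abstract describes.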
An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions
ERIC Educational Resources Information Center
Radhakrishnan, R.; Choudhury, Askar
2009-01-01
Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
de Souza, Sérgio Luís Scombatti; Novaes, Arthur Belém; Grisi, Daniela Corrêa; Taba, Mário; Grisi, Márcio Fernando de Moraes; de Andrade, Patrícia Freitas
2008-07-01
Different techniques have been proposed for the treatment of gingival recession. This study compared the clinical results of gingival recession treatment using a subepithelial connective tissue graft and an acellular dermal matrix allograft. Seven patients with bilateral Miller class I or II gingival recession were selected. Twenty-six recessions were treated and randomly assigned to the test group. In each case the contralateral recession was assigned to the control group. In the control group, a connective tissue graft in combination with a coronally positioned flap was used; in the test group, an acellular dermal matrix allograft was used as a substitute for palatal donor tissue. Probing depth, clinical attachment level, gingival recession, and width of keratinized tissue were measured two weeks prior to surgery and at six and 12 months post-surgery. There were no statistically significant differences between the groups in terms of recession reduction, clinical attachment gain, probing pocket depth, and increase in the width of the keratinized tissue after six or 12 months. There was no statistically significant increase in the width of keratinized tissue between six and 12 months for either group. Within the limitations of this study, it can be suggested that the acellular dermal matrix allograft may be a substitute for palatal donor tissue in root coverage procedures and that the time required for additional gain in the amount of keratinized tissue may be greater for the acellular dermal matrix than for the connective tissue procedures.
3D shape recovery from image focus using gray level co-occurrence matrix
NASA Astrophysics Data System (ADS)
Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid
2018-04-01
Recovering a precise and accurate 3-D shape of the target object using a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture: it converts the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this document, we propose the Gray Level Co-occurrence Matrix, along with its statistical features, for computing the focus information of the image dataset. The Gray Level Co-occurrence Matrix quantifies the texture present in the image using statistical features computed from the joint probability distribution of the gray level pairs of the input image. Finally, we quantify the focus value of the input image using a Gaussian Mixture Model. Due to its low computational complexity, sharp focus measure curve, robustness to random noise sources, and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset. The efficiency of the proposed scheme is also compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we claim that this approach, in spite of its simplicity, generates accurate results.
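The co-occurrence step can be sketched in plain numpy: quantise the image, count co-occurring gray-level pairs at a displacement, and derive a focus value from a GLCM statistic. The contrast feature is one of several GLCM features; the paper's exact feature set and Gaussian Mixture Model step are omitted, and libraries such as scikit-image offer richer implementations.

```python
import numpy as np

def glcm_contrast(img, dx=1, dy=0, levels=8):
    """Focus value from a gray level co-occurrence matrix: quantise the
    image to `levels` gray levels, count co-occurring pairs at offset
    (dx, dy), normalise to a joint distribution, and return the GLCM
    contrast sum((i - j)^2 * P_ij). Sharp texture gives high contrast."""
    img = np.asarray(img, dtype=float)
    q = np.minimum((img * levels / (img.max() + 1e-12)).astype(int), levels - 1)
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]   # reference pixels
    b = q[dy:, dx:]                             # neighbours at the offset
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)
    P /= P.sum()
    i, j = np.indices(P.shape)
    return np.sum((i - j) ** 2 * P)

# A blurred image yields a lower focus value than the sharp original
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3  # crude blur
print(glcm_contrast(sharp) > glcm_contrast(blurred))  # True
```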
Automatic Trading Agent. RMT Based Portfolio Theory and Portfolio Selection
NASA Astrophysics Data System (ADS)
Snarska, M.; Krzych, J.
2006-11-01
Portfolio theory is a very powerful tool in modern investment theory. It is helpful in estimating the risk of an investor's portfolio arising from lack of information, uncertainty, and incomplete knowledge of reality, which forbid a perfect prediction of future price changes. Despite its many advantages, this tool is neither well known nor widely used among investors on the Warsaw Stock Exchange, mainly because of its high level of complexity and the immense calculations it requires. The aim of this paper is to introduce an automatic decision-making system that allows a single investor to use the complex methods of Modern Portfolio Theory (MPT). The key tool in MPT is the analysis of an empirical covariance matrix. This matrix, obtained from historical data, is biased by such a high amount of statistical uncertainty that it can be seen as random. By bringing into practice the ideas of Random Matrix Theory (RMT), the noise is removed or significantly reduced, so the future risk and return are better estimated and controlled. These concepts are applied to the Warsaw Stock Exchange Simulator (http://gra.onet.pl). The simulation produced an 18% gain, compared with a 10% loss of the Warsaw Stock Exchange main index WIG over the same period.
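The RMT noise-removal step mentioned above is often implemented by clipping eigenvalues of the empirical correlation matrix that fall below the Marchenko-Pastur upper edge, which bounds the spectrum of a purely random correlation matrix. A minimal sketch, assuming T > N observations and the simplest replace-by-mean rule (the paper's exact filtering recipe may differ):

```python
import numpy as np

def rmt_clean_correlation(returns):
    """Noise-filter an empirical correlation matrix: eigenvalues below the
    Marchenko-Pastur upper edge (the pure-noise band for q = N/T) are
    replaced by their mean, then the unit diagonal is restored."""
    T, N = returns.shape                      # assumes T > N observations
    X = (returns - returns.mean(axis=0)) / returns.std(axis=0)
    C = X.T @ X / T                           # empirical correlation matrix
    lam_max = (1.0 + np.sqrt(N / T)) ** 2     # Marchenko-Pastur upper edge
    w, V = np.linalg.eigh(C)
    noise = w < lam_max
    w[noise] = w[noise].mean()                # flatten the noise band
    C_clean = V @ np.diag(w) @ V.T
    d = np.sqrt(np.diag(C_clean))
    return C_clean / np.outer(d, d)           # rescale to unit diagonal
```

The cleaned matrix keeps the informative eigenvectors above the noise edge while discarding spurious correlation structure, which is what stabilizes the risk estimates fed into the portfolio optimizer.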
On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models
NASA Astrophysics Data System (ADS)
Khorunzhiy, O.
2008-08-01
Regarding the adjacency matrices of n-vertex graphs and the related graph Laplacians, we introduce two families of discrete matrix models, both constructed with the help of the Erdős-Rényi ensemble of random graphs. The corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of the logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.
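The ensemble averages these matrix sums encode can be probed numerically: for G(n, p), the quantity E[tr A^k] is the average number of closed walks of length k over the random graph. A Monte Carlo sketch (illustrative, with function names of our own choosing, not the authors' construction):

```python
import numpy as np

def er_adjacency(n, p, rng):
    """Symmetric 0/1 adjacency matrix of an Erdős-Rényi graph G(n, p)."""
    U = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return U + U.T

def mean_closed_walks(n, p, k, samples=200, seed=0):
    """Monte Carlo estimate of E[tr A^k], the ensemble-averaged number of
    closed walks of length k on G(n, p)."""
    rng = np.random.default_rng(seed)
    traces = [np.trace(np.linalg.matrix_power(er_adjacency(n, p, rng), k))
              for _ in range(samples)]
    return float(np.mean(traces))
```

For k = 2, E[tr A^2] equals twice the expected number of edges, n(n-1)p, which gives a quick sanity check on the estimator.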
NASA Astrophysics Data System (ADS)
Wang, Rong; Wang, Li; Yang, Yong; Li, Jiajia; Wu, Ying; Lin, Pan
2016-11-01
Attention deficit hyperactivity disorder (ADHD) is the most common childhood neuropsychiatric disorder, affecting approximately 6-7% of children worldwide. Here, we investigate the statistical properties of undirected and directed brain functional networks in ADHD patients based on random matrix theory (RMT), in which the undirected functional connectivity is constructed based on the correlation coefficient and the directed functional connectivity is measured based on the cross-correlation coefficient and mutual information. We first analyze the functional connectivity and the eigenvalues of the brain functional network. We find that ADHD patients have increased undirected functional connectivity, reflecting a higher degree of linear dependence between regions, and increased directed functional connectivity, indicating stronger causality and more transmission of information among brain regions. More importantly, we explore the randomness of the undirected and directed functional networks using RMT. We find that for ADHD patients the undirected functional network is more orderly than that for normal subjects, which indicates an abnormal increase in undirected functional connectivity. In addition, we find that the directed functional networks are more random, which reveals greater disorder in causality and more chaotic information flow among brain regions in ADHD patients. Our results not only further confirm the efficacy of RMT in characterizing the intrinsic properties of brain functional networks but also provide insights into the possibilities RMT offers for improving clinical diagnoses and treatment evaluations for ADHD patients.
NASA Astrophysics Data System (ADS)
Gharekhan, Anita H.; Biswal, Nrusingh C.; Gupta, Sharad; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.
2008-02-01
The statistical and characteristic features of the polarized fluorescence spectra from cancerous, normal, and benign human breast tissues are studied through wavelet transform and singular value decomposition. The discrete wavelets enabled one to isolate high- and low-frequency spectral fluctuations, which revealed substantial randomization in the cancerous tissues that is not present in the normal cases. In particular, the fluctuations fitted well with a Gaussian distribution for the cancerous tissues in the perpendicular component, whereas one finds non-Gaussian behavior for the spectral variations of normal and benign tissues. The study of the difference of intensities in the parallel and perpendicular channels, which is free from the diffusive component, revealed weak fluorescence activity in the 630 nm domain for the cancerous tissues, which may be ascribable to porphyrin emission. The role of both scatterers and fluorophores in the observed minor intensity peak for the cancer case is experimentally confirmed through tissue-phantom experiments. The continuous Morlet wavelet also highlighted this domain for the cancerous tissue fluorescence spectra. Correlation in the spectral fluctuations is further studied in different tissue types through singular value decomposition. Apart from identifying different domains of spectral activity for diseased and non-diseased tissues, we found random matrix support for the spectral fluctuations: the small eigenvalues of the perpendicular polarized fluorescence spectra of cancerous tissues fitted remarkably well with the random matrix prediction for Gaussian random variables, confirming our observations about spectral fluctuations in the wavelet domain.
Chaos and random matrices in supersymmetric SYK
NASA Astrophysics Data System (ADS)
Hunter-Jones, Nicholas; Liu, Junyu
2018-05-01
We use random matrix theory to explore late-time chaos in supersymmetric quantum mechanical systems. Motivated by the recent study of supersymmetric SYK models and their random matrix classification, we consider the Wishart-Laguerre unitary ensemble and compute the spectral form factors and frame potentials to quantify chaos and randomness. Compared to the Gaussian ensembles, we observe the absence of a dip regime in the form factor and a slower approach to Haar-random dynamics. We find agreement between our random matrix analysis and predictions from the supersymmetric SYK model, and discuss the implications for supersymmetric chaotic systems.
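The spectral form factor used here to diagnose chaos can be estimated directly from sampled spectra. A sketch for the Wishart-Laguerre unitary ensemble W = A A† (the normalization and sample counts are illustrative choices, not the paper's):

```python
import numpy as np

def lue_eigs(N, rng):
    """Eigenvalues of a Wishart-Laguerre (LUE) matrix W = A A^dagger."""
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2.0)
    return np.linalg.eigvalsh(A @ A.conj().T)

def sff(N, t, samples=200, seed=0):
    """Ensemble-averaged spectral form factor <|Z(t)|^2> / N^2,
    with Z(t) = sum_j exp(-i E_j t)."""
    rng = np.random.default_rng(seed)
    vals = [abs(np.sum(np.exp(-1j * lue_eigs(N, rng) * t))) ** 2
            for _ in range(samples)]
    return float(np.mean(vals)) / N**2
```

The form factor starts at 1 at t = 0 and decays toward a late-time plateau of order 1/N; the dip-ramp-plateau structure (or, as here, its absence or modification) is what distinguishes the ensembles.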
A Statistical Test of Walrasian Equilibrium by Means of Complex Networks Theory
NASA Astrophysics Data System (ADS)
Bargigli, Leonardo; Viaggiu, Stefano; Lionetto, Andrea
2016-10-01
We represent an exchange economy in terms of statistical ensembles for complex networks by introducing the concept of market configuration. This is defined as a sequence of nonnegative discrete random variables {w_{ij}} describing the flow of a given commodity from agent i to agent j. This sequence can be arranged in a nonnegative matrix W which we can regard as the representation of a weighted and directed network or digraph G. Our main result consists in showing that general equilibrium theory imposes highly restrictive conditions upon market configurations, which are in most cases not fulfilled by real markets. An explicit example with reference to the e-MID interbank credit market is provided.
Lorenzo, Ramón; García, Virginia; Orsini, Marco; Martin, Conchita; Sanz, Mariano
2012-03-01
The aim of this controlled randomized clinical trial was to evaluate the efficacy of a xenogeneic collagen matrix (CM) at 6 months in augmenting the keratinized tissue around implants supporting prosthetic restorations, compared with the standard treatment, the connective tissue autograft (CTG). This randomized longitudinal parallel controlled clinical trial studied 24 patients with at least one location with minimal keratinized tissue (≤1 mm). The primary outcome was the width of keratinized tissue at 6 months. As secondary outcomes, the esthetic appearance, the maintenance of peri-implant mucosal health, and patient morbidity were assessed pre-operatively and 1, 3, and 6 months post-operatively. At 6 months, Group CTG attained a mean width of keratinized tissue of 2.75 (1.5) mm, while the corresponding figure in Group CM was 2.8 (0.4) mm, the inter-group differences not being statistically significant. The surgical procedure in both groups did not significantly alter the mucosal health in the affected abutments. There was a similar esthetic result and a significant increase in vestibular depth in both groups as a result of the surgery; in the CM group it changed from 2.2 (3.3) to 5.1 (2.5) mm at 6 months. The patients treated with the CM reported less pain, needed less pain medication, and the surgical time was shorter, although these differences were not statistically significant when compared with the CTG group. These results show that this new CM was as effective and predictable as the CTG for attaining a band of keratinized tissue. © 2011 John Wiley & Sons A/S.
Random matrix ensembles for many-body quantum systems
NASA Astrophysics Data System (ADS)
Vyas, Manan; Seligman, Thomas H.
2018-04-01
Classical random matrix ensembles were originally introduced in physics to approximate quantum many-particle nuclear interactions. However, there exists a plethora of quantum systems whose dynamics is explained in terms of few-particle (predominantly two-particle) interactions. Random matrix models incorporating the few-particle nature of interactions are known as embedded random matrix ensembles. In the present paper, we provide a brief overview of these two kinds of ensembles and illustrate how the embedded ensembles can be successfully used to study the decoherence of a qubit interacting with an environment, for both fermionic and bosonic embedded ensembles. Numerical calculations show the dependence of decoherence on the nature of the environment.
NASA Astrophysics Data System (ADS)
Zhang, Ning; Shahsavari, Rouzbeh
2016-11-01
As the most widely used manufactured material on Earth, concrete poses serious societal and environmental concerns, which call for innovative strategies to develop greener concrete with improved strength and toughness, properties that are usually mutually exclusive in man-made materials. Herein, we focus on calcium silicate hydrate (C-S-H), the major binding phase of all Portland cement concretes, and study how engineering its nanovoids and portlandite particle inclusions can impart a balance of strength, toughness, and stiffness. By performing more than 600 molecular dynamics simulations coupled with statistical analysis tools, our results provide new evidence of ductile fracture mechanisms in C-S-H, reminiscent of crystalline alloys and ductile metals, decoding the interplay between crack growth, nanovoid/particle inclusions, and stoichiometry, which dictates the crystalline versus amorphous nature of the underlying matrix. We find that the introduction of voids and portlandite particles can significantly increase toughness and ductility, especially in C-S-H with more amorphous matrices, mainly owing to competing mechanisms of crack deflection, void coalescence, internal necking, accommodation, and geometry alteration of individual voids/particles, which together regulate toughness versus strength. Furthermore, utilizing a comprehensive global sensitivity analysis on random configuration-property relations, we show that the mean diameter of voids/particles is the most critical statistical parameter influencing the mechanical properties of C-S-H, irrespective of stoichiometry or the crystalline or amorphous nature of the matrix. This study provides new fundamental insights, design guidelines, and de novo strategies to turn brittle C-S-H into a ductile material, impacting the modern engineering of strong and tough concrete infrastructures and potentially other complex brittle materials.
NASA Astrophysics Data System (ADS)
Zhao, Yan; Stratt, Richard M.
2018-05-01
Surprisingly long-ranged intermolecular correlations begin to appear in isotropic (orientationally disordered) phases of liquid crystal forming molecules when the temperature or density starts to close in on the boundary with the nematic (ordered) phase. Indeed, the presence of slowly relaxing, strongly orientationally correlated, sets of molecules under putatively disordered conditions ("pseudo-nematic domains") has been apparent for some time from light-scattering and optical-Kerr experiments. Still, a fully microscopic characterization of these domains has been lacking. We illustrate in this paper how pseudo-nematic domains can be studied in even relatively small computer simulations by looking for order-parameter tensor fluctuations much larger than one would expect from random matrix theory. To develop this idea, we show that random matrix theory offers an exact description of how the probability distribution for liquid-crystal order parameter tensors converges to its macroscopic-system limit. We then illustrate how domain properties can be inferred from finite-size-induced deviations from these random matrix predictions. A straightforward generalization of time-independent random matrix theory also allows us to prove that the analogous random matrix predictions for the time dependence of the order-parameter tensor are similarly exact in the macroscopic limit, and that relaxation behavior of the domains can be seen in the breakdown of the finite-size scaling required by that random-matrix theory.
Statistical Approaches to Adjusting Weights for Dependent Arms in Network Meta-analysis.
Su, Yu-Xuan; Tu, Yu-Kang
2018-05-22
Network meta-analysis compares multiple treatments in terms of their efficacy and harm by including evidence from randomized controlled trials. Most clinical trials use a parallel design, where patients are randomly allocated to different treatments and receive only one treatment. However, some trials use within-person designs such as split-body, split-mouth, and cross-over designs, where each patient may receive more than one treatment. Data from treatment arms within these trials are no longer independent, so the correlations between dependent arms need to be accounted for within the statistical analyses; ignoring these correlations may lead to incorrect conclusions. The main objective of this study is to develop statistical approaches to adjusting weights for dependent arms within special-design trials. We demonstrate three approaches: the data augmentation approach, the adjusting variance approach, and the reducing weight approach, all of which can be implemented in standard statistical software such as R and Stata. An example of periodontal regeneration is used to demonstrate how these approaches can be implemented within statistical software packages and to compare results from the different approaches. The adjusting variance approach can be implemented with the network package in Stata, while the reducing weight approach requires programming to set up the within-study variance-covariance matrix.
Marino, Ricardo; Majumdar, Satya N; Schehr, Grégory; Vivo, Pierpaolo
2016-09-01
Let P_{β}^{(V)}(N_{I}) be the probability that an N×N β-ensemble of random matrices with confining potential V(x) has N_{I} eigenvalues inside an interval I=[a,b] on the real line. We introduce a general formalism, based on the Coulomb gas technique and the resolvent method, to compute analytically P_{β}^{(V)}(N_{I}) for large N. We show that this probability scales for large N as P_{β}^{(V)}(N_{I})≈exp[-βN^{2}ψ^{(V)}(N_{I}/N)], where β is the Dyson index of the ensemble. The rate function ψ^{(V)}(k_{I}), independent of β, is computed in terms of single integrals that can be easily evaluated numerically. The general formalism is then applied to the classical β-Gaussian (I=[-L,L]), β-Wishart (I=[1,L]), and β-Cauchy (I=[-L,L]) ensembles. Expanding the rate function around its minimum, we find that generically the number variance var(N_{I}) exhibits a nonmonotonic behavior as a function of the size of the interval, with a maximum that can be precisely characterized. These analytical results, corroborated by numerical simulations, provide the full counting statistics of many systems where random matrix models apply. In particular, we present results for the full counting statistics of zero-temperature one-dimensional spinless fermions in a harmonic trap.
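The counting statistics can also be sampled directly for moderate N, which is how Coulomb gas predictions of this kind are typically corroborated numerically. A GUE (β = 2) sketch counting eigenvalues in I = [-L, L], with scales and sample sizes chosen only for illustration:

```python
import numpy as np

def gue_counts(N, L, samples=200, seed=1):
    """Number of eigenvalues of an N x N GUE matrix falling inside
    I = [-L, L], for each of `samples` independent draws."""
    rng = np.random.default_rng(seed)
    counts = np.empty(samples)
    for k in range(samples):
        A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
        H = (A + A.conj().T) / 2.0          # Hermitian GUE sample
        eigs = np.linalg.eigvalsh(H)
        counts[k] = np.sum(np.abs(eigs) <= L)
    return counts
```

The variance of the counts sits far below the Poisson value var = mean, reflecting the spectral rigidity that the rate function quantifies near its minimum.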
Characteristics of level-spacing statistics in chaotic graphene billiards.
Huang, Liang; Lai, Ying-Cheng; Grebogi, Celso
2011-03-01
A fundamental result in nonrelativistic quantum nonlinear dynamics is that the spectral statistics of quantum systems that possess no geometric symmetry, but whose classical dynamics are chaotic, are described by those of the Gaussian orthogonal ensemble (GOE) or the Gaussian unitary ensemble (GUE), in the presence or absence of time-reversal symmetry, respectively. For massless spin-half particles such as neutrinos in relativistic quantum mechanics in a chaotic billiard, the seminal work of Berry and Mondragon established the GUE nature of the level-spacing statistics, due to the combination of the chirality of Dirac particles and the confinement, which breaks the time-reversal symmetry. A question is whether the GOE or the GUE statistics can be observed in experimentally accessible, relativistic quantum systems. We demonstrate, using graphene confinements in which the quasiparticle motions are governed by the Dirac equation in the low-energy regime, that the level-spacing statistics are persistently those of GOE random matrices. We present extensive numerical evidence obtained from the tight-binding approach and a physical explanation for the GOE statistics. We also find that the presence of a weak magnetic field switches the statistics to those of GUE. For a strong magnetic field, Landau levels become influential, causing the level-spacing distribution to deviate markedly from the random-matrix predictions. Issues addressed also include the effects of a number of realistic factors on level-spacing statistics such as next nearest-neighbor interactions, different lattice orientations, enhanced hopping energy for atoms on the boundary, and staggered potential due to graphene-substrate interactions.
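A convenient way to reproduce the GOE-versus-Poisson distinction in such level statistics, without the unfolding step that spacing distributions require, is the consecutive-spacing-ratio statistic. In the sketch below, plain GOE matrices stand in for the tight-binding billiard spectra (an illustrative substitution):

```python
import numpy as np

def mean_spacing_ratio(levels):
    """Mean consecutive-spacing ratio <min(r, 1/r)>; roughly 0.53 for GOE
    spectra and 0.39 for Poisson spectra, with no unfolding required."""
    s = np.diff(np.sort(levels))
    r = s[1:] / s[:-1]
    return float(np.mean(np.minimum(r, 1.0 / r)))

def goe_levels(N, rng):
    """Eigenvalues of one N x N Gaussian orthogonal ensemble matrix."""
    A = rng.standard_normal((N, N))
    return np.linalg.eigvalsh((A + A.T) / 2.0)
```

Because the ratio is taken between adjacent spacings, the local level density cancels, so the statistic can be applied directly to raw numerical spectra such as those from the tight-binding calculations.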
Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression.
Chen, Yanguang
2016-01-01
In geostatistics, the Durbin-Watson test is frequently employed to detect residual serial correlation in least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series; if the variables comprise cross-sectional data coming from spatial random sampling, the test is ineffectual because the value of the Durbin-Watson statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then, by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 Chinese regions. The results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test.
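The autocorrelation coefficient described above, built from a standardized residual vector and a normalized spatial weight matrix, can be sketched directly. The chain-graph weight matrix in the test below is an illustrative stand-in for a real spatial contiguity matrix, and the function name is ours:

```python
import numpy as np

def residual_autocorrelation(e, W):
    """Moran-type serial correlation index for regression residuals e:
    the quadratic form z' Wn z / z' z, with z the standardized residual
    vector and Wn the row-normalized spatial weight matrix."""
    Wn = W / W.sum(axis=1, keepdims=True)   # normalize each row of the weights
    z = (e - e.mean()) / e.std()            # standardized residual vector
    return float(z @ Wn @ z / (z @ z))
```

Spatially smooth residuals (neighbours alike) push the index toward +1, while spatially random residuals leave it near zero, which is the behavior a serial-correlation test needs.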
NASA Astrophysics Data System (ADS)
Bouhaj, M.; von Estorff, O.; Peiffer, A.
2017-09-01
In the application of Statistical Energy Analysis (SEA) to complex assembled structures, a purely predictive model often exhibits errors. These errors are mainly due to a lack of accurate modelling of the power transmission mechanism described through the Coupling Loss Factors (CLF). Experimental SEA (ESEA) is used in practice by the automotive and aerospace industry to verify and update the model, or to derive the CLFs for use in an SEA predictive model when analytical estimates cannot be made. This work is particularly motivated by the lack of procedures for estimating the variance and confidence intervals of the statistical quantities when using the ESEA technique. The aim of this paper is to introduce procedures enabling a statistical description of measured power input, vibration energies, and the derived SEA parameters. Particular emphasis is placed on the identification of structural CLFs of complex built-up structures, comparing different methods. By adopting a Stochastic Energy Model (SEM), the ensemble average in ESEA is also addressed. For this purpose, expressions are obtained to randomly perturb the energy matrix elements and generate individual samples for the Monte Carlo (MC) technique applied to derive the ensemble-averaged CLF. From results of ESEA tests conducted on an aircraft fuselage section, the SEM approach provides better estimates of the CLFs than classical matrix inversion methods. The expected range of CLF values and the synthesized energy are used as quality criteria for the matrix inversion, allowing one to assess critical SEA subsystems that might require a more refined statistical description of the excitation and response fields. Moreover, the impact of the variance of the normalized vibration energy on the uncertainty of the derived CLFs is outlined.
NASA Astrophysics Data System (ADS)
Gudder, Stanley
2008-07-01
A new approach to quantum Markov chains is presented. We first define a transition operation matrix (TOM) as a matrix whose entries are completely positive maps whose column sums form a quantum operation. A quantum Markov chain is defined to be a pair (G,E) where G is a directed graph and E =[Eij] is a TOM whose entry Eij labels the edge from vertex j to vertex i. We think of the vertices of G as sites that a quantum system can occupy and Eij is the transition operation from site j to site i in one time step. The discrete dynamics of the system is obtained by iterating the TOM E. We next consider a special type of TOM called a transition effect matrix. In this case, there are two types of dynamics, a state dynamics and an operator dynamics. Although these two types are not identical, they are statistically equivalent. We next give examples that illustrate various properties of quantum Markov chains. We conclude by showing that our formalism generalizes the usual framework for quantum random walks.
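A minimal numerical instance of a transition operation matrix can be built from single-Kraus-operator CP maps whose column sums are trace preserving; iterating it propagates the site-indexed density matrices while conserving total trace. The two-site, single-qubit example below is our own, chosen only to satisfy the column-sum condition, and is far simpler than the general TOMs the paper considers:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

def tom_step(rhos, K):
    """One step of the state dynamics: rho'_i = sum_j E_ij(rho_j), where
    the CP map E_ij has the single Kraus operator K[i][j]."""
    n = len(rhos)
    return [sum(K[i][j] @ rhos[j] @ K[i][j].conj().T for j in range(n))
            for i in range(n)]

p, q = 0.7, 0.4
# Each column j satisfies sum_i K[i][j]^dagger K[i][j] = I, so the column
# sums form quantum operations and total trace is conserved per step.
K = [[np.sqrt(p) * I2,      np.sqrt(q) * Z],
     [np.sqrt(1 - p) * X,   np.sqrt(1 - q) * I2]]
```

Iterating `tom_step` realizes the discrete dynamics of the chain: probability (trace) flows between the sites while each site carries a genuinely quantum state.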
Giant mesoscopic fluctuations of the elastic cotunneling thermopower of a single-electron transistor
NASA Astrophysics Data System (ADS)
Vasenko, A. S.; Basko, D. M.; Hekking, F. W. J.
2015-02-01
We study the thermoelectric transport of a small metallic island weakly coupled to two electrodes by tunnel junctions. In the Coulomb blockade regime, in the case when the ground state of the system corresponds to an even number of electrons on the island, the main mechanism of electron transport at the lowest temperatures is elastic cotunneling. In this regime, the transport coefficients strongly depend on the realization of the random impurity potential or the shape of the island. Using random-matrix theory, we calculate the thermopower and the thermoelectric kinetic coefficient and study the statistics of their mesoscopic fluctuations in the elastic cotunneling regime. The fluctuations of the thermopower turn out to be much larger than the average value.
Mahajan, Ajay; Dixit, Jaya; Verma, Umesh Pratap
2007-12-01
The present randomized controlled trial was conducted to evaluate an acellular dermal matrix (ADM) graft in terms of patient satisfaction and its effectiveness and efficiency in the treatment of gingival recession. Fourteen patients (seven males and seven females) with Miller Class I and II recessions ≥3 mm participated in this 6-month clinical study. They were assigned randomly to the ADM group (ADM graft and coronally positioned flap [CPF]) or the CPF group (CPF alone). Results were evaluated based on parameters measuring patient satisfaction and clinical outcomes associated with the two treatment procedures. Significance was set at P <0.05. The mean recession was 4.0 ± 1.0 mm and 3.7 ± 0.7 mm for the ADM and CPF groups, respectively. For the ADM group, the defect coverage was 3.85 ± 0.89 mm (97.14%), compared to 2.85 ± 0.89 mm (77.42%) for the CPF group; the difference between the two groups was statistically significant (P <0.05). There were no statistically significant differences between the two groups in the remaining clinical parameters or overall patient satisfaction, except in criteria related to patient comfort and cost effectiveness, in which CPF alone produced significantly better results (P <0.03). The ADM graft is significantly more effective and efficient in the treatment of gingival recession than CPF alone, while CPF emerges as the better option in terms of cost effectiveness and patient comfort.
Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation
NASA Astrophysics Data System (ADS)
Li, C.
2012-07-01
POS, integrating GPS and INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has systematic error but is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades, where usually only two vanishing points can be detected; thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. Firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of the two vanishing points (VX, VY), and a method of setting the initial weights for the adjustment solution of single-image vanishing points is presented; the vanishing points are solved and their error distributions estimated by an iterative method with variable weights, the cofactor matrix, and error-ellipse theory. Thirdly, given the error ellipses of the two vanishing points (VX, VY) and the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion; the Monte Carlo methods used for this random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
Effective Perron-Frobenius eigenvalue for a correlated random map
NASA Astrophysics Data System (ADS)
Pool, Roman R.; Cáceres, Manuel O.
2010-09-01
We investigate the evolution of random positive linear maps with various types of disorder by analytic perturbation and direct simulation. Our theoretical result indicates that the statistics of a random linear map can be successfully described at long times by the mean-value vector state. The growth rate can be characterized by an effective Perron-Frobenius eigenvalue that strongly depends on the type of correlation between the elements of the projection matrix. We apply this approach to an age-structured population dynamics model and show that the asymptotic mean-value vector state characterizes the population growth rate when the age-structured model has random vital parameters. In this case our approach reveals the nontrivial dependence of the effective growth rate on cross correlations. The problem is reduced to the calculation of the smallest positive root of a secular polynomial, which can be obtained by perturbation in terms of a Green's function diagrammatic technique built with noncommutative cumulants for arbitrary n-point correlations.
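The age-structured application can be mimicked numerically with a Leslie matrix whose fertilities fluctuate at each time step; the long-time average log growth then plays the role of the effective Perron-Frobenius eigenvalue. A simulation sketch (uncorrelated log-normal fertility noise is an illustrative choice; the paper treats general correlations analytically):

```python
import numpy as np

def effective_growth_rate(fertility, survival, sigma, steps=2000, seed=0):
    """Long-time log growth rate of an age-structured population whose
    Leslie-matrix fertilities are multiplied by i.i.d. log-normal noise
    exp(sigma * Z) at each time step."""
    rng = np.random.default_rng(seed)
    n = len(fertility)
    v = np.ones(n)
    log_growth = 0.0
    for _ in range(steps):
        L = np.zeros((n, n))
        L[0, :] = fertility * np.exp(sigma * rng.standard_normal(n))  # noisy fertilities
        L[np.arange(1, n), np.arange(n - 1)] = survival               # survival sub-diagonal
        v = L @ v
        s = v.sum()
        log_growth += np.log(s)
        v /= s                        # renormalize to avoid overflow
    return log_growth / steps
```

With sigma = 0 the estimate reduces to the logarithm of the Perron-Frobenius eigenvalue of the deterministic Leslie matrix, which gives a direct check on the simulation.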
Fast Kalman Filter for Random Walk Forecast model
NASA Astrophysics Data System (ADS)
Saibaba, A.; Kitanidis, P. K.
2013-12-01
Kalman filtering is a fundamental tool in statistical time series analysis for understanding the dynamics of large systems for which limited, noisy observations are available. However, standard implementations of the Kalman filter are prohibitive because they require O(N^2) memory and O(N^3) computational cost, where N is the dimension of the state variable. In this work, we focus on the random walk forecast model, which assumes the state transition matrix to be the identity matrix. This model is frequently adopted when the data are acquired at a timescale faster than the dynamics of the state variables and there is considerable uncertainty as to the physics governing the state evolution. We derive an efficient representation for the a priori and a posteriori estimate covariance matrices as a weighted sum of two contributions: the process noise covariance matrix and a low-rank term containing eigenvectors from a generalized eigenvalue problem (GEP), which combines information from the noise covariance matrix and the data. We describe an efficient algorithm to update the weights of the above terms and to compute the eigenmodes of the GEP. The resulting algorithm for the Kalman filter with the random walk forecast model scales as O(N) or O(N log N), both in memory and in computational cost. This opens up the possibility of real-time adaptive experimental design and optimal control in systems of much larger dimension than was previously feasible. For a small number of measurements (~300-400), this procedure can be made numerically exact. However, as the number of measurements increases, for several choices of measurement operators and noise covariance matrices, the spectrum of the GEP decays rapidly and we are justified in retaining only the dominant eigenmodes. We discuss tradeoffs between accuracy and computational cost.
The resulting algorithms are applied to an example application from ray-based travel time tomography.
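For small state dimensions, the random-walk-forecast Kalman filter can be written as the standard dense recursion with state-transition matrix F = I; the fast algorithm described above accelerates precisely these covariance updates through the low-rank GEP representation. A dense reference sketch, with an API of our own choosing:

```python
import numpy as np

def kalman_random_walk(y, H, Q, R, x0, P0):
    """Dense reference Kalman filter for the random walk forecast model
    x_k = x_{k-1} + w_k, i.e. with identity state-transition matrix."""
    x, P = x0.astype(float), P0.astype(float)
    m = len(x0)
    estimates = []
    for yk in y:
        P = P + Q                                  # a priori covariance (F = I)
        S = H @ P @ H.T + R                        # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ (yk - H @ x)                   # a posteriori state
        P = (np.eye(m) - K @ H) @ P                # a posteriori covariance
        estimates.append(x.copy())
    return np.array(estimates)
```

With Q and R matched to the process and observation noise, the filtered estimates track the hidden random walk with a much smaller error than the raw observations.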
NASA Astrophysics Data System (ADS)
Gong, Ming; Hofer, B.; Zallo, E.; Trotta, R.; Luo, Jun-Wei; Schmidt, O. G.; Zhang, Chuanwei
2014-05-01
We develop an effective model to describe the statistical properties of the exciton fine structure splitting (FSS) and polarization angle in quantum dot ensembles (QDEs) using only a few symmetry-related parameters. The connection between the effective model and random matrix theory is established. The effective model is verified both theoretically and experimentally using several rather different types of QDEs, each of which contains hundreds to thousands of QDs. The model naturally addresses three fundamental issues regarding the FSS and polarization angles of QDEs, which are frequently encountered in both theory and experiment. The answers to these fundamental questions yield an approach to characterizing the optical properties of QDEs. Potential applications of the effective model are also discussed.
Lei, Yi; Li, Jianqiang; Wu, Rui; Fan, Yuting; Fu, Songnian; Yin, Feifei; Dai, Yitang; Xu, Kun
2017-06-01
Based on the observed random fluctuation of the speckle pattern across the multimode fiber (MMF) facet and of the received optical power distribution across three output ports, we experimentally investigate the statistical characteristics of a 3×3 radio frequency multiple-input multiple-output (MIMO) channel enabled by mode division multiplexing in a conventional 50 µm MMF, using non-mode-selective three-dimensional waveguide photonic lanterns as mode multiplexer and demultiplexer. The impacts of mode coupling on the MIMO channel coefficients, channel matrix, and channel capacity are analyzed over different fiber lengths. The results indicate that spatial multiplexing benefits from greater fiber length with stronger mode coupling, despite a higher optical loss.
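The channel capacity examined in such MIMO measurements is typically the log-det formula with the available power split equally over the transmit ports. A sketch (the i.i.d. complex Gaussian H used in the test is an idealization standing in for a measured MMF channel matrix):

```python
import numpy as np

def mimo_capacity(H, snr):
    """Capacity (bits/s/Hz) of a MIMO channel with matrix H and total SNR
    `snr` split equally over the Nt transmit ports:
    C = log2 det(I + (snr/Nt) * H H^dagger)."""
    nr, nt = H.shape
    G = H @ H.conj().T                       # Gram matrix of the channel
    return float(np.real(np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * G))))
```

Stronger mode coupling tends to make the measured H better conditioned, which raises the log-det capacity; that is the mechanism behind the paper's observation that multiplexing benefits from longer, more strongly coupled fiber.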
Staggered chiral random matrix theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osborn, James C.
2011-02-01
We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.
Discrete-time systems with random switches: From systems stability to networks synchronization.
Guo, Yao; Lin, Wei; Ho, Daniel W C
2016-03-01
In this article, we develop approaches that enable us to more accurately and analytically identify the essential patterns that guarantee the almost sure stability of discrete-time systems with random switches. We allow the elements of the switching connection matrix to obey even unbounded, continuous-valued distributions. In addition to almost sure stability, we further investigate almost sure synchronization in complex dynamical networks consisting of randomly connected nodes. Numerical examples illustrate that chaotic dynamics in the synchronization manifold is preserved when the statistical parameters enter an almost sure synchronization region established by the developed approach. Moreover, some delicate configurations on the probability space are considered for ensuring synchronization in networks whose nodes are described by nonlinear maps. Both theoretical and numerical results on synchronization are presented by setting only a few random connections in each switch duration. More interestingly, we analytically find it possible to achieve almost sure synchronization in randomly switching complex networks even with very large population sizes, which cannot be easily realized in non-switching but deterministically connected networks.
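Almost sure stability of a randomly switched linear system is governed by the top Lyapunov exponent of the random matrix product: a negative exponent means trajectories decay almost surely even if some individual switch maps are expanding. The sketch below is a generic Monte Carlo estimate under an assumed two-matrix switching rule, not the paper's analytical criterion; the pair of scaled rotations is chosen so the exact exponent is known.

```python
import numpy as np

def top_lyapunov(sample_matrix, n_steps=20000, dim=2, seed=0):
    """Monte Carlo estimate of the top Lyapunov exponent of a product of i.i.d.
    random matrices; a negative value indicates almost sure exponential stability."""
    rng = np.random.default_rng(seed)
    v = np.ones(dim) / np.sqrt(dim)
    acc = 0.0
    for _ in range(n_steps):
        v = sample_matrix(rng) @ v
        norm = np.linalg.norm(v)
        acc += np.log(norm)
        v /= norm                       # renormalize to avoid under/overflow
    return acc / n_steps

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

# Fair-coin switching between a contracting and an expanding map. Because both
# are scaled rotations, the exact exponent is 0.5*ln(0.4) + 0.5*ln(1.2) ~ -0.367.
A_stable, A_unstable = 0.4 * rot(0.3), 1.2 * rot(1.1)

def draw(rng):
    return A_stable if rng.random() < 0.5 else A_unstable

lam = top_lyapunov(draw)    # negative: the switched system is almost surely stable
```

Note that the expanding map alone is unstable; it is the random switching that yields almost sure stability, mirroring the article's theme.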
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our earlier work. It is a general prescription applicable to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Indeed, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE, one of the major variants of MPKCs, against the Gröbner basis attack. In 1998, Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in general in an illustrative manner, and not for practical use in enhancing the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms for doing so. In particular, the second one has a concise form and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
Experimental Study of Quantum Graphs with Microwave Networks
NASA Astrophysics Data System (ADS)
Fu, Ziyuan; Koch, Trystan; Antonsen, Thomas; Ott, Edward; Anlage, Steven; Wave Chaos Team
An experimental setup consisting of microwave networks is used to simulate quantum graphs. The networks are constructed from coaxial cables connected by T junctions, and are built in both room-temperature versions and superconducting versions that operate at cryogenic temperatures. In the experiments, a phase shifter is connected to one of the network bonds to generate an ensemble of quantum graphs by varying the phase delay. The eigenvalue spectrum is found from S-parameter measurements on one-port graphs. With the experimental data, the nearest-neighbor spacing statistics and the impedance statistics of the graphs are examined. It is also demonstrated that time-reversal invariance for microwave propagation in the graphs can be broken, without significantly increasing dissipation, by making nodes with circulators. Random matrix theory (RMT) successfully describes the universal statistical properties of the system. We acknowledge support under contract AFOSR COE Grant FA9550-15-1-0171.
Electron Waiting Times in Mesoscopic Conductors
NASA Astrophysics Data System (ADS)
Albert, Mathias; Haack, Géraldine; Flindt, Christian; Büttiker, Markus
2012-05-01
Electron transport in mesoscopic conductors has traditionally involved investigations of the mean current and the fluctuations of the current. A complementary view on charge transport is provided by the distribution of waiting times between charge carriers, but a proper theoretical framework for coherent electronic systems has so far been lacking. Here we develop a quantum theory of electron waiting times in mesoscopic conductors expressed by a compact determinant formula. We illustrate our methodology by calculating the waiting time distribution for a quantum point contact and find a crossover from Wigner-Dyson statistics at full transmission to Poisson statistics close to pinch-off. Even when the low-frequency transport is noiseless, the electrons are not equally spaced in time due to their inherent wave nature. We discuss the implications for renewal theory in mesoscopic systems and point out several analogies with level spacing statistics and random matrix theory.
Many-body-localization: strong disorder perturbative approach for the local integrals of motion
NASA Astrophysics Data System (ADS)
Monthus, Cécile
2018-05-01
For random quantum spin models, the strong disorder perturbative expansion of the local integrals of motion around the real-spin operators is revisited. The emphasis is on the links with other properties of the many-body-localized phase, in particular the memory in the dynamics of the local magnetizations and the statistics of matrix elements of local operators in the eigenstate basis. Finally, this approach is applied to analyze the many-body-localization transition in a toy model studied previously from the point of view of the entanglement entropy.
Cardaropoli, Daniele; Tamagnone, Lorenzo; Roffredo, Alessandro; Gaveglio, Lorena
2012-03-01
Connective tissue graft (CTG) plus coronally advanced flap (CAF) is the reference therapy for root coverage. The aim of the present study is to evaluate the use of a porcine collagen matrix (PCM) plus CAF as an alternative to CTG + CAF for the treatment of gingival recessions (REC) in a prospective, randomized, controlled clinical trial. Eighteen adult patients participated in this study. The patients presented 22 single Miller's Class I or II REC, randomly assigned to the test (PCM + CAF) or control (CTG + CAF) group. REC, probing depth, clinical attachment level (CAL), and width of keratinized tissue (KG) were evaluated at 12 months. In addition, the gingival thickness (GT) was measured 1 mm apical to the bottom of the sulcus. At 12 months, mean REC was 0.23 mm for test sites and 0.09 mm for control sites (P < 0.01), whereas the percentage of root coverage was 94.32% and 96.97%, respectively. CAL gain was 2.41 mm in test sites and 2.95 mm in control sites (P < 0.01). KG gain was 1.23 mm in the test group and 1.27 mm in the control group (P < 0.01). In test sites, GT changed from 0.82 to 1.82 mm, and in control sites, from 0.86 to 2.09 mm (P < 0.01). Within the limits of the study, both treatment procedures resulted in significant reduction in REC at 12 months. No statistically significant differences were found between PCM + CAF and CTG + CAF with regard to any clinical parameter. The collagen matrix represents a possible alternative to CTG.
A stochastic Markov chain model to describe lung cancer growth and metastasis.
Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila A; Nieva, Jorge; Kuhn, Peter
2012-01-01
A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities) interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady-state is found (up to a numerical threshold). Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices calculated using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble averaged transition probabilities as (approximately) normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung position to other sites and we highlight several key findings based on the model.
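The long-time limit referenced above can be sketched generically: for a row-stochastic transition matrix, repeated application of the chain converges to the steady-state distribution. The 5-site matrix below is a hypothetical stand-in, not the 50-site transition matrix fitted to the autopsy data.

```python
import numpy as np

def steady_state(T, tol=1e-12, max_iter=10000):
    """Long-time limit of a Markov chain with row-stochastic transition matrix T,
    found by iterating pi <- pi T until convergence."""
    pi = np.full(T.shape[0], 1.0 / T.shape[0])
    for _ in range(max_iter):
        nxt = pi @ T
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

rng = np.random.default_rng(42)
# Hypothetical 5-site network; rows are normalized to be transition probabilities.
T = rng.random((5, 5))
T /= T.sum(axis=1, keepdims=True)
pi = steady_state(T)
```

The paper's inverse problem runs the other way: search for a transition matrix whose steady state matches the observed metastatic distribution, which is why an ensemble of matrices (rather than a single one) is characterized.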
Unifying model for random matrix theory in arbitrary space dimensions
NASA Astrophysics Data System (ADS)
Cicuta, Giovanni M.; Krausser, Johannes; Milkus, Rico; Zaccone, Alessio
2018-03-01
A sparse random block matrix model, suggested by the Hessian matrix used in the study of elastic vibrational modes of amorphous solids, is presented and analyzed. By evaluating some moments, benchmarked against numerics, differences in the eigenvalue spectrum of this model in different limits of space dimension d, and for arbitrary values of the lattice coordination number Z, are shown and discussed. As a function of these two parameters (and their ratio Z/d), the most studied models in random matrix theory (Erdős-Rényi graphs, effective medium, and replicas) can be reproduced in the various limits of block dimensionality d. Remarkably, the Marchenko-Pastur spectral density (which is recovered by replica calculations for the Laplacian matrix) is reproduced exactly in the limit of infinite block size, or d → ∞, which clarifies the physical meaning of space dimension in these models. We feel that the approximate results for d = 3 provided by our method may have many potential applications in the future, from the vibrational spectrum of glasses and elastic networks to wave localization, disordered conductors, random resistor networks, and random walks.
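As a point of reference for the Marchenko-Pastur density mentioned above, the sketch below compares it against the eigenvalues of a standard Wishart matrix. This is the classical dense-matrix setting, an assumption made here for illustration; it is not the sparse block model of the paper.

```python
import numpy as np

def marchenko_pastur_pdf(x, ratio):
    """Marchenko-Pastur eigenvalue density of W = X X^T / n for p/n = ratio <= 1,
    supported on [(1 - sqrt(ratio))^2, (1 + sqrt(ratio))^2]."""
    lo, hi = (1 - np.sqrt(ratio)) ** 2, (1 + np.sqrt(ratio)) ** 2
    pdf = np.zeros_like(x)
    inside = (x > lo) & (x < hi)
    pdf[inside] = np.sqrt((hi - x[inside]) * (x[inside] - lo)) / (2 * np.pi * ratio * x[inside])
    return pdf

rng = np.random.default_rng(7)
p, n = 400, 1600                      # aspect ratio p/n = 0.25
X = rng.standard_normal((p, n))
eigs = np.linalg.eigvalsh(X @ X.T / n)   # empirical spectrum to compare with the pdf
```

For ratio 0.25 the support is [0.25, 2.25], and the empirical eigenvalues fall inside it up to small edge fluctuations.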
Acellular dermal matrix allograft. The results of controlled randomized clinical studies.
Novaes, Arthur Belém; de Barros, Raquel Rezende Martins
2008-10-01
The aim of this presentation was to discuss the effectiveness of the acellular dermal matrix in root coverage therapy and in alveolar ridge augmentation, based on three controlled randomized clinical trials conducted by our research team (Novaes Jr et al., 2001; Barros et al., 2005; Luczyszyn et al., 2005). The first and second studies highlight the allograft's performance in the treatment of gingival recession. In both studies, clinical parameters were assessed prior to surgery and 6 or 12 months post-surgery. The first one compared the use of the acellular dermal matrix with the subepithelial connective tissue graft and showed 1.83 and 2.10 mm of recession reduction, respectively. Because no statistically significant differences between the groups were observed, it was concluded that the allograft can be used as a substitute for the autograft. In the second study, a new surgical approach was compared to a conventional surgical procedure described by Langer and Langer in 1985. A statistically significant greater recession reduction favoring the test procedure was achieved. The percentage of root coverage was 82.5% and 62.3% for the test and control groups, respectively. Thus, the new technique was considered more suitable for the treatment of gingival recessions with the allograft. Finally, the third study evaluated the allograft as a membrane, associated or not with a resorbable hydroxyapatite, in bone regeneration to prevent ridge deformities. In one group the extraction sockets were covered only by the allograft, and in the other, the alveoli were also filled with the resorbable hydroxyapatite. After six months, both treatments were able to preserve ridge thickness relative to the preoperative values.
In conclusion, no adverse healing events were noted with the use of the allograft in site preservation procedures, and sites treated with the combination of allograft plus resorbable hydroxyapatite showed significantly greater ridge thickness preservation at six months when compared to sites treated with allograft alone (6.8 ± 1.26 and 5.53 ± 1.06, respectively).
Henschel, Volkmar; Engel, Jutta; Hölzel, Dieter; Mansmann, Ulrich
2009-02-10
Multivariate analysis of interval censored event data based on classical likelihood methods is notoriously cumbersome, and likelihood inference for models that additionally include random effects is not available at all. Existing algorithms pose problems for practical users, such as matrix inversion, slow convergence, and no assessment of statistical uncertainty. MCMC procedures combined with imputation are used to implement hierarchical models for interval censored data within a Bayesian framework. Two examples from clinical practice demonstrate the handling of clustered interval censored event times as well as multilayer random effects for inter-institutional quality assessment. The software developed is called survBayes and is freely available on CRAN. The proposed software supports the solution of complex analyses in many fields of clinical epidemiology as well as health services research.
The feasibility and stability of large complex biological networks: a random matrix approach.
Stone, Lewi
2018-05-29
In the 1970s, Robert May demonstrated that complexity creates instability in generic models of ecological networks having random interaction matrices A. Similar random matrix models have since been applied in many disciplines. Central to assessing stability is the "circular law", since it describes the eigenvalue distribution for an important class of random matrices A. However, despite widespread adoption, the "circular law" does not apply for ecological systems in which density-dependence operates (i.e., where a species' growth is determined by its density). Instead one needs to study the far more complicated eigenvalue distribution of the community matrix S = DA, where D is a diagonal matrix of population equilibrium values. Here we obtain this eigenvalue distribution. We show that if the random matrix A is locally stable, the community matrix S = DA will also be locally stable, provided the system is feasible (i.e., all species have positive equilibria, D > 0). This helps explain why, unusually, nearly all feasible systems studied here are locally stable. Large complex systems may thus be even more fragile than May predicted, given the difficulty of assembling a feasible system. It was also found that the degree of stability, or resilience, of a system depends on the minimum equilibrium population.
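The claim that stability of A carries over to S = DA can be checked numerically in the special case of a symmetric stable A, where it follows from a similarity argument (DA is similar to D^{1/2} A D^{1/2}). The construction below is that special case, an assumption for illustration; the paper's result covers a broader random-matrix class.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
# Random symmetric interaction matrix, shifted so that A is stable
# (all eigenvalues strictly negative).
B = rng.standard_normal((n, n))
A = (B + B.T) / 2
A -= (np.linalg.eigvalsh(A).max() + 1.0) * np.eye(n)

# Feasible system: all equilibrium populations positive.
D = np.diag(rng.uniform(0.1, 2.0, n))
S = D @ A                               # community matrix

# D A is similar to D^{1/2} A D^{1/2}, so stability of the symmetric A carries over.
max_re = np.linalg.eigvals(S).real.max()
```

For a general nonsymmetric A this inheritance is not automatic (it is the D-stability question), which is what makes the paper's random-matrix result informative.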
Periodic orbit spectrum in terms of Ruelle-Pollicott resonances
NASA Astrophysics Data System (ADS)
Leboeuf, P.
2004-02-01
Fully chaotic Hamiltonian systems possess an infinite number of classical solutions which are periodic, i.e., a trajectory p returns to its initial conditions after some fixed time τ_p. Our aim is to investigate the spectrum {τ_1, τ_2, …} of periods of the periodic orbits. An explicit formula for the density ρ(τ) = ∑_p δ(τ - τ_p) is derived in terms of the eigenvalues of the classical evolution operator. The density is naturally decomposed into a smooth part plus an interferent sum over oscillatory terms. The frequencies of the oscillatory terms are given by the imaginary parts of the complex eigenvalues (Ruelle-Pollicott resonances). For large periods, corrections to the well-known exponential growth of the smooth part of the density are obtained. An alternative formula for ρ(τ) in terms of the zeros and poles of the Ruelle ζ function is also discussed. The results are illustrated with the geodesic motion in billiards of constant negative curvature. Connections with the statistical properties of the corresponding quantum eigenvalues, random-matrix theory, and discrete maps are also considered. In particular, a random-matrix conjecture is proposed for the eigenvalues of the classical evolution operator of chaotic billiards.
Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression
Chen, Yanguang
2016-01-01
In geostatistics, the Durbin-Watson test is frequently employed to detect residual serial correlation in least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of the Durbin-Watson statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then, by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 Chinese regions. The results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test. PMID:26800271
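The core construction, a Moran-style coefficient built from a standardized residual vector and a row-normalized spatial weight matrix, can be sketched as follows. The ring neighborhood, sample size, and test signals are assumptions for illustration, not the paper's 29-region sample or its exact index definitions.

```python
import numpy as np

def residual_moran(residuals, W):
    """Moran-style autocorrelation coefficient for regression residuals with a
    row-normalized spatial weight matrix (a sketch of the paper's construction)."""
    e = residuals - residuals.mean()
    e = e / np.linalg.norm(e)               # standardized residual vector
    Wn = W / W.sum(axis=1, keepdims=True)   # row-normalized spatial weights
    return e @ Wn @ e                       # near 0 for spatially random residuals

rng = np.random.default_rng(11)
n = 200
# Hypothetical ring neighborhood: each unit's neighbors are the two adjacent units.
idx = np.arange(n)
W = np.zeros((n, n))
W[idx, (idx - 1) % n] = 1.0
W[idx, (idx + 1) % n] = 1.0

r_random = residual_moran(rng.standard_normal(n), W)          # near zero
smooth = np.sin(2 * np.pi * idx / n) + 0.1 * rng.standard_normal(n)
r_smooth = residual_moran(smooth, W)                          # strongly positive
```

Unlike the Durbin-Watson statistic, this coefficient does not depend on any ordering of the observations, only on the weight matrix.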
Data-Enabled Quantification of Aluminum Microstructural Damage Under Tensile Loading
NASA Astrophysics Data System (ADS)
Wayne, Steven F.; Qi, G.; Zhang, L.
2016-08-01
The study of material failure with digital analytics is in its infancy and offers a new perspective to advance our understanding of damage initiation and evolution in metals. In this article, we study the failure of aluminum using data-enabled methods, statistics, and data mining. Through the use of tension tests, we establish a multivariate acoustic-data matrix of random damage events, which typically are not visible and are very difficult to measure due to their variability, diversity, and interactivity during damage processes. Aluminum alloy 6061-T651 and single-crystal aluminum with a (111) orientation were evaluated by comparing the collection of acoustic signals from damage events caused primarily by slip in the single crystal and multimode fracture of the alloy. We found the resulting acoustic damage-event data to be large semi-structured volumes of Big Data with the potential to be mined for information that describes the material's damage state under strain. Our data-enabled analyses have allowed us to determine statistical distributions of multiscale random damage that provide a means to quantify the material damage state.
Kakagia, Despoina D; Kazakos, Konstantinos J; Xarchas, Konstantinos C; Karanikas, Michael; Georgiadis, George S; Tripsiannis, Gregory; Manolas, Constantinos
2007-01-01
This study tests the hypothesis that addition of a protease-modulating matrix enhances the efficacy of autologous growth factors in diabetic ulcers. Fifty-one patients with chronic diabetic foot ulcers were managed as outpatients at the Democritus University Hospital of Alexandroupolis and followed up for 8 weeks. All target ulcers were ≥2.5 cm in any one dimension and had been previously treated only with moist gauze. Patients were randomly allocated into three groups of 17 patients each: Group A was treated only with the oxidized regenerated cellulose/collagen biomaterial (Promogran, Johnson & Johnson, New Brunswick, NJ), Group B was treated only with autologous growth factors delivered by the Gravitational Platelet Separation System (GPS, Biomet), and Group C was managed by a combination of both. All ulcers were digitally photographed at initiation of the study and then at change of dressings once weekly. Computerized planimetry (Texas Health Science Center ImageTool, Version 3.0) was used to assess ulcer dimensions, which were analyzed for homogeneity and significance using the Statistical Package for Social Sciences, Version 13.0. Post hoc analysis revealed that there was significantly greater reduction of all three dimensions of the ulcers in Group C compared to Groups A and B (all P < .001). Although reduction of ulcer dimensions was greater in Group A than in Group B, these differences did not reach statistical significance. It is concluded that protease-modulating dressings act synergistically with autologous growth factors and enhance their efficacy in diabetic foot ulcers.
Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory
NASA Astrophysics Data System (ADS)
Suliman, Mohamed; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.
2016-12-01
In this supplementary appendix we provide proofs and additional extensive simulations that complement the analysis of the main paper (constrained perturbation regularization approach for signal estimation using random matrix theory).
Statistical behavior of time dynamics evolution of HIV infection
NASA Astrophysics Data System (ADS)
González, Ramón E. R.; Santos, Iury A. X.; Nunes, Marcos G. P.; de Oliveira, Viviane M.; Barbosa, Anderson L. R.
2017-09-01
We use the tools of random matrix theory (RMT) to investigate the statistical behavior of the evolution of human immunodeficiency virus (HIV) infection. By means of the nearest-neighbor spacing distribution, we have identified four distinct regimes in the evolution of HIV infection. We verified that at the beginning of the so-called clinical latency phase the concentration of infected cells grows slowly and evolves in a correlated way. This regime is followed by one in which the correlation is lost, and that in turn leads the system to a regime in which the increase of infected cells is faster and correlated. In the final phase, in which acquired immunodeficiency syndrome (AIDS) is established, the system presents maximum correlation, as demonstrated by the GOE distribution.
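The Poisson/GOE distinction used above can be reproduced on synthetic spectra. The sketch below uses the adjacent-gap-ratio statistic rather than the nearest-neighbor spacing distribution itself, an assumed substitution that avoids the unfolding step; the reference values 0.386 (Poisson) and 0.531 (GOE) are the standard ones for this statistic.

```python
import numpy as np

def mean_spacing_ratio(levels):
    """Mean adjacent-gap ratio <min(r, 1/r)>, an unfolding-free surrogate for the
    nearest-neighbor spacing distribution. Reference values: ~0.386 for Poisson
    (uncorrelated) levels, ~0.531 for GOE (correlated) levels."""
    s = np.diff(np.sort(levels))
    r = s[1:] / s[:-1]
    return np.mean(np.minimum(r, 1.0 / r))

rng = np.random.default_rng(5)
# Uncorrelated (Poisson) spectrum: gaps are i.i.d. exponential.
poisson_levels = np.cumsum(rng.exponential(size=4000))
# Correlated (GOE) spectrum: eigenvalues of a random real symmetric matrix.
M = rng.standard_normal((2000, 2000))
goe_levels = np.linalg.eigvalsh((M + M.T) / np.sqrt(2))

r_poisson = mean_spacing_ratio(poisson_levels)
r_goe = mean_spacing_ratio(goe_levels)
```

Larger values signal level repulsion (correlation), which is how a crossover from uncorrelated to GOE-like behavior is detected in data.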
On optimal current patterns for electrical impedance tomography.
Demidenko, Eugene; Hartov, Alex; Soni, Nirmal; Paulsen, Keith D
2005-02-01
We develop a statistical criterion for optimal patterns in planar circular electrical impedance tomography. These patterns minimize the total variance of the estimate of the resistance or conductance matrix. It is shown that trigonometric patterns (Isaacson, 1986), originally derived from the concept of distinguishability, are a special case of our optimal statistical patterns. New optimal random patterns are introduced. Recovering the electrical properties of the measured body is greatly simplified when optimal patterns are used. The Neumann-to-Dirichlet map and the optimal patterns are derived for a homogeneous medium with an arbitrary distribution of the electrodes on the periphery. As a special case, optimal patterns are developed for a practical EIT system with a finite number of electrodes. For a general nonhomogeneous medium, with no a priori restriction, the optimal patterns for the resistance and conductance matrices are the same. However, for a homogeneous medium, the best current pattern is the worst voltage pattern and vice versa. We study the effect of the number and the width of the electrodes on the estimates of resistivity and conductivity in a homogeneous medium. We confirm experimentally that the optimal patterns produce minimum conductivity variance in a homogeneous medium. Our statistical model is able to discriminate between a homogeneous agar phantom and one with a 2 mm air hole, with error probability (p-value) 1/1000.
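The trigonometric patterns cited above are easy to generate for equally spaced electrodes. The sketch below builds them for an assumed 16-electrode ring and relies on two of their textbook properties: each pattern injects zero net current, and the patterns are mutually orthogonal. The electrode count is an assumption for illustration.

```python
import numpy as np

L = 16                                    # electrodes equally spaced on the circle
theta = 2 * np.pi * np.arange(L) / L
# Trigonometric current patterns: cos(k*theta) for k = 1..L/2 and
# sin(k*theta) for k = 1..L/2 - 1, giving L - 1 linearly independent patterns.
patterns = [np.cos(k * theta) for k in range(1, L // 2 + 1)]
patterns += [np.sin(k * theta) for k in range(1, L // 2)]
T = np.array(patterns)                    # (L-1) x L matrix, one pattern per row
```

Zero row sums correspond to current conservation across the electrodes, and orthogonality is what makes the resulting measurements statistically efficient in the homogeneous case.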
[Three-dimensional parallel collagen scaffold promotes tendon extracellular matrix formation].
Zheng, Zefeng; Shen, Weiliang; Le, Huihui; Dai, Xuesong; Ouyang, Hongwei; Chen, Weishan
2016-03-01
To investigate the effects of a three-dimensional parallel collagen scaffold on the cell shape, arrangement, and extracellular matrix formation of tendon stem cells. A parallel collagen scaffold was fabricated by a unidirectional freezing technique, while a random collagen scaffold was fabricated by a freeze-drying technique. The effects of the two scaffolds on cell shape and extracellular matrix formation were investigated in vitro by seeding tendon stem/progenitor cells and in vivo by ectopic implantation. Parallel and random collagen scaffolds were produced successfully. The parallel collagen scaffold was more akin to tendon than the random collagen scaffold. Tendon stem/progenitor cells were spindle-shaped and uniformly oriented in the parallel collagen scaffold, while cells on the random collagen scaffold had disordered orientation. Two weeks after ectopic implantation, cells had nearly the same orientation as the collagen substance. In the parallel collagen scaffold, cells had a parallel arrangement, and more spindly cells were observed. By contrast, cells in the random collagen scaffold were disordered. The parallel collagen scaffold can induce cells into a spindly, parallel arrangement and promote parallel extracellular matrix formation, while the random collagen scaffold induces cells in a random arrangement. The results indicate that the parallel collagen scaffold is an ideal structure to promote tendon repair.
Spectrum of walk matrix for Koch network and its application
NASA Astrophysics Data System (ADS)
Xie, Pinchen; Lin, Yuan; Zhang, Zhongzhi
2015-06-01
Various structural and dynamical properties of a network are encoded in the eigenvalues of the walk matrix describing random walks on the network. In this paper, we study the spectrum of the walk matrix of the Koch network, which displays prominent scale-free and small-world features. Utilizing the particular architecture of the network, we obtain all the eigenvalues and their corresponding multiplicities. Based on the link between the eigenvalues of the walk matrix and the random target access time, defined as the expected time for a walker to go from an arbitrary node to another node selected randomly according to the steady-state distribution, we then derive an explicit solution for the random target access time for random walks on the Koch network. Finally, we corroborate our computation of the eigenvalues by enumerating spanning trees in the Koch network, using the connection governing eigenvalues and spanning trees, where a spanning tree of a network is a subgraph of the network, that is, a tree containing all the nodes.
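The two ingredients above, the walk matrix spectrum and the spanning-tree cross-check, can be illustrated on a small graph. The complete graph K4 is an assumed stand-in for the Koch network; the matrix-tree theorem is the standard route to the spanning-tree count, not necessarily the paper's exact derivation.

```python
import numpy as np

# Illustration on the complete graph K4 (a stand-in for the Koch network).
n = 4
A = np.ones((n, n)) - np.eye(n)          # adjacency matrix
deg = A.sum(axis=1)
W = A / deg[:, None]                      # walk matrix D^{-1} A of the random walk
walk_eigs = np.sort(np.linalg.eigvals(W).real)

# Matrix-tree theorem: any cofactor of the Laplacian counts spanning trees,
# the eigenvalue/spanning-tree connection used to corroborate the spectrum.
Lap = np.diag(deg) - A
n_trees = round(np.linalg.det(Lap[1:, 1:]))   # K4 has 4^{4-2} = 16 spanning trees
```

For K4 the walk matrix eigenvalues are 1 (once) and -1/3 (three times), and Cayley's formula confirms the 16 spanning trees.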
NASA Astrophysics Data System (ADS)
Pavlichin, Dmitri S.; Mabuchi, Hideo
2014-06-01
Nanoscale integrated photonic devices and circuits offer a path to ultra-low-power computation at the few-photon level. Here we propose an optical circuit that performs a ubiquitous operation: the controlled, random-access readout of a collection of stored memory phases or, equivalently, the computation of the inner product of a vector of phases with a binary "selector" vector, where the arithmetic is done modulo 2π and the result is encoded in the phase of a coherent field. This circuit, a collection of cascaded interferometers driven by a coherent input field, demonstrates the use of coherence as a computational resource, and the use of recently developed mathematical tools for modeling optical circuits with many coupled parts. The construction extends in a straightforward way to the computation of matrix-vector and matrix-matrix products and, with the inclusion of an optical feedback loop, to the computation of a "weighted" readout of stored memory phases. We note some applications of these circuits for error correction and for computing tasks requiring fast vector inner products, e.g., statistical classification and some machine learning algorithms.
NASA Technical Reports Server (NTRS)
Menga, G.
1975-01-01
An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, within one of those classes, a measure of the approximation between the model and the process, evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound for the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.
Statistical mechanics of homogeneous partly pinned fluid systems.
Krakoviack, Vincent
2010-12-01
The homogeneous partly pinned fluid systems are simple models of a fluid confined in a disordered porous matrix, obtained by arresting randomly chosen particles in a one-component bulk fluid or in one of the two components of a binary mixture. In this paper, their configurational properties are investigated. It is shown that a peculiar complementarity exists between the mobile and immobile phases, which originates from the fact that the solid is prepared in the presence of, and in equilibrium with, the adsorbed fluid. Simple identities follow, which connect different types of configurational averages, either relative to the fluid-matrix system or to the bulk fluid from which it is prepared. Crucial simplifications result for the computation of important structural quantities, both in computer simulations and in theoretical approaches. Finally, possible applications of the model in the field of dynamics in confinement or in strongly asymmetric mixtures are suggested.
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.
Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun
2017-09-21
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision making. It has many real applications, including statistical tests and information theory. Owing to the statistical and computational challenges of high dimensionality, little work has been done on estimating the determinant of a high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrices. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines, based on the sample size, the dimension, and the correlation of the data set, for estimating the determinant of a high-dimensional covariance matrix. Finally, from the perspective of the loss function, the comparison study in this paper may also serve as a proxy for assessing the performance of covariance matrix estimation.
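To illustrate why the plug-in estimate becomes delicate when the dimension is comparable to the sample size, the sketch below contrasts the log-determinant of the sample covariance with a simple linear-shrinkage variant. The shrinkage weight of 0.2 and the trace-matched identity target are illustrative assumptions, not one of the eight estimators compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 80, 40                      # sample size and dimension
X = rng.standard_normal((n, p))    # true covariance = identity, so log|Sigma| = 0

# Plug-in estimate: log-determinant of the sample covariance matrix.
S = np.cov(X, rowvar=False)
sign, logdet_sample = np.linalg.slogdet(S)

# Shrinkage estimate: shrink S toward a trace-matched multiple of the
# identity before taking the determinant.
alpha = 0.2
target = np.trace(S) / p * np.eye(p)
S_shrunk = (1 - alpha) * S + alpha * target
_, logdet_shrunk = np.linalg.slogdet(S_shrunk)

print(logdet_sample, logdet_shrunk)
```

Because the shrinkage preserves the trace while equalizing the eigenvalues, its log-determinant is never below the plug-in value, partially offsetting the well-known downward bias of the sample estimate.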
NASA Astrophysics Data System (ADS)
Doerr, Timothy P.; Alves, Gelio; Yu, Yi-Kuo
2005-08-01
Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time using the transfer matrix technique or, equivalently, the dynamic programming approach. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions under the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth-best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method that starts with a scaling function generated from the finite number of high-ranking solutions, followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data. For directed paths in random media, the scaling function depends on the particular realization of randomness; in the mass spectrometry case, the scaling function is spectrum-specific.
NASA Astrophysics Data System (ADS)
Deelan Cunden, Fabio; Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio
2013-05-01
Let a pure state |ψ⟩ be chosen randomly in an NM-dimensional Hilbert space, and consider the reduced density matrix ρ_A of an N-dimensional subsystem. The bipartite entanglement properties of |ψ⟩ are encoded in the spectrum of ρ_A. By means of a saddle point method and using a "Coulomb gas" model for the eigenvalues, we obtain the typical spectrum of reduced density matrices. We consider the cases of an unbiased ensemble of pure states and of a fixed value of the purity. We finally obtain the eigenvalue distribution by using a statistical mechanics approach based on the introduction of a partition function.
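A numerical counterpart of this setup is easy to sketch: draw a Haar-random pure state, trace out the M-dimensional factor, and inspect the spectrum of ρ_A. The dimensions N = M = 8 below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 8, 8   # subsystem and environment dimensions

# A normalized complex Gaussian vector is Haar-distributed on the unit
# sphere of the NM-dimensional Hilbert space.
psi = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
psi /= np.linalg.norm(psi)

# Reduced density matrix of the N-dimensional subsystem: rho_A = Tr_B |psi><psi|
rho_A = psi @ psi.conj().T

# The spectrum of rho_A encodes the bipartite entanglement of |psi>.
eigs = np.linalg.eigvalsh(rho_A)
purity = float(np.sum(eigs**2))
print(eigs.sum(), purity)
```

Averaging such spectra over many draws gives the unbiased-ensemble case; conditioning on a fixed purity value corresponds to the second case treated in the paper.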
Equilibrium structure of δ-Bi(2)O(3) from first principles.
Music, Denis; Konstantinidis, Stephanos; Schneider, Jochen M
2009-04-29
Using ab initio calculations, we have systematically studied the structure of δ-Bi(2)O(3) (fluorite prototype, 25% oxygen vacancies), probing [Formula: see text] and combined [Formula: see text] and [Formula: see text] oxygen vacancy ordering, random distributions of oxygen vacancies with two different statistical descriptions, as well as local relaxations. We observe that the combined [Formula: see text] and [Formula: see text] oxygen vacancy ordering is the most stable configuration. Radial distribution functions for these configurations can be classified as discrete (ordered configurations) and continuous (random configurations). This classification can be understood on the basis of local structural relaxations. Up to 28.6% local relaxation of the oxygen sublattice is present in the random configurations, giving rise to continuous distribution functions. The observed phase stability may be explained by the bonding analysis. Electron lone-pair charges in the predominantly ionic Bi-O matrix may stabilize the combined [Formula: see text] and [Formula: see text] oxygen vacancy ordering.
NASA Astrophysics Data System (ADS)
Telfeyan, Katherine; Ware, S. Doug; Reimus, Paul W.; Birdsell, Kay H.
2018-02-01
Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating effective matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada National Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of effective matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than effective matrix diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields effective matrix diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between them, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.
Gravitational lensing by eigenvalue distributions of random matrix models
NASA Astrophysics Data System (ADS)
Martínez Alonso, Luis; Medina, Elena
2018-05-01
We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.
Graphic matching based on shape contexts and reweighted random walks
NASA Astrophysics Data System (ADS)
Zhang, Mingxuan; Niu, Dongmei; Zhao, Xiuyang; Liu, Mingjun
2018-04-01
Graph matching is a critical issue in many areas of computer vision. In this paper, a new graph matching algorithm combining shape contexts and reweighted random walks is proposed. Building on the local shape context descriptor, the reweighted random walks algorithm is modified to achieve stronger robustness and correctness in the final result. Our main idea is to use the shape context descriptors during the random walk iteration to control the random walk probability matrix. We compute a bias matrix from the descriptors and use it during the iteration to improve the accuracy of the random walks and random jumps; finally, we obtain the one-to-one registration result by discretizing the matrix. The algorithm not only preserves the noise robustness of reweighted random walks but also possesses the rotation, translation, and scale invariance of shape contexts. Extensive experiments on real images and random synthetic point sets, together with comparisons with other algorithms, confirm that the new method produces excellent results in graph matching.
Generating an Empirical Probability Distribution for the Andrews-Pregibon Statistic.
ERIC Educational Resources Information Center
Jarrell, Michele G.
A probability distribution was developed for the Andrews-Pregibon (AP) statistic. The statistic, developed by D. F. Andrews and D. Pregibon (1978), identifies multivariate outliers. It is a ratio of the determinant of the data matrix with an observation deleted to the determinant of the entire data matrix. Although the AP statistic has been used…
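The statistic described above can be computed directly from determinants. The sketch below interprets "determinant of the data matrix" as the determinant of the cross-product matrix Z'Z, consistent with the usual definition of the AP statistic; the planted outlier and the data dimensions are illustrative assumptions.

```python
import numpy as np

def andrews_pregibon(Z):
    """AP_i = det(Z_(i)' Z_(i)) / det(Z' Z), where Z_(i) drops row i.

    Small values flag observations whose removal sharply reduces the
    generalized variance, i.e. potential multivariate outliers."""
    det_full = np.linalg.det(Z.T @ Z)
    ap = np.empty(len(Z))
    for i in range(len(Z)):
        Zi = np.delete(Z, i, axis=0)      # data matrix with observation i deleted
        ap[i] = np.linalg.det(Zi.T @ Zi) / det_full
    return ap

rng = np.random.default_rng(2)
Z = rng.standard_normal((30, 3))
Z[0] += 10.0          # plant a gross outlier in the first observation
ap = andrews_pregibon(Z)
print(ap[:3])
```

By the matrix determinant lemma, each AP_i equals 1 minus the leverage of row i, so all values lie in (0, 1] and the planted outlier produces the smallest one.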
High-dimensional statistical inference: From vector to matrix
NASA Astrophysics Data System (ADS)
Zhang, Anru
Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications and has attracted significant recent attention in many fields, including statistics, applied mathematics, and electrical engineering. In this thesis, we consider several problems, including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool that represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, or δ_{tk}^A < √((t-1)/t) for any given constant t ≥ 4/3 guarantees the exact recovery of all k-sparse signals in the noiseless case through constrained ℓ1 minimization; similarly, in affine rank minimization, δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, or δ_{tr}^M < √((t-1)/t) ensures the exact reconstruction of all matrices with rank at most r in the noiseless case via constrained nuclear norm minimization. Moreover, for any ε > 0, δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t-1)/t) + ε is not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t-1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t-1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case.
For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in the chapter also have implications for other related statistical problems. An application to the estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections. For the third part of the thesis, we consider another setting of low-rank matrix completion. The current literature on matrix completion focuses primarily on independent sampling models, under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix is observed. We provide theoretical justification for the proposed SMC method and derive a lower bound for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations.
The method is applied to integrate several ovarian cancer genomic studies with different extents of genomic measurements, which enables us to construct more accurate prediction rules for ovarian cancer survival.
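The constrained ℓ1 minimization underlying the sparse-recovery results can be cast as a linear program by splitting x into nonnegative parts, x = u - v. The sketch below solves it with SciPy's generic LP solver; the problem sizes and the Gaussian sensing matrix are illustrative assumptions, not the thesis's constructions.

```python
import numpy as np
from scipy.optimize import linprog

# Constrained l1 minimization: min ||x||_1  subject to  A x = b.
rng = np.random.default_rng(3)
m, n, k = 30, 60, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian matrices satisfy RIP w.h.p.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# LP reformulation over (u, v) with u, v >= 0 and x = u - v.
c = np.ones(2 * n)                  # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])           # A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(np.linalg.norm(x_hat - x_true))
```

Since the true signal is itself feasible, the LP optimum never exceeds its ℓ1 norm; when the restricted isometry conditions quoted above hold, the minimizer is exactly the true sparse signal.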
On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.
Li, Bing; Chun, Hyonho; Zhao, Hongyu
2014-09-01
We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix with an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.
Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa
2018-01-01
A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. RSRPCA combines the advantages of randomized column subspaces and robust principal component analysis (RPCA). It assumes that the background has low-rank properties and that the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, RSRPCA adopts columnwise RPCA (CWRPCA) to eliminate the negative effects of sampled anomaly pixels, purifying the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The inexact augmented Lagrange multiplier algorithm is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complementary subspace of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally located exactly. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods both in detection performance and in computational time.
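The low-rank/sparse decomposition at the heart of this pipeline can be sketched with a generic inexact augmented Lagrange multiplier (ALM) solver for RPCA. This is a textbook form of the optimizer the abstract names, not the authors' columnwise (CWRPCA) variant; the matrix sizes, rank-2 background, and 5% corruption level are illustrative assumptions.

```python
import numpy as np

def rpca_ialm(M, lam=None, mu=None, iters=200):
    """Inexact ALM for  min ||L||_* + lam*||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 1.25 / np.linalg.norm(M, 2)
    Y = np.zeros_like(M)              # Lagrange multiplier
    S = np.zeros_like(M)
    for _ in range(iters):
        # Low-rank update: singular value thresholding at level 1/mu
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: elementwise soft thresholding at level lam/mu
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (M - L - S)      # dual ascent on the constraint
        mu *= 1.05                    # gradually tighten the penalty
    return L, S

rng = np.random.default_rng(4)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 50))  # rank-2 background
S0 = np.zeros((60, 50))
mask = rng.random((60, 50)) < 0.05
S0[mask] = 5.0 * rng.standard_normal(int(mask.sum()))             # 5% sparse outliers
M_obs = L0 + S0
L, S = rpca_ialm(M_obs)
print(np.linalg.norm(M_obs - L - S) / np.linalg.norm(M_obs))
```

In the CWRPCA setting the ℓ1 penalty is replaced by a column-sparsity penalty so that whole anomaly columns, rather than scattered entries, are isolated.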
Statistics of resonances for a class of billiards on the Poincaré half-plane
NASA Astrophysics Data System (ADS)
Howard, P. J.; Mota-Furtado, F.; O'Mahony, P. F.; Uski, V.
2005-12-01
The lower boundary of Artin's billiard on the Poincaré half-plane is continuously deformed to generate a class of billiards with classical dynamics varying from fully integrable to completely chaotic. The quantum scattering problem in these open billiards is described, and the statistics of both real and imaginary parts of the resonant momenta are investigated. The evolution of the resonance positions is followed as the boundary is varied, which leads to large changes in their distribution. The transition to arithmetic chaos in Artin's billiard, which is responsible for the Poissonian level-spacing statistics of the bound states in the continuum (cusp forms) at the same time as the formation of a set of resonances all with width 1/4 and real parts determined by the zeros of Riemann's zeta function, is closely examined. Regimes are found which obey the universal predictions of random matrix theory (RMT) as well as exhibiting non-universal long-range correlations. The Brody parameter is used to describe the transitions between different regimes.
Disentangling giant component and finite cluster contributions in sparse random matrix spectra.
Kühn, Reimer
2016-04-01
We describe a method for disentangling giant component and finite cluster contributions to sparse random matrix spectra, using sparse symmetric random matrices defined on Erdős-Rényi graphs as an example and test bed. Our methods apply to sparse matrices defined in terms of arbitrary graphs in the configuration model class, as long as they have finite mean degree.
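A minimal numerical illustration of the test bed: the spectrum of a sparse Erdős-Rényi adjacency matrix mixes a continuous part from the giant component with sharp peaks from finite clusters, e.g. a delta at zero from isolated vertices and small trees. The size n = 400 and mean degree c = 2 below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
n, c = 400, 2.0            # n nodes, mean degree c (sparse regime)
p = c / n
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                # symmetric adjacency matrix of an Erdos-Renyi graph

eigs = np.linalg.eigvalsh(A)
# Isolated vertices contribute exact zero eigenvalues; more zeros come
# from other finite trees. The giant component supplies the continuous part.
n_zero = int(np.sum(np.abs(eigs) < 1e-10))
print(n_zero, eigs.min(), eigs.max())
```

At mean degree 2, roughly n·e^(-c) ≈ 54 vertices are isolated, so the delta peak at zero is clearly visible in a histogram of `eigs`.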
Koromantzos, Panagiotis A; Makrilakis, Konstantinos; Dereka, Xanthippi; Offenbacher, Steven; Katsilambros, Nicholas; Vrotsos, Ioannis A; Madianos, Phoebus N
2012-01-01
It is well accepted that glycemic control in patients with diabetes mellitus (DM) is affected by systemic inflammation and oxidative stress. The effect of periodontal therapy on these systemic factors may be related to improvement in glycemic status. The aim of the present study is to assess, over a period of 6 months, the effect of non-surgical periodontal therapy on serum levels of high-sensitivity C-reactive protein (hsCRP), d-8-iso prostaglandin F2a (d-8-iso) as a marker of oxidative stress, and matrix metalloproteinase (MMP)-2 and MMP-9 in patients with type 2 DM. Sixty participants with type 2 DM and moderate to severe periodontal disease were randomized into intervention (IG) and control (CG) groups. IG received scaling and root planing, whereas CG received supragingival cleaning at baseline and scaling and root planing at 6 months. Participants of both groups were evaluated at baseline and at 1, 3, and 6 months. Periodontal data recorded at each visit included probing depth, clinical attachment loss, bleeding on probing, and gingival index. Blood was collected at each visit for the assay of serum glycated hemoglobin A1c (A1c), hsCRP, d-8-iso, MMP-2, and MMP-9. Although there was a trend toward a reduction in hsCRP, d-8-iso, and MMP-9, it did not reach statistical significance. MMP-2 levels remained unchanged after periodontal treatment. Effective non-surgical periodontal treatment of participants with type 2 DM and moderate to severe periodontal disease significantly improved A1c levels but did not result in a statistically significant improvement in hsCRP, d-8-iso, MMP-2, and MMP-9 levels.
Shepherd, Neal; Greenwell, Henry; Hill, Margaret; Vidal, Ricardo; Scheetz, James P
2009-03-01
The primary aim of this randomized, controlled, blinded clinical pilot study was to compare the percentage of recession defect coverage obtained with a coronally positioned tunnel (CPT) plus an acellular dermal matrix allograft (ADM) to that of a CPT plus ADM and platelet-rich plasma (CPT/PRP) 4 months post-surgically. Eighteen patients with Miller Class I or II recession ≥3 mm at one site were treated and followed for 4 months. Nine patients received a CPT plus ADM and were considered the positive control group. The test group consisted of nine patients treated with a CPT plus ADM and PRP. Patients were randomly assigned by a coin toss to receive the test or positive control treatment. The mean recession at the initial examination for the CPT group was 3.6 ± 1.0 mm, which was reduced to 1.0 ± 1.0 mm at the 4-month examination, for a gain of 2.6 ± 1.5 mm or 70% defect coverage (P <0.05). The mean recession at the initial examination for the CPT/PRP group was 3.3 ± 0.7 mm, which was reduced to 0.4 ± 0.7 mm at the 4-month examination, for a gain of 2.9 ± 0.5 mm or 90% defect coverage (P <0.05). There were no statistically significant differences between the groups (P >0.05). The CPT plus ADM and PRP produced defect coverage of 90%, whereas the CPT with ADM produced only 70% defect coverage. This difference was not statistically significant, but it may be clinically significant.
Laplace approximation for Bessel functions of matrix argument
NASA Astrophysics Data System (ADS)
Butler, Ronald W.; Wood, Andrew T. A.
2003-06-01
We derive Laplace approximations to three functions of matrix argument which arise in statistics and elsewhere: the matrix Bessel function A_ν; the matrix Bessel function B_ν; and the type II confluent hypergeometric function of matrix argument, Ψ. We examine the theoretical and numerical properties of the approximations. On the theoretical side, it is shown that the Laplace approximations to A_ν, B_ν, and Ψ given here, together with the Laplace approximations to the matrix argument functions 1F1 and 2F1 presented in Butler and Wood (Laplace approximations to hypergeometric functions with matrix argument, Ann. Statist. (2002)), satisfy all the important confluence relations and symmetry relations enjoyed by the original functions.
Agarwal, Jayant P; Mendenhall, Shaun D; Anderson, Layla A; Ying, Jian; Boucher, Kenneth M; Liu, Ting; Neumayer, Leigh A
2015-01-01
Recent literature has focused on the advantages and disadvantages of using acellular dermal matrix in breast reconstruction. Many of the reported data are from low level-of-evidence studies, leaving many questions incompletely answered. The present randomized trial provides high-level data on the incidence and severity of complications in acellular dermal matrix breast reconstruction between two commonly used types of acellular dermal matrix. A prospective randomized trial was conducted to compare outcomes of immediate staged tissue expander breast reconstruction using either AlloDerm or DermaMatrix. The impact of body mass index, smoking, diabetes, mastectomy type, radiation therapy, and chemotherapy on outcomes was analyzed. Acellular dermal matrix biointegration was analyzed clinically and histologically. Patient satisfaction was assessed by means of preoperative and postoperative surveys. Logistic regression models were used to identify predictors of complications. This article reports on the study design, surgical technique, patient characteristics, and preoperative survey results, with outcomes data in a separate report. After 2.5 years, we successfully enrolled and randomized 128 patients (199 breasts). The majority of patients were healthy nonsmokers, with 41 percent of patients receiving radiation therapy and 49 percent receiving chemotherapy. Half of the mastectomies were prophylactic, with nipple-sparing mastectomy common in both cancer and prophylactic cases. Preoperative survey results indicate that patients were satisfied with their premastectomy breast reconstruction education. Results from the Breast Reconstruction Evaluation Using Acellular Dermal Matrix as a Sling Trial will assist plastic surgeons in making evidence-based decisions regarding acellular dermal matrix-assisted tissue expander breast reconstruction. Therapeutic, II.
A numerical approximation to the elastic properties of sphere-reinforced composites
NASA Astrophysics Data System (ADS)
Segurado, J.; Llorca, J.
2002-10-01
Three-dimensional cubic unit cells containing 30 non-overlapping identical spheres randomly distributed were generated using a new, modified random sequential adsorption algorithm suitable for particle volume fractions of up to 50%. The elastic constants of the ensemble of spheres embedded in a continuous and isotropic elastic matrix were computed through the finite element analysis of the three-dimensional periodic unit cells, whose size was chosen as a compromise between the minimum size required to obtain accurate results in the statistical sense and the maximum one imposed by the computational cost. Three types of materials were studied: rigid spheres and spherical voids in an elastic matrix, and a typical composite made up of glass spheres in an epoxy resin. The moduli obtained for different unit cells showed very little scatter, and the average values obtained from the analysis of four unit cells could be considered very close to the "exact" solution to the problem, in agreement with the results of Drugan and Willis (J. Mech. Phys. Solids 44 (1996) 497) referring to the size of the representative volume element for elastic composites. They were used to assess the accuracy of three classical analytical models: the Mori-Tanaka mean-field analysis, the generalized self-consistent method, and Torquato's third-order approximation.
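Of the three analytical models mentioned, the Mori-Tanaka estimate has a compact closed form for the effective bulk modulus when the inclusions are spherical (for stiff spheres it coincides with the Hashin-Shtrikman lower bound). The sketch below uses this standard textbook expression; the glass and epoxy moduli are illustrative values, not those used in the paper.

```python
import numpy as np

def mori_tanaka_bulk(K_m, G_m, K_i, f):
    """Mori-Tanaka effective bulk modulus of an isotropic matrix
    (bulk K_m, shear G_m) containing a volume fraction f of spherical
    inclusions with bulk modulus K_i (standard textbook form)."""
    denom = K_m + 4.0 * G_m / 3.0 + (1.0 - f) * (K_i - K_m)
    return K_m + f * (K_i - K_m) * (K_m + 4.0 * G_m / 3.0) / denom

# Glass spheres in epoxy; moduli in GPa (illustrative values)
K_epoxy, G_epoxy, K_glass = 4.0, 1.2, 43.0
fractions = np.linspace(0.0, 0.5, 6)
K_eff = [mori_tanaka_bulk(K_epoxy, G_epoxy, K_glass, f) for f in fractions]
print([round(k, 2) for k in K_eff])
```

The estimate interpolates between the matrix modulus at f = 0 and the inclusion modulus at f = 1, which is the behavior the finite element results are compared against.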
Fee, Timothy; Surianarayanan, Swetha; Downs, Crawford; Zhou, Yong; Berry, Joel
2016-01-01
To examine the influence of substrate topology on the behavior of fibroblasts, tissue engineering scaffolds were electrospun from polycaprolactone (PCL) and a blend of PCL and gelatin (PCL+Gel) to produce matrices with both random and aligned nanofibrous orientations. The addition of gelatin to the scaffold was shown to increase the hydrophilicity of the PCL matrix and to increase the proliferation of NIH3T3 cells compared to scaffolds of PCL alone. The orientation of nanofibers within the matrix did not have an effect on the proliferation of adherent cells, but cells on aligned substrates were shown to elongate and align parallel to the direction of substrate fiber alignment. A microarray of cytoskeleton regulators was probed to examine differences in gene expression between cells grown on aligned and randomly oriented substrates. It was found that the transcriptional expression of eight genes differed statistically between the two conditions, with all of them being upregulated in the aligned condition. The proteins encoded by these genes are linked to the production and polymerization of actin microfilaments, as well as focal adhesion assembly. Taken together, the data indicate that NIH3T3 fibroblasts on aligned substrates align themselves parallel to their substrate and increase production of actin- and focal adhesion-related genes.
Partial transpose of random quantum states: Exact formulas and meanders
NASA Astrophysics Data System (ADS)
Fukuda, Motohisa; Śniady, Piotr
2013-04-01
We investigate the asymptotic behavior of the empirical eigenvalue distribution of the partial transpose of a random quantum state. The limiting distribution was previously investigated via Wishart random matrices indirectly (by approximating the matrix of trace 1 by a Wishart matrix of random trace) and shown to be the semicircular distribution or the free difference of two free Poisson distributions, depending on how the dimensions of the concerned spaces grow. Our use of Wishart matrices gives exact combinatorial formulas for the moments of the partial transpose of the random state. We find three natural asymptotic regimes in terms of geodesics on the permutation groups. Two of them correspond to the above two cases; the third one turns out to be a new matrix model for the meander polynomials. Moreover, we prove the convergence to the semicircular distribution together with its extreme eigenvalues under weaker assumptions, and show a large deviation bound for the latter.
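The object of study is easy to simulate: draw an induced random state on C^d ⊗ C^d, transpose one tensor factor, and look at the spectrum. The dimensions d = 12 and environment size s = d² below are illustrative choices for the balanced regime, where the rescaled spectrum approaches a (shifted) semicircle and a fraction of the eigenvalues is negative.

```python
import numpy as np

rng = np.random.default_rng(6)
d, s = 12, 144             # bipartite system C^d (x) C^d, environment dim s

# Induced random state: rho = G G^dag / Tr(G G^dag), with G complex Ginibre.
G = (rng.standard_normal((d * d, s)) + 1j * rng.standard_normal((d * d, s))) / np.sqrt(2)
W = G @ G.conj().T
rho = W / np.trace(W).real

# Partial transpose on the second tensor factor:
# rho^{T_B}[(i,j),(k,l)] = rho[(i,l),(k,j)], i.e. swap axes 1 and 3.
rho_pt = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)

eigs = np.linalg.eigvalsh(rho_pt)  # rho^{T_B} is Hermitian, so real spectrum
print(eigs.sum(), eigs.min())
```

The partial transpose preserves the trace, so the eigenvalues still sum to one, but it does not preserve positivity: the negative eigenvalues here are exactly the entanglement witness exploited by the PPT criterion.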
QCD-inspired spectra from Blue's functions
NASA Astrophysics Data System (ADS)
Nowak, Maciej A.; Papp, Gábor; Zahed, Ismail
1996-02-01
We use the law of addition in random matrix theory to analyze the spectral distributions of a variety of chiral random matrix models as inspired by QCD, whether through symmetries or models. In terms of the Blue's functions recently discussed by Zee, we show that most of the spectral distributions in the macroscopic limit and the quenched approximation follow algebraically from the discontinuity of a pertinent solution to a cubic (Cardano) or a quartic (Ferrari) equation. We use the end-point equation of the energy spectra in chiral random matrix models to argue for novel phase structures, in which the Dirac density of states plays the role of an order parameter.
Social patterns revealed through random matrix theory
NASA Astrophysics Data System (ADS)
Sarkar, Camellia; Jalan, Sarika
2014-11-01
Despite the tremendous advancements in the field of network theory, very few studies have taken into consideration the weights in interactions that emerge naturally in all real-world systems. Using random matrix analysis of a weighted social network, we demonstrate the profound impact of weights in interactions on emerging structural properties. The analysis reveals that randomness existing in a particular time frame affects the decisions of individuals, granting them more freedom of choice in situations of financial security. While the structural organization of the networks remains the same throughout all datasets, random matrix theory provides insight into the interaction patterns of individuals in society in situations of crisis. It has also been suggested that individual accountability in terms of weighted interactions remains a key to success unless segregation of tasks comes into play.
Zhang, Du; Su, Neil Qiang; Yang, Weitao
2017-07-20
The GW self-energy, especially G0W0 based on the particle-hole random phase approximation (phRPA), is widely used to study quasiparticle (QP) energies. Motivated by the desirable features of the particle-particle (pp) RPA compared to the conventional phRPA, we explore the pp counterpart of GW, that is, the T-matrix self-energy, formulated with the eigenvectors and eigenvalues of the ppRPA matrix. We demonstrate the accuracy of the T-matrix method for molecular QP energies, highlighting the importance of the pp channel for calculating QP spectra.
EvolQG - An R package for evolutionary quantitative genetics
Melo, Diogo; Garcia, Guilherme; Hubbe, Alex; Assis, Ana Paula; Marroig, Gabriel
2016-01-01
We present an open source package for performing evolutionary quantitative genetics analyses in the R environment for statistical computing. Evolutionary theory shows that evolution depends critically on the available variation in a given population. When dealing with many quantitative traits this variation is expressed in the form of a covariance matrix, particularly the additive genetic covariance matrix or sometimes the phenotypic matrix, when the genetic matrix is unavailable and there is evidence the phenotypic matrix is sufficiently similar to the genetic matrix. Given this mathematical representation of available variation, the EvolQG package provides functions for calculation of relevant evolutionary statistics; estimation of sampling error; corrections for this error; matrix comparison via correlations, distances and matrix decomposition; analysis of modularity patterns; and functions for testing evolutionary hypotheses on taxa diversification. PMID:27785352
NASA Astrophysics Data System (ADS)
Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing
2017-11-01
Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution to update the CS measurement matrix by altering the secret sparse basis with the help of counter mode operation. Particularly, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with the conventional CS-based cryptosystem, which generates all the random entries of the measurement matrix anew, our scheme is more efficient while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has good security performance and is robust against noise and occlusion.
Randomized Dynamic Mode Decomposition
NASA Astrophysics Data System (ADS)
Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan
2017-11-01
The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with `big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
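The idea described above — sketch the data with a random projection, then run standard DMD on the small matrix — can be illustrated as follows (a minimal numpy sketch under our own naming, not the authors' reference implementation):

```python
import numpy as np

def randomized_dmd(X, Y, rank, oversample=10, n_power=2, seed=0):
    """Randomized DMD sketch for paired snapshot matrices with Y ≈ A X.

    A Gaussian random sketch (with power iterations for accuracy) captures
    the dominant column space of X; DMD is then computed on the projected,
    much smaller matrices. Returns approximate DMD eigenvalues and modes.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    Omega = rng.standard_normal((n, rank + oversample))  # random test matrix
    Z = X @ Omega
    for _ in range(n_power):             # power iterations sharpen the sketch
        Z = X @ (X.T @ Z)
    Q, _ = np.linalg.qr(Z)               # orthonormal basis for the sketch
    Xs, Ys = Q.T @ X, Q.T @ Y            # project snapshots onto the basis
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    Atilde = U.T @ Ys @ Vt.T @ np.diag(1.0 / s)     # small projected operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = Q @ (Ys @ Vt.T @ np.diag(1.0 / s) @ W)  # lift modes back up
    return eigvals, modes
```

For a linear system with a known low-rank spectrum, the recovered eigenvalues match the dominant eigenvalues of the underlying operator; `oversample` and `n_power` trade computation for approximation quality, as in the text.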
Telfeyan, Katherine Christina; Ware, Stuart Doug; Reimus, Paul William; ...
2018-01-31
Here, diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating effective matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of effective matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than effective matrix diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields effective matrix diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.
Polarimetric signatures of a coniferous forest canopy based on vector radiative transfer theory
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.; Amar, F.; Mougin, E.; Lopes, A.; Beaudoin, A.
1992-01-01
Complete polarization signatures of a coniferous forest canopy are studied by the iterative solution of the vector radiative transfer equations up to the second order. The forest canopy constituents (leaves, branches, stems, and trunk) are embedded in a multi-layered medium over a rough interface. The branches, stems and trunk scatterers are modeled as finite randomly oriented cylinders. The leaves are modeled as randomly oriented needles. For a plane wave exciting the canopy, the average Mueller matrix is formulated in terms of the iterative solution of the radiative transfer solution and used to determine the linearly polarized backscattering coefficients, the co-polarized and cross-polarized power returns, and the phase difference statistics. Numerical results are presented to investigate the effect of transmitting and receiving antenna configurations on the polarimetric signature of a pine forest. Comparison is made with measurements.
NASA Astrophysics Data System (ADS)
He, Honghui; Dong, Yang; Zhou, Jialing; Ma, Hui
2017-03-01
As one of the salient features of light, polarization contains abundant structural and optical information about media. Recently, as a comprehensive description of polarization properties, Mueller matrix polarimetry has been applied to various biomedical studies such as cancerous tissue detection. In previous works, it has been found that the structural information encoded in the 2D Mueller matrix images can be presented by other transformed parameters with a more explicit relationship to certain microstructural features. In this paper, we present a statistical analysis method to transform the 2D Mueller matrix images into frequency distribution histograms (FDHs) and their central moments to reveal the dominant structural features of samples quantitatively. The experimental results of porcine heart, intestine, stomach, and liver tissues demonstrate that the transformation parameters and central moments based on the statistical analysis of Mueller matrix elements have simple relationships to the dominant microstructural properties of biomedical samples, including the density and orientation of fibrous structures, the depolarization power, and the diattenuation and absorption abilities. It is shown in this paper that the statistical analysis of 2D images of Mueller matrix elements may provide quantitative or semi-quantitative criteria for biomedical diagnosis.
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan J.; Walton, Owen J.; Arnold, Steven M.
2016-01-01
Stochastic-based, discrete-event progressive damage simulations of ceramic-matrix composite and polymer matrix composite material structures have been enabled through the development of a unique multiscale modeling tool. This effort involves coupling three independently developed software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis (FEA) program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC. Abaqus is used at the global scale to model the overall composite structure. An Abaqus user-defined material (UMAT) interface, referred to here as "FEAMAC/CARES," was developed that enables MAC/GMC and CARES/Life to operate seamlessly with the Abaqus FEA code. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events, which incrementally progress and lead to ultimate structural failure. This report describes the FEAMAC/CARES methodology and discusses examples that illustrate the performance of the tool. A comprehensive example problem, simulating the progressive damage of laminated ceramic matrix composites under various off-axis loading conditions and including a double notched tensile specimen geometry, is described in a separate report.
Dynamic laser speckle analyzed considering inhomogeneities in the biological sample
NASA Astrophysics Data System (ADS)
Braga, Roberto A.; González-Peña, Rolando J.; Viana, Dimitri Campos; Rivera, Fernando Pujaico
2017-04-01
The dynamic laser speckle phenomenon allows a contactless and nondestructive way to monitor biological changes that are quantified by second-order statistics applied to the images in time, using a secondary matrix known as the time history of the speckle pattern (THSP). To save processing time, the traditional way to build the THSP restricts the data to a line or column. Our hypothesis is that this spatial restriction of the information could compromise the results, particularly when undesirable and unexpected optical inhomogeneities occur, such as in cell culture media. We tested a spatially random approach to collect the points that form a THSP. Cells in a culture medium and drying paint, representing homogeneous samples at different levels, were tested, and a comparison with the traditional method was carried out. An alternative random selection based on a Gaussian distribution around a desired position was also presented. The results showed that the traditional protocol presented higher variation than the outcomes of the random method. The higher the inhomogeneity of the activity map, the higher the efficiency of the proposed method using random points. The Gaussian distribution proved to be useful when there was a well-defined area to monitor.
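The random-point THSP construction that the paper compares against the traditional line/column protocol can be sketched like this (our own minimal numpy illustration; the frame-stack layout and function name are assumptions):

```python
import numpy as np

def thsp_random(frames, n_points=64, seed=0):
    """Time history of the speckle pattern from randomly chosen pixels.

    `frames` is a (time, height, width) stack of speckle images. Instead of
    a fixed line or column, pixel positions are drawn uniformly at random;
    each row of the result is one pixel's intensity time series.
    """
    rng = np.random.default_rng(seed)
    n_frames, height, width = frames.shape
    rows = rng.integers(0, height, n_points)
    cols = rng.integers(0, width, n_points)
    return frames[:, rows, cols].T          # shape (n_points, n_frames)
```

The Gaussian variant described in the abstract would simply replace the uniform draws with `rng.normal` samples around a chosen position, clipped to the image bounds.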
Chaotic oscillation and random-number generation based on nanoscale optical-energy transfer.
Naruse, Makoto; Kim, Song-Ju; Aono, Masashi; Hori, Hirokazu; Ohtsu, Motoichi
2014-08-12
By using nanoscale energy-transfer dynamics and density matrix formalism, we demonstrate theoretically and numerically that chaotic oscillation and random-number generation occur in a nanoscale system. The physical system consists of a pair of quantum dots (QDs), with one QD smaller than the other, between which energy transfers via optical near-field interactions. When the system is pumped by continuous-wave radiation and incorporates a timing delay between two energy transfers within the system, it emits optical pulses. We refer to such QD pairs as nano-optical pulsers (NOPs). Irradiating an NOP with external periodic optical pulses causes the oscillating frequency of the NOP to synchronize with the external stimulus. We find that chaotic oscillation occurs in the NOP population when they are connected by an external time delay. Moreover, by evaluating the time-domain signals by statistical-test suites, we confirm that the signals are sufficiently random to qualify the system as a random-number generator (RNG). This study reveals that even relatively simple nanodevices that interact locally with each other through optical energy transfer at scales far below the wavelength of irradiating light can exhibit complex oscillatory dynamics. These findings are significant for applications such as ultrasmall RNGs.
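A representative member of the statistical-test suites mentioned above is the monobit (frequency) test; its standard form fits in a few lines of stdlib Python (a sketch of the textbook test, not the authors' evaluation pipeline):

```python
import math

def monobit_pvalue(bits):
    """Monobit frequency test: p-value under the null hypothesis that the
    bits are i.i.d. fair coin flips. Very small p-values indicate bias,
    i.e. the sequence fails this (necessary, far from sufficient) check."""
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits))   # excess of ones over zeros
    return math.erfc(s / math.sqrt(2.0 * n))
```

Full suites combine many such tests (runs, spectral, entropy estimates); a candidate RNG must pass all of them at a chosen significance level.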
NASA Astrophysics Data System (ADS)
Olekhno, N. A.; Beltukov, Y. M.
2018-05-01
Random impedance networks are widely used as a model to describe plasmon resonances in disordered metal-dielectric and other two-component nanocomposites. In the present work, the spectral properties of resonances in random networks are studied within the framework of random matrix theory. We show that the appropriate ensemble of random matrices for the considered problem is the Jacobi ensemble (the MANOVA ensemble). The obtained analytical expressions for the density of states in such resonant networks show good agreement with the results of numerical simulations over a wide range of metal filling fractions.
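The Jacobi (MANOVA) ensemble identified in this work is easy to sample numerically: take two independent Wishart matrices and solve the associated generalized eigenproblem (a standard construction sketched here with numpy; the parameter choices in the test are illustrative):

```python
import numpy as np

def jacobi_manova_eigs(n, m1, m2, seed=0):
    """Eigenvalues of W1 (W1 + W2)^{-1} for independent Wishart matrices
    W1 = X X^T, W2 = Y Y^T — the Jacobi/MANOVA ensemble. All eigenvalues
    lie in [0, 1]; requires m1 + m2 >= n so that W1 + W2 is invertible.
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, m1))
    Y = rng.standard_normal((n, m2))
    W1, W2 = X @ X.T, Y @ Y.T
    # Symmetrize the generalized problem W1 v = λ (W1 + W2) v via Cholesky
    Linv = np.linalg.inv(np.linalg.cholesky(W1 + W2))
    return np.linalg.eigvalsh(Linv @ W1 @ Linv.T)
```

Histogramming eigenvalues over many draws gives the ensemble density of states that the analytical expressions in the paper describe.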
Filipiak, Katarzyna; Klein, Daniel; Roy, Anuradha
2017-01-01
The problem of testing the separability of a covariance matrix against an unstructured variance-covariance matrix is studied in the context of multivariate repeated measures data using Rao's score test (RST). The RST statistic is developed with the first component of the separable structure as a first-order autoregressive (AR(1)) correlation matrix or an unstructured (UN) covariance matrix under the assumption of multivariate normality. It is shown that the distribution of the RST statistic under the null hypothesis of either separability does not depend on the true values of the mean or the unstructured components of the separable structure. A significant advantage of the RST is that it can be performed for small samples, even smaller than the dimension of the data, where the likelihood ratio test (LRT) cannot be used, and it outperforms the standard LRT in a number of contexts. Monte Carlo simulations are then used to study the comparative behavior of the null distribution of the RST statistic, as well as that of the LRT statistic, in terms of sample size considerations, and for the estimation of the empirical percentiles. Our findings are compared with existing results where the first component of the separable structure is a compound symmetry (CS) correlation matrix. It is also shown by simulations that the empirical null distribution of the RST statistic converges faster than the empirical null distribution of the LRT statistic to the limiting χ² distribution. The tests are implemented on a real dataset from medical studies. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Wavelet analysis of biological tissue's Mueller-matrix images
NASA Astrophysics Data System (ADS)
Tomka, Yu. Ya.
2008-05-01
The interrelations between the statistics of the 1st-4th orders of the ensemble of Mueller-matrix images and the geometric structure of birefringent architectonic nets of different morphological structure have been analyzed. The sensitivity of the asymmetry and excess of the statistical distributions of the matrix elements Cik to changes in the orientation structure of optically anisotropic protein fibrils of physiologically normal and pathologically changed biological tissue architectonics has been shown.
Near-optimal matrix recovery from random linear measurements.
Romanov, Elad; Gavish, Matan
2018-06-25
In matrix recovery from random linear measurements, one is interested in recovering an unknown M-by-N matrix [Formula: see text] from [Formula: see text] measurements [Formula: see text], where each [Formula: see text] is an M-by-N measurement matrix with i.i.d. random entries, [Formula: see text]. We present a matrix recovery algorithm, based on approximate message passing, which iteratively applies an optimal singular-value shrinker, a nonconvex nonlinearity tailored specifically for matrix estimation. Our algorithm typically converges exponentially fast, offering a significant speedup over previously suggested matrix recovery algorithms, such as iterative solvers for nuclear norm minimization (NNM). It is well known that there is a recovery tradeoff between the information content of the object [Formula: see text] to be recovered (specifically, its matrix rank r) and the number of linear measurements n from which recovery is to be attempted. The precise tradeoff between r and n, beyond which recovery by a given algorithm becomes possible, traces the so-called phase transition curve of that algorithm in the [Formula: see text] plane. The phase transition curve of our algorithm is noticeably better than that of NNM. Interestingly, it is close to the information-theoretic lower bound for the minimal number of measurements needed for matrix recovery, making it not only state of the art in terms of convergence rate, but also near optimal in terms of the matrices it successfully recovers. Copyright © 2018 the Author(s). Published by PNAS.
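The core nonlinearity of such shrinkage-based estimators — apply a scalar rule to the singular values and reassemble the matrix — can be sketched independently of the message-passing outer loop (a generic numpy illustration with a simple hard-threshold rule, not the paper's optimal shrinker):

```python
import numpy as np

def svd_shrink(Y, shrinker):
    """Apply a scalar shrinkage rule to the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(shrinker(s)) @ Vt

def hard_threshold(s, t=1.0):
    """A simple (suboptimal) shrinker: zero out singular values below t."""
    return np.where(s > t, s, 0.0)
```

When the signal's singular values sit well above the noise level, even this crude shrinker recovers the low-rank matrix accurately; the optimal shrinker in the paper improves on it near the detection threshold.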
NASA Astrophysics Data System (ADS)
Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun
2018-03-01
In this paper, a novel image encryption algorithm based on synchronization of physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and a security analysis is performed. In both the transmitter and receiver parts, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bits with verified randomness are generated based on chaos in the CCSRL system and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and random bit stream are used by the encryption algorithm to change the positions and values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and thus is an excellent candidate for secure image communication applications.
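The pixel-confusion step — permuting pixel positions under control of random bits — can be sketched generically (this is our own toy illustration seeded from a byte string, not the paper's control-matrix construction):

```python
import numpy as np

def confuse_pixels(img, random_bytes):
    """Pixel confusion: derive a permutation of pixel positions from a
    stream of random bytes (here by seeding a generator with the first
    8 bytes) and scatter the pixels accordingly. Fully reversible."""
    seed = int.from_bytes(bytes(random_bytes[:8]), "little")
    perm = np.random.default_rng(seed).permutation(img.size)
    return img.ravel()[perm].reshape(img.shape), perm

def deconfuse_pixels(out, perm):
    """Invert the confusion step given the same permutation."""
    inv = np.empty_like(perm)
    inv[perm] = np.arange(perm.size)   # inverse permutation
    return out.ravel()[inv].reshape(out.shape)
```

Confusion only relocates pixel values; the diffusion step described in the abstract additionally changes the values themselves (e.g. by XOR with the random bit stream), so that a one-pixel plaintext change spreads through the ciphertext.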
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altmeyer, Michaela; Guterding, Daniel; Hirschfeld, P. J.
2016-12-21
In the framework of a multiorbital Hubbard model description of superconductivity, a widely used matrix formulation of the superconducting pairing interaction is designed to treat spin, charge, and orbital fluctuations within a random phase approximation (RPA). In terms of Feynman diagrams, this takes into account particle-hole ladder and bubble contributions, as expected. It turns out, however, that this matrix formulation also generates additional terms which have the diagrammatic structure of vertex corrections. We examine these terms and discuss the relationship between the matrix-RPA superconducting pairing interaction and the Feynman diagrams that it sums.
Structure of a financial cross-correlation matrix under attack
NASA Astrophysics Data System (ADS)
Lim, Gyuchang; Kim, SooYong; Kim, Junghwan; Kim, Pyungsoo; Kang, Yoonjong; Park, Sanghoon; Park, Inho; Park, Sang-Bum; Kim, Kyungsik
2009-09-01
We investigate the structure of a perturbed stock market in terms of correlation matrices. For the purpose of perturbing a stock market, two distinct methods are used, namely local and global perturbation. The former involves replacing a correlation coefficient of the cross-correlation matrix with one calculated from two Gaussian-distributed time series, while the latter reconstructs the cross-correlation matrix after replacing the original return series with Gaussian-distributed time series. The local case is a technical study only, with no attempt to model reality; the term ‘global’ refers to the overall effect of the replacement on the other, untouched returns. Through statistical analyses such as random matrix theory (RMT), network theory, and the correlation coefficient distributions, we show that the global structure of a stock market is vulnerable to perturbation. However, except in the analysis of inverse participation ratios (IPRs), the vulnerability is muted under a small-scale perturbation. This means that these analysis tools are inappropriate for monitoring the whole stock market, due to the low sensitivity of a stock market to a small-scale perturbation. In contrast, when going down to the structure of business sectors, we confirm that correlation-based business sectors are regrouped in terms of IPRs. This result gives a clue about monitoring the effect of hidden intentions, which are revealed via portfolios taken mostly by large investors.
Random walks with long-range steps generated by functions of Laplacian matrices
NASA Astrophysics Data System (ADS)
Riascos, A. P.; Michelitsch, T. M.; Collet, B. A.; Nowakowski, A. F.; Nicolleau, F. C. G. A.
2018-04-01
In this paper, we explore different Markovian random walk strategies on networks with transition probabilities between nodes defined in terms of functions of the Laplacian matrix. We generalize random walk strategies with local information in the Laplacian matrix, which describes the connections of a network, to dynamics determined by functions of this matrix. The resulting processes are non-local, allowing transitions of the random walker from one node to nodes beyond its nearest neighbors. We find that only two types of Laplacian functions are admissible, with distinct behaviors for long-range steps in the infinite network limit: type (i) functions generate Brownian motions, type (ii) functions Lévy flights. For this asymptotic long-range step behavior only the lowest non-vanishing order of the Laplacian function is relevant, namely first order for type (i) functions and fractional order for type (ii) functions. In the first part, we discuss spectral properties of the Laplacian matrix and a series of relations that are maintained by a particular class of functions, which allow random walks to be defined on any undirected connected network. Having described these general properties, we explore the characteristics of random walk strategies that emerge in particular cases with functions defined in terms of exponentials, logarithms and powers of the Laplacian, as well as the relations of these dynamics with non-local strategies like Lévy flights and fractional transport. Finally, we analyze the global capacity of these random walk strategies to explore networks such as lattices, trees, and different types of random and complex networks.
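For the fractional-power case, the transition probabilities take the form P_ij = -(L^α)_ij / (L^α)_ii for i ≠ j, which is directly computable by spectral decomposition (a numpy sketch of this standard construction; the eight-node ring used in the test is illustrative):

```python
import numpy as np

def fractional_transition_matrix(A, alpha=0.5):
    """Transition matrix of the fractional random walk on an undirected
    connected graph with adjacency matrix A: off-diagonal entries are
    -(L^alpha)_ij normalized by the generalized degree (L^alpha)_ii.
    For 0 < alpha < 1 these entries are nonnegative and allow long-range
    steps beyond nearest neighbors."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                      # graph Laplacian (symmetric PSD)
    evals, evecs = np.linalg.eigh(L)
    # L^alpha via the spectral decomposition; clip tiny negative round-off
    Lalpha = evecs @ np.diag(np.clip(evals, 0.0, None) ** alpha) @ evecs.T
    k_alpha = np.diag(Lalpha).copy()          # generalized degrees
    P = -Lalpha / k_alpha[:, None]
    np.fill_diagonal(P, 0.0)
    return P
```

Rows sum to one automatically because each row of L^α sums to zero (the constant vector stays an eigenvector with eigenvalue 0^α = 0).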
Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing
NASA Astrophysics Data System (ADS)
Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei
2018-04-01
We study the problem of recovering an $s$-sparse signal $\mathbf{x}^{\star}\in\mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^{\star}+\mathbf{z}^{\star}+\mathbf{w}$, where $\mathbf{z}^{\star}\in\mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\mathbf{w}\in\mathbb{C}^m$ is a dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix and a bounded columnwise orthonormal matrix (e.g., a partial random circulant matrix). When the UTF is bounded (i.e. $\mu(\mathbf{U})\sim 1/\sqrt{m}$), we prove that with high probability, one can recover an $s$-sparse signal exactly and stably by $l_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \mathcal{O}(s \log^2 s \log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers a randomly sub-sampled orthogonal matrix (e.g., a random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse in a certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.
Singular Behavior of the Leading Lyapunov Exponent of a Product of Random 2 × 2 Matrices
NASA Astrophysics Data System (ADS)
Genovese, Giuseppe; Giacomin, Giambattista; Greenblatt, Rafael Leon
2017-05-01
We consider a certain infinite product of random 2 × 2 matrices appearing in the solution of some 1 and 1 + 1 dimensional disordered models in statistical mechanics, which depends on a parameter ε > 0 and on a real random variable with distribution μ. For a large class of μ, we prove the prediction by Derrida and Hilhorst (J Phys A 16:2641, 1983) that the Lyapunov exponent behaves like C ε^{2α} in the limit ε ↘ 0, where α ∈ (0,1) and C > 0 are determined by μ. Derrida and Hilhorst performed a two-scale analysis of the integral equation for the invariant distribution of the Markov chain associated to the matrix product and obtained a probability measure that is expected to be close to the invariant one for small ε. We introduce suitable norms and exploit contractivity properties to show that such a probability measure is indeed close to the invariant one, in a sense that implies a suitable control of the Lyapunov exponent.
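The Lyapunov exponent of such matrix products is straightforward to estimate numerically by renormalized iteration (a generic numpy sketch; the ensembles in the test are illustrative, not the Derrida-Hilhorst model itself):

```python
import numpy as np

def lyapunov_exponent(sample_matrix, n_steps=100000, seed=0):
    """Estimate the leading Lyapunov exponent
    λ = lim (1/n) log ||M_n ... M_1 v|| for i.i.d. random 2x2 matrices
    drawn by sample_matrix(rng). Renormalizing v at every step keeps the
    iteration numerically stable while accumulating the log of the growth."""
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 0.0])
    log_norm = 0.0
    for _ in range(n_steps):
        v = sample_matrix(rng) @ v
        nrm = np.linalg.norm(v)
        log_norm += np.log(nrm)
        v /= nrm
    return log_norm / n_steps
```

For a deterministic "ensemble" the estimate reduces to the log of the spectral radius, which gives a convenient sanity check.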
Seven lessons from manyfield inflation in random potentials
NASA Astrophysics Data System (ADS)
Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David
2018-01-01
We study inflation in models with many interacting fields subject to randomly generated scalar potentials. We use methods from non-equilibrium random matrix theory to construct the potentials and an adaptation of the `transport method' to evolve the two-point correlators during inflation. This construction allows, for the first time, an explicit study of models with up to 100 interacting fields supporting a period of `approximately saddle-point' inflation. We determine the statistical predictions for observables by generating over 30,000 models with 2–100 fields supporting at least 60 efolds of inflation. These studies lead us to seven lessons: i) Manyfield inflation is not single-field inflation, ii) The larger the number of fields, the simpler and sharper the predictions, iii) Planck compatibility is not rare, but future experiments may rule out this class of models, iv) The smoother the potentials, the sharper the predictions, v) Hyperparameters can transition from stiff to sloppy, vi) Despite tachyons, isocurvature can decay, vii) Eigenvalue repulsion drives the predictions. We conclude that many of the `generic predictions' of single-field inflation can be emergent features of complex inflation models.
Estimation of genetic parameters related to eggshell strength using random regression models.
Guo, J; Ma, M; Qu, L; Shen, M; Dou, T; Wang, K
2015-01-01
This study examined the changes in eggshell strength and the genetic parameters related to this trait throughout a hen's laying life using random regression. The data were collected from a crossbred population between 2011 and 2014, where the eggshell strength was determined repeatedly for 2260 hens. Using random regression models (RRMs), several Legendre polynomials were employed to estimate the fixed, direct genetic and permanent environment effects. The residual effects were treated as independently distributed with heterogeneous variance for each test week. The direct genetic variance was included with second-order Legendre polynomials and the permanent environment with third-order Legendre polynomials. The heritability of eggshell strength ranged from 0.26 to 0.43, the repeatability ranged between 0.47 and 0.69, and the estimated genetic correlations between test weeks were high (> 0.67). The first eigenvalue of the genetic covariance matrix accounted for about 97% of the sum of all the eigenvalues. The flexibility and statistical power of RRMs suggest that this model could be an effective method to improve eggshell quality and to reduce losses due to cracked eggs in a breeding plan.
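The Legendre-polynomial covariates that random regression models use can be built directly with numpy (a sketch of the standard construction; note that breeding applications often use a normalized variant of the polynomials, omitted here):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(weeks, order):
    """Design-matrix columns for a random regression model: Legendre
    polynomials P_0..P_order evaluated at test weeks rescaled to the
    standard interval [-1, 1]."""
    w = np.asarray(weeks, dtype=float)
    t = -1.0 + 2.0 * (w - w.min()) / (w.max() - w.min())   # rescale ages
    return np.column_stack([legendre.Legendre.basis(k)(t)
                            for k in range(order + 1)])
```

Each animal's genetic and permanent-environment effects are then modeled as random regression coefficients on these columns, which is what lets heritability vary smoothly over the laying period.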
Anomalous Anticipatory Responses in Networked Random Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Roger D.; Bancel, Peter A.
2006-10-16
We examine an 8-year archive of synchronized, parallel time series of random data from a world-spanning network of physical random event generators (REGs). The archive is a publicly accessible matrix of normally distributed 200-bit sums recorded at 1 Hz which extends from August 1998 to the present. The primary question is whether these data show non-random structure associated with major events such as natural or man-made disasters, terrible accidents, or grand celebrations. Secondarily, we examine the time course of apparently correlated responses. Statistical analyses of the data reveal consistent evidence that events which strongly affect people engender small but significant effects. These include suggestions of anticipatory responses in some cases, leading to a series of specialized analyses to assess possible non-random structure preceding precisely timed events. A focused examination of data collected around the time of earthquakes with Richter magnitude 6 and greater reveals non-random structure with a number of intriguing, potentially important features. Anomalous effects in the REG data are seen only when the corresponding earthquakes occur in populated areas. No structure is found if they occur in the oceans. We infer that an important contributor to the effect is the relevance of the earthquake to humans. Epoch averaging reveals evidence for changes in the data some hours prior to the main temblor, suggestive of reverse causation.
JOURNAL SCOPE GUIDELINES: Paper classification scheme
NASA Astrophysics Data System (ADS)
2005-06-01
This scheme is used to clarify the journal's scope and enable authors and readers to more easily locate the appropriate section for their work. For each of the sections listed in the scope statement we suggest some more detailed subject areas which help define that subject area. These lists are by no means exhaustive and are intended only as a guide to the type of papers we envisage appearing in each section. We acknowledge that no classification scheme can be perfect and that there are some papers which might be placed in more than one section. We are happy to provide further advice on paper classification to authors upon request (please email jphysa@iop.org).
1. Statistical physics: numerical and computational methods; statistical mechanics, phase transitions and critical phenomena; quantum condensed matter theory; Bose-Einstein condensation; strongly correlated electron systems; exactly solvable models in statistical mechanics; lattice models, random walks and combinatorics; field-theoretical models in statistical mechanics; disordered systems, spin glasses and neural networks; nonequilibrium systems; network theory.
2. Chaotic and complex systems: nonlinear dynamics and classical chaos; fractals and multifractals; quantum chaos; classical and quantum transport; cellular automata; granular systems and self-organization; pattern formation; biophysical models.
3. Mathematical physics: combinatorics; algebraic structures and number theory; matrix theory; classical and quantum groups, symmetry and representation theory; Lie algebras, special functions and orthogonal polynomials; ordinary and partial differential equations; difference and functional equations; integrable systems; soliton theory; functional analysis and operator theory; inverse problems; geometry, differential geometry and topology; numerical approximation and analysis; geometric integration; computational methods.
4. Quantum mechanics and quantum information theory: coherent states; eigenvalue problems; supersymmetric quantum mechanics; scattering theory; relativistic quantum mechanics; semiclassical approximations; foundations of quantum mechanics and measurement theory; entanglement and quantum nonlocality; geometric phases and quantum tomography; quantum tunnelling; decoherence and open systems; quantum cryptography, communication and computation; theoretical quantum optics.
5. Classical and quantum field theory: quantum field theory; gauge and conformal field theory; quantum electrodynamics and quantum chromodynamics; Casimir effect; integrable field theory; random matrix theory applications in field theory; string theory and its developments; classical field theory and electromagnetism; metamaterials.
6. Fluid and plasma theory: turbulence; fundamental plasma physics; kinetic theory; magnetohydrodynamics and multifluid descriptions; strongly coupled plasmas; one-component plasmas; non-neutral plasmas; astrophysical and dusty plasmas.
On the equilibrium state of a small system with random matrix coupling to its environment
NASA Astrophysics Data System (ADS)
Lebowitz, J. L.; Pastur, L.
2015-07-01
We consider a random matrix model of interaction between a small n-level system, S, and its environment, an N-level heat reservoir, R. The interaction between S and R is modeled by a tensor product of a fixed n × n matrix and an N × N Hermitian random matrix. We show that under certain ‘macroscopicity’ conditions on R, the reduced density matrix of the system, ρ_S = Tr_R ρ_{S∪R}^{(eq)}, is given by ρ_S^{(c)} ∼ exp{-β H_S}, where H_S is the Hamiltonian of the isolated system. This holds for all strengths of the interaction and thus gives some justification for using ρ_S^{(c)} to describe some nano-systems, like biopolymers, in equilibrium with their environment (Seifert 2012 Rep. Prog. Phys. 75 126001). Our results extend those obtained previously in (Lebowitz and Pastur 2004 J. Phys. A: Math. Gen. 37 1517-34) and (Lebowitz et al 2007 Contemporary Mathematics (Providence, RI: American Mathematical Society) pp 199-218) for a special two-level system.
Improved Estimation and Interpretation of Correlations in Neural Circuits
Yatsenko, Dimitri; Josić, Krešimir; Ecker, Alexander S.; Froudarakis, Emmanouil; Cotton, R. James; Tolias, Andreas S.
2015-01-01
Ambitious projects aim to record the activity of ever larger and denser neuronal populations in vivo. Correlations in neural activity measured in such recordings can reveal important aspects of neural circuit organization. However, estimating and interpreting large correlation matrices is statistically challenging. Estimation can be improved by regularization, i.e. by imposing a structure on the estimate. The amount of improvement depends on how closely the assumed structure represents dependencies in the data. Therefore, the selection of the most efficient correlation matrix estimator for a given neural circuit must be determined empirically. Importantly, the identity and structure of the most efficient estimator informs about the types of dominant dependencies governing the system. We sought statistically efficient estimators of neural correlation matrices in recordings from large, dense groups of cortical neurons. Using fast 3D random-access laser scanning microscopy of calcium signals, we recorded the activity of nearly every neuron in volumes 200 μm wide and 100 μm deep (150–350 cells) in mouse visual cortex. We hypothesized that in these densely sampled recordings, the correlation matrix should be best modeled as the combination of a sparse graph of pairwise partial correlations representing local interactions and a low-rank component representing common fluctuations and external inputs. Indeed, in cross-validation tests, the covariance matrix estimator with this structure consistently outperformed other regularized estimators. The sparse component of the estimate defined a graph of interactions. These interactions reflected the physical distances and orientation tuning properties of cells: The density of positive ‘excitatory’ interactions decreased rapidly with geometric distances and with differences in orientation preference whereas negative ‘inhibitory’ interactions were less selective. 
Because of its superior performance, this ‘sparse+latent’ estimator likely provides a more physiologically relevant representation of the functional connectivity in densely sampled recordings than the sample correlation matrix. PMID:25826696
NASA Astrophysics Data System (ADS)
Avakyan, L. A.; Heinz, M.; Skidanenko, A. V.; Yablunovski, K. A.; Ihlemann, J.; Meinertz, J.; Patzig, C.; Dubiel, M.; Bugaev, L. A.
2018-01-01
The formation of a localized surface plasmon resonance (SPR) spectrum of randomly distributed gold nanoparticles in the surface layer of silicate float glass, generated and implanted by UV ArF-excimer laser irradiation of a thin gold layer sputter-coated on the glass surface, was studied by the T-matrix method, which enables particle agglomeration to be taken into account. The experimental technique used is promising for the production of submicron patterns of plasmonic nanoparticles (given by laser masks or gratings) without damage to the glass surface. The applicability of the multi-spheres T-matrix (MSTM) method to the studied material was analyzed through calculations of SPR characteristics for differently arranged and structured gold nanoparticles (gold nanoparticles in solution, particle pairs, and core-shell silver-gold nanoparticles) for which either experimental data or results of modeling by other methods are available. For the studied gold nanoparticles in glass, it was revealed that the theoretical description of their SPR spectrum requires consideration of the plasmon coupling between particles, which can be done effectively by MSTM calculations. The obtained statistical distributions over particle sizes and over interparticle distances demonstrated saturation behavior with respect to the number of particles under consideration, which enabled us to determine the effective aggregate of particles sufficient to form the SPR spectrum. The suggested technique for fitting an experimental SPR spectrum of gold nanoparticles in glass, by varying the geometrical parameters of the particle aggregate in repeated MSTM calculations of the spectrum, enabled us to determine statistical characteristics of the aggregate: the average distance between particles, the average size, and the size distribution of the particles.
The fitting strategy of the SPR spectrum presented here can be applied to nanoparticles of any nature and in various substances, and, in principle, can be extended for particles with non-spherical shapes, like ellipsoids, rod-like and other T-matrix-solvable shapes.
Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks
2016-01-01
Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors from each trial are the input for the method. This marginal likelihood based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples and make recommendations for the use of the method in different settings. PMID:28239330
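The combination step described above can be sketched as follows: inverse-variance-weighted marginal means for the per-trial a and b coefficients, then a Monte Carlo confidence interval for the product. This is a simplified fixed-effect illustration only; the paper's method additionally estimates the between-trial variance-covariance matrix, and all names and inputs here are hypothetical:

```python
import numpy as np

def combined_mediated_effect(a, se_a, b, se_b, n_mc=100_000, seed=0):
    """Combine per-trial a (X -> M) and b (M -> Y) coefficients with
    inverse-variance weights, then form a Monte Carlo confidence interval
    for the mediated effect a*b. Fixed-effect sketch; the paper's random
    effects model also accounts for between-trial variance."""
    a, se_a, b, se_b = map(np.asarray, (a, se_a, b, se_b))
    wa, wb = 1 / se_a**2, 1 / se_b**2
    a_hat, b_hat = np.sum(wa * a) / wa.sum(), np.sum(wb * b) / wb.sum()
    se_a_hat, se_b_hat = wa.sum() ** -0.5, wb.sum() ** -0.5
    # Monte Carlo CI for the product of two (approximately) normal estimates.
    rng = np.random.default_rng(seed)
    prod = rng.normal(a_hat, se_a_hat, n_mc) * rng.normal(b_hat, se_b_hat, n_mc)
    lo, hi = np.percentile(prod, [2.5, 97.5])
    return a_hat * b_hat, (lo, hi)

# Hypothetical coefficients and standard errors from two trials.
est, ci = combined_mediated_effect(
    a=[0.5, 0.6], se_a=[0.1, 0.1],
    b=[0.3, 0.4], se_b=[0.1, 0.1],
)
```

The Monte Carlo interval reflects the skewed sampling distribution of a product of estimates, which is why it is preferred here over a symmetric normal-theory interval.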
Random Matrix Approach for Primal-Dual Portfolio Optimization Problems
NASA Astrophysics Data System (ADS)
Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi
2017-12-01
In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
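For the primal problem without the investment-concentration constraint, the Lagrange multiplier method yields a closed form that is easy to check numerically. A small sketch (not the paper's full replica/random-matrix analysis) of the budget-constrained minimum-variance portfolio, w* = C⁻¹1 / (1ᵀC⁻¹1):

```python
import numpy as np

def min_variance_weights(cov):
    """Budget-constrained minimum-variance portfolio via the Lagrange
    multiplier method: minimize w' C w subject to 1' w = 1, which gives
    w* = C^{-1} 1 / (1' C^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)  # C^{-1} 1 without forming the inverse
    return x / (ones @ x)

# With identical variances and no correlations (the case studied in the
# paper), the optimum reduces to equal weighting across assets.
w = min_variance_weights(np.eye(4) * 2.0)
```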
Data-Driven Learning of Q-Matrix
Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang
2013-01-01
The recent surge of interests in cognitive assessment has led to developments of novel statistical models for diagnostic classification. Central to many such models is the well-known Q-matrix, which specifies the item–attribute relationships. This article proposes a data-driven approach to identification of the Q-matrix and estimation of related model parameters. A key ingredient is a flexible T-matrix that relates the Q-matrix to response patterns. The flexibility of the T-matrix allows the construction of a natural criterion function as well as a computationally amenable algorithm. Simulations results are presented to demonstrate usefulness and applicability of the proposed method. Extension to handling of the Q-matrix with partial information is presented. The proposed method also provides a platform on which important statistical issues, such as hypothesis testing and model selection, may be formally addressed. PMID:23926363
Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Tao, E-mail: tzhou@lsec.cc.ac.c; Tang Tao, E-mail: ttang@hkbu.edu.h
2010-11-01
In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.
Barros, Raquel R M; Novaes, Arthur B Júnior; Grisi, Márcio F M; Souza, Sérgio L S; Taba, Mário Júnior; Palioto, Daniela B
2004-10-01
The acellular dermal matrix graft (ADMG) has become widely used in periodontal surgeries as a substitute for the subepithelial connective tissue graft (SCTG). These grafts exhibit different healing processes due to their distinct cellular and vascular structures. Therefore the surgical technique primarily developed for the autograft may not be adequate for the allograft. This study compared the clinical results of two surgical techniques--the "conventional" and a modified procedure--for the treatment of localized gingival recessions with the ADMG. A total of 32 bilateral Miller Class I or II gingival recessions were selected and randomly assigned to test and control groups. The control group received the SCTG and the test group the modified surgical technique. Probing depth (PD), relative clinical attachment level (RCAL), gingival recession (GR), and width of keratinized tissue (KT) were measured 2 weeks prior to surgery and 6 months post-surgery. Both procedures improved all the evaluated parameters after 6 months. Comparisons between the groups by Mann-Whitney rank sum test revealed no statistically significant differences in terms of CAL gain, PD reduction, and increase in KT from baseline to 6-month evaluation. However, there was a statistically significant greater reduction of GR favoring the modified technique (P = 0.002). The percentage of root coverage was 79% for the test group and 63.9% for the control group. We conclude that the modified technique is more suitable for root coverage procedures with the ADMG since it had statistically significant better clinical results compared to the traditional technique.
On the efficiency of a randomized mirror descent algorithm in online optimization problems
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.
2015-04-01
A randomized online version of the mirror descent method is proposed. It differs from existing versions in the randomization method: randomization is performed at the stage of projecting a subgradient of the function being optimized onto the unit simplex, rather than at the stage of computing a subgradient, which is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to interpret results on weighting expert decisions in a uniform way and to propose the most efficient method for searching for an equilibrium in a zero-sum two-person matrix game with a sparse matrix.
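A toy illustration of componentwise randomization on the unit simplex: an entropic (multiplicative) mirror descent step applied to a single randomly chosen coordinate, followed by re-normalization. This sketches the general idea only, not the authors' exact randomization scheme; the objective and step size are invented for the example:

```python
import numpy as np

def randomized_entropic_md(grad, n, steps=5000, eta=0.1, seed=0):
    """Entropic mirror descent on the probability simplex where, at each
    step, the multiplicative update is applied to one randomly chosen
    coordinate only (a toy componentwise variant, not the authors' exact
    scheme)."""
    rng = np.random.default_rng(seed)
    x = np.full(n, 1.0 / n)               # start at the uniform distribution
    for _ in range(steps):
        i = rng.integers(n)               # randomly chosen component
        g = grad(x)
        x[i] *= np.exp(-eta * g[i])       # multiplicative update, one component
        x /= x.sum()                      # re-normalize onto the simplex
    return x

c = np.array([0.5, 0.3, 0.2])             # target point inside the simplex
x = randomized_entropic_md(lambda x: x - c, 3)  # minimize 0.5*||x - c||^2
```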
Isehed, Catrine; Holmlund, Anders; Renvert, Stefan; Svenson, Björn; Johansson, Ingegerd; Lundberg, Pernilla
2016-10-01
This randomized clinical trial aimed at comparing the radiological, clinical and microbial effects of surgical treatment of peri-implantitis alone or in combination with enamel matrix derivative (EMD). Twenty-six subjects were treated with open flap debridement and decontamination of the implant surfaces with gauze and saline preceding adjunctive EMD or no EMD. Bone level (BL) change was the primary outcome, and secondary outcomes were changes in pocket depth (PD), plaque, pus, bleeding and the microbiota of the peri-implant biofilm analyzed by the Human Oral Microbe Identification Microarray over a time period of 12 months. In multivariate modelling, increased marginal BL at the implant site was significantly associated with EMD, the number of osseous walls in the peri-implant bone defect and a Gram+/aerobic microbial flora, whereas reduced BL was associated with a Gram-/anaerobic microbial flora and the presence of bleeding and pus, with a cross-validated predictive capacity (Q(2)) of 36.4%. Similar, but statistically non-significant, trends were seen for BL, PD, plaque, pus and bleeding in univariate analyses. Adjunctive EMD in the surgical treatment of peri-implantitis was associated with a prevalence of Gram+/aerobic bacteria during the follow-up period and increased marginal BL 12 months after treatment. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Valid statistical inference methods for a case-control study with missing data.
Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun
2018-04-01
The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion by the Wald test for testing independence under the two existing sampling distributions could be completely different (even contradictory) from the Wald test for testing the equality of the success probabilities in control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Telfeyan, Katherine Christina; Ware, Stuart Douglas; Reimus, Paul William
Diffusion cell and diffusion wafer experiments were conducted to compare methods for estimating matrix diffusion coefficients in rock core samples from Pahute Mesa at the Nevada Nuclear Security Site (NNSS). A diffusion wafer method, in which a solute diffuses out of a rock matrix that is pre-saturated with water containing the solute, is presented as a simpler alternative to the traditional through-diffusion (diffusion cell) method. Both methods yielded estimates of matrix diffusion coefficients that were within the range of values previously reported for NNSS volcanic rocks. The difference between the estimates of the two methods ranged from 14 to 30%, and there was no systematic high or low bias of one method relative to the other. From a transport modeling perspective, these differences are relatively minor when one considers that other variables (e.g., fracture apertures, fracture spacings) influence matrix diffusion to a greater degree and tend to have greater uncertainty than diffusion coefficients. For the same relative random errors in concentration measurements, the diffusion cell method yields diffusion coefficient estimates that have less uncertainty than the wafer method. However, the wafer method is easier and less costly to implement and yields estimates more quickly, thus allowing a greater number of samples to be analyzed for the same cost and time. Given the relatively good agreement between the methods, and the lack of any apparent bias between the methods, the diffusion wafer method appears to offer advantages over the diffusion cell method if better statistical representation of a given set of rock samples is desired.
Polarization-interference Jones-matrix mapping of biological crystal networks
NASA Astrophysics Data System (ADS)
Ushenko, O. G.; Dubolazov, O. V.; Pidkamin, L. Y.; Sidor, M. I.; Pavlyukovich, N.; Pavlyukovich, O.
2018-01-01
The paper consists of two parts. The first part presents short theoretical basics of the method of Jones-matrix mapping with the help of a reference wave. Experimentally measured coordinate distributions of the moduli of the Jones-matrix elements of a polycrystalline film of bile are provided, and the values and ranges of variation of the statistical moments characterizing such distributions are determined. The second part presents a statistical analysis of the distributions of the matrix elements of polycrystalline films of urine from donors and from patients with albuminuria, and defines objective criteria for the differentiation of albuminuria.
Group identification in Indonesian stock market
NASA Astrophysics Data System (ADS)
Nurriyadi Suparno, Ervano; Jo, Sung Kyun; Lim, Kyuseong; Purqon, Acep; Kim, Soo Yong
2016-08-01
The characteristics of the Indonesian stock market are interesting, especially because it represents a developing country. We investigate its dynamics and structures by using Random Matrix Theory (RMT). Here, we analyze the cross-correlation of the fluctuations of the daily closing prices of stocks from the Indonesian Stock Exchange (IDX) between January 1, 2007, and October 28, 2014. The eigenvalue distribution of the correlation matrix contains noise, which is filtered out using the random matrix as a control. The bulk of the eigenvalue distribution conforms to the random matrix, allowing the separation of random noise from the original data, which is carried by the deviating eigenvalues. From the deviating eigenvalues and the corresponding eigenvectors, we identify the intrinsic normal modes of the system and interpret their meaning based on qualitative and quantitative approaches. The results show that the largest eigenvector represents the market-wide effect, which has a predominantly common influence on all stocks. The other eigenvectors represent highly correlated groups within the system. Furthermore, identification of the largest components of the eigenvectors shows the sector or background of the correlated groups. Interestingly, the result shows that there are mainly two clusters within the IDX: natural-resource and non-natural-resource companies. We then decompose the correlation matrix to investigate the contribution of the correlated groups to the total correlation, and we find that the IDX is still driven mainly by the market-wide effect.
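The noise-filtering step described here rests on comparing the correlation-matrix eigenvalues against the Marchenko-Pastur bulk edges expected for uncorrelated data. A minimal sketch with synthetic returns (the sample sizes are illustrative choices, not the IDX data):

```python
import numpy as np

def mp_bounds(T, N):
    """Marchenko-Pastur bulk edges for the correlation matrix of N
    uncorrelated return series of length T (requires T > N)."""
    q = N / T
    return (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

# Purely random returns: essentially all eigenvalues fall inside the bulk.
# In real data, eigenvalues above lam_max flag correlated groups or the
# market-wide mode.
rng = np.random.default_rng(1)
R = rng.standard_normal((2000, 50))      # T = 2000 days, N = 50 stocks
C = np.corrcoef(R, rowvar=False)
eig = np.linalg.eigvalsh(C)
lam_min, lam_max = mp_bounds(2000, 50)
deviating = eig[eig > lam_max]           # candidate "group" modes
```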
Tensor manifold-based extreme learning machine for 2.5-D face recognition
NASA Astrophysics Data System (ADS)
Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin
2018-01-01
We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM, since it is a special instance of a symmetric positive definite (SPD) matrix that resides in non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with existing vector-based classifiers and distance matchers. Therefore, we bridge the gap between the GRCM and the extreme learning machine (ELM), a vector-based classifier, for the 2.5-D face recognition problem. We put forward a tensor-manifold-compliant ELM and its two variants by embedding the SPD matrix randomly into a reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pair-wise distances of the embedded data, we orthogonalize the random-embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.
Estimation of regionalized compositions: A comparison of three methods
Pawlowsky, V.; Olea, R.A.; Davis, J.C.
1995-01-01
A regionalized composition is a random vector function whose components are positive and sum to a constant at every point of the sampling region. Consequently, the components of a regionalized composition are necessarily spatially correlated. This spatial dependence, induced by the constant-sum constraint, is a spurious spatial correlation and may lead to misinterpretations of statistical analyses. Furthermore, the cross-covariance matrices of the regionalized composition are singular, as is the coefficient matrix of the cokriging system of equations. Three methods of performing estimation or prediction of a regionalized composition at unsampled points are discussed: (1) the direct approach of estimating each variable separately; (2) the basis method, which is applicable only when a random function is available that can be regarded as the size of the regionalized composition under study; (3) the logratio approach, using the additive-log-ratio transformation proposed by J. Aitchison, which allows statistical analysis of compositional data. We present a brief theoretical review of these three methods and compare them using compositional data from the Lyons West Oil Field in Kansas (USA). It is shown that, although there are no important numerical differences, the direct approach leads to invalid results, whereas the basis method and the additive-log-ratio approach are comparable. © 1995 International Association for Mathematical Geology.
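The additive-log-ratio transformation used in method (3) can be sketched in a few lines; the function names are ours, and the convention of dividing by the last part follows Aitchison:

```python
import numpy as np

def alr(x):
    """Additive log-ratio transform (Aitchison): maps a D-part
    composition to D-1 unconstrained coordinates, using the last
    part as the divisor."""
    x = np.asarray(x, dtype=float)
    return np.log(x[..., :-1] / x[..., -1:])

def alr_inv(y):
    """Inverse alr: map coordinates back to a composition summing to 1."""
    z = np.concatenate([np.exp(y), np.ones(y.shape[:-1] + (1,))], axis=-1)
    return z / z.sum(axis=-1, keepdims=True)

comp = np.array([0.2, 0.3, 0.5])   # a 3-part composition
y = alr(comp)                      # two unconstrained coordinates
back = alr_inv(y)                  # recovers the original composition
```

Working in the unconstrained alr coordinates is what sidesteps the singular cross-covariance matrices described above.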
Fault Diagnosis Strategies for SOFC-Based Power Generation Plants
Costamagna, Paola; De Giorgi, Andrea; Gotelli, Alberto; Magistri, Loredana; Moser, Gabriele; Sciaccaluga, Emanuele; Trucco, Andrea
2016-01-01
The success of distributed power generation by plants based on solid oxide fuel cells (SOFCs) is hindered by reliability problems that can be mitigated through an effective fault detection and isolation (FDI) system. However, the numerous operating conditions under which such plants can operate and the random size of the possible faults make identifying damaged plant components starting from the physical variables measured in the plant very difficult. In this context, we assess two classical FDI strategies (model-based with fault signature matrix and data-driven with statistical classification) and the combination of them. For this assessment, a quantitative model of the SOFC-based plant, which is able to simulate regular and faulty conditions, is used. Moreover, a hybrid approach based on the random forest (RF) classification method is introduced to address the discrimination of regular and faulty situations due to its practical advantages. Working with a common dataset, the FDI performances obtained using the aforementioned strategies, with different sets of monitored variables, are observed and compared. We conclude that the hybrid FDI strategy, realized by combining a model-based scheme with a statistical classifier, outperforms the other strategies. In addition, the inclusion of two physical variables that should be measured inside the SOFCs can significantly improve the FDI performance, despite the actual difficulty in performing such measurements. PMID:27556472
Applying Triple-Matrix Masking for Privacy Preserving Data Collection and Sharing in HIV Studies.
Pei, Qinglin; Chen, Shigang; Xiao, Yao; Wu, Samuel S
2016-01-01
Many HIV research projects are plagued by the high missing rate of self-reported information during data collection. Also, due to the sensitive nature of the HIV research data, privacy protection is always a concern for data sharing in HIV studies. This paper applies a data masking approach, called triple-matrix masking [1], to the context of HIV research for ensuring privacy protection during the process of data collection and data sharing. Using a set of generated HIV patient data, we show step by step how the data are randomly transformed (masked) before leaving the patients' individual data collection device (which ensures that nobody sees the actual data) and how the masked data are further transformed by a masking service provider and a data collector. We demonstrate that the masked data retain the statistical utility of the original data, yielding exactly the same inference results in the planned logistic regression on the effect of age on the adherence to antiretroviral therapy and in the Cox proportional hazard model for the age effect on time to viral load suppression. Privacy-preserving data collection methods may help resolve the privacy protection issue in HIV research. The individual sensitive data can be completely hidden while the same inference results can still be obtained from the masked data, with the use of common statistical analysis methods.
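The full triple-matrix protocol is more involved, but the core idea, that a random linear mask can hide individual records while preserving inference, is easy to illustrate for ordinary least squares (the paper's analyses are logistic and Cox regression; this simplified orthogonal-mask example is our own, not the paper's scheme):

```python
import numpy as np

# Left-multiplying both X and y by a random orthogonal matrix A hides the
# individual records yet leaves the OLS fit unchanged, since
# (X'A'AX)^{-1} X'A'Ay = (X'X)^{-1} X'y when A'A = I.
rng = np.random.default_rng(7)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.standard_normal(n)

A, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal mask
beta_raw = np.linalg.lstsq(X, y, rcond=None)[0]
beta_masked = np.linalg.lstsq(A @ X, A @ y, rcond=None)[0]
```

The masked records `A @ y` look nothing like the originals, yet both regressions return the same coefficient estimates.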
Eigenvalue density of cross-correlations in Sri Lankan financial market
NASA Astrophysics Data System (ADS)
Nilantha, K. G. D. R.; Ranasinghe; Malmini, P. K. C.
2007-05-01
We apply the universal properties of the Gaussian orthogonal ensemble (GOE) of random matrices, namely the spectral properties, the distribution of eigenvalues, and the eigenvalue spacings predicted by random matrix theory (RMT), to compare cross-correlation matrix estimators from emerging-market data. The daily stock prices of the Sri Lankan All Share Price Index and Milanka Price Index from August 2004 to March 2005 were analyzed. Most eigenvalues in the spectrum of the cross-correlation matrix of stock price changes agree with the universal predictions of RMT. We find that the cross-correlation matrix satisfies the universal properties of the GOE of real symmetric random matrices. The eigenvalue distribution follows the RMT predictions in the bulk, but there are some deviations at the large eigenvalues. The nearest-neighbour and next-nearest-neighbour spacings of the eigenvalues were examined and found to follow the universality of the GOE. For RMT with deterministic correlations, each eigenvalue arising from the deterministic correlations is observed at values repelled from the bulk distribution.
Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory
NASA Astrophysics Data System (ADS)
Pato, Mauricio P.; Oshanin, Gleb
2013-03-01
We study the probability distribution function P_n^(β)(w) of the Schmidt-like random variable w = x_1^2/((1/n)∑_{j=1}^n x_j^2), where the x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^(β)(w) converges to the Marčenko-Pastur form, i.e. P_n^(β)(w) ∼ √((4 - w)/w) for w ∈ [0, 4] and equals zero outside this support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for the Gaussian unitary ensemble (β = 2) we present exact explicit expressions for P_n^(β=2)(w) which are valid for arbitrary n and analyse their behaviour.
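The limiting support [0, 4] is easy to probe by Monte Carlo: draw a large GUE (β = 2) matrix, form w for every eigenvalue, and inspect the sample. A minimal sketch (the matrix size and normalization are our choices for illustration):

```python
import numpy as np

def gue_eigs(n, rng):
    """Eigenvalues of an n x n GUE (beta = 2) random matrix,
    built by symmetrizing a complex Gaussian matrix."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (A + A.conj().T) / 2
    return np.linalg.eigvalsh(H)

# w = x_1^2 / ((1/n) * sum_j x_j^2), computed here for every eigenvalue of
# one large matrix; its support approaches [0, 4] as n grows.
rng = np.random.default_rng(0)
x = gue_eigs(400, rng)
w = x**2 / np.mean(x**2)
```

By construction the sample mean of w over all eigenvalues of one matrix equals 1 exactly, and the largest values cluster near the edge at 4, consistent with the Marčenko-Pastur limit.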
Easy way to determine quantitative spatial resolution distribution for a general inverse problem
NASA Astrophysics Data System (ADS)
An, M.; Feng, M.
2013-12-01
Computing the spatial resolution of a solution is nontrivial and often more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic studies, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution-length information, and the computation of the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be directly determined via a simple one-parameter nonlinear inversion performed on a limited number of pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the degree of inverse skill used in the solution inversion. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.
Weber, Stephen C; Kauffman, Jeffrey I; Parise, Carol; Weber, Sophia J; Katz, Stephen D
2013-02-01
Arthroscopic rotator cuff repair has a high rate of patient satisfaction. However, multiple studies have shown significant rates of anatomic failure. Biological augmentation would seem to be a reasonable technique to improve clinical outcomes and healing rates. To present a prospective, double-blinded, randomized study assessing the use of platelet-rich fibrin matrix (PRFM) in rotator cuff surgery. Randomized controlled trial; level of evidence, 1. A prestudy power analysis demonstrated that a sample size of 30 patients in each group (PRFM vs control) would allow recognition of a 20% difference in perioperative pain scores. Sixty consecutive patients were randomized to either receive a commercially available PRFM product or not. Preoperative and postoperative range of motion (ROM), University of California-Los Angeles (UCLA), and Simple Shoulder Test (SST) scores were recorded. Surgery was performed using an arthroscopic single-row technique. Visual analog scale (VAS) pain scores were obtained upon arrival in the recovery room and 1 hour postoperatively, and narcotic consumption was recorded and converted to standard narcotic equivalents. The SST and ROM measurements were taken at 3, 6, 9, and 12 weeks postoperatively, and final (1-year) American Shoulder and Elbow Surgeons (ASES) shoulder and UCLA shoulder scores were assessed. There were no complications. Randomization created comparable groups, except that the PRFM group was younger than the control group (mean ± SD, 59.67 ± 8.16 years vs 64.50 ± 8.59 years, respectively; P < .05). Mean surgery time was longer for the PRFM group than for the control group (83.28 ± 17.13 min vs 73.28 ± 17.18 min, respectively; P < .02). There was no significant difference in VAS scores or narcotic use between groups, and no statistically significant differences in recovery of motion or in SST or ASES scores. Mean ASES scores were 82.48 ± 8.77 (PRFM group) and 82.52 ± 12.45 (controls) (F(1,56) = 0.00, P > .98). Mean UCLA shoulder scores were 27.94 ± 4.98 for the PRFM group versus 29.59 ± 1.68 for the controls (P < .046). Structural results correlated with age and size of the tear and did not differ between the groups. Platelet-rich fibrin matrix was not shown to significantly improve perioperative morbidity, clinical outcomes, or structural integrity. While longer-term follow-up or different platelet-rich plasma formulations may show differences, early follow-up does not show significant improvement in perioperative morbidity, structural integrity, or clinical outcome.
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator of a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, cross-validation is commonly used; however, we show that cross-validation is not only computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
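As an illustration of the Cholesky-based estimator that the band-size test supports, here is a hedged sketch: each coordinate is regressed on its (at most) k predecessors (the modified-Cholesky construction), which by construction yields a k-banded precision estimate. The AR(1) test data, dimensions, and band size are our own choices; the hypothesis test itself is not implemented.

```python
import numpy as np

rng = np.random.default_rng(2)

def banded_precision(X, k):
    """Cholesky-based k-banded precision estimator: regress each coordinate
    on its (at most) k predecessors, then assemble Omega = T' D^-1 T."""
    n, p = X.shape
    T = np.eye(p)
    d = np.empty(p)
    d[0] = X[:, 0].var()
    for j in range(1, p):
        lo = max(0, j - k)
        Z = X[:, lo:j]
        coef, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
        T[j, lo:j] = -coef
        d[j] = (X[:, j] - Z @ coef).var()
    return T.T @ np.diag(1.0 / d) @ T

# AR(1) data: the true precision matrix is tridiagonal, i.e. band size 1
p, n, rho = 10, 2000, 0.6
X = np.empty((n, p))
X[:, 0] = rng.normal(size=n)
for j in range(1, p):
    X[:, j] = rho * X[:, j - 1] + np.sqrt(1 - rho**2) * rng.normal(size=n)

Omega = banded_precision(X, k=1)
```

With band size k the estimate is exactly k-banded; for the AR(1) example the off-diagonal entries approach the population value −ρ/(1 − ρ²).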
An efficient voting algorithm for finding additive biclusters with random background.
Xiao, Jing; Wang, Lusheng; Liu, Xiaowen; Jiang, Tao
2008-12-01
The biclustering problem has been extensively studied in many areas, including e-commerce, data mining, machine learning, pattern recognition, statistics, and, more recently, computational biology. Given an n × m matrix A (n ≥ m), the main goal of biclustering is to identify a subset of rows (called objects) and a subset of columns (called properties) such that some objective function specifying the quality of the found bicluster (formed by the subsets of rows and columns of A) is optimized. The problem has been proved or conjectured to be NP-hard for various objective functions. In this article, we study a probabilistic model for the implanted additive bicluster problem, where each element in the n × m background matrix is a random integer from [0, L − 1] for some integer L, and a k × k implanted additive bicluster is obtained from an error-free additive bicluster by randomly changing each element to a number in [0, L − 1] with probability θ. We propose an O(n²m) time algorithm based on voting to solve the problem. We show that when k ≥ Ω(√(n log n)), the voting algorithm can correctly find the implanted bicluster with probability at least 1 − 9/n². We also implement our algorithm as a C++ program named VOTE. The implementation incorporates several ideas for estimating the size of an implanted bicluster, adjusting the threshold in voting, dealing with small biclusters, and dealing with overlapping implanted biclusters. Our experimental results on both simulated and real datasets show that VOTE can find biclusters with high accuracy and speed.
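The full VOTE algorithm is not reproduced here, but its core idea — rows of the implanted bicluster collect far more exact-match votes than background rows — can be sketched for the error-free case (θ = 0) with a constant implanted block. The seed-row heuristic, thresholds, and parameters below are our own simplifications, not the paper's C++ implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def vote_bicluster(A, thresh):
    """Simplified voting: pick the row collecting the most exact-match votes
    as seed; rows/columns agreeing with it form the recovered bicluster."""
    votes = (A[:, None, :] == A[None, :, :]).sum(axis=2)   # pairwise matches
    np.fill_diagonal(votes, 0)
    seed = int(votes.sum(axis=1).argmax())
    rows = np.where(votes[seed] >= thresh)[0]
    rows = np.sort(np.append(rows, seed))
    col_votes = (A[rows] == A[seed]).sum(axis=0)
    cols = np.where(col_votes >= 0.9 * len(rows))[0]
    return rows, cols

# implant an error-free (theta = 0) constant k x k block in a random background
n, m, L, k = 60, 40, 10, 15
A = rng.integers(0, L, size=(n, m))
A[np.ix_(np.arange(k), np.arange(k))] = 7

rows, cols = vote_bicluster(A, thresh=12)
```

Background rows agree with the seed on about m/L columns (here ≈ 4), while implanted rows agree on at least k = 15, so a threshold between the two separates them with high probability.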
Random matrix theory and portfolio optimization in Moroccan stock exchange
NASA Astrophysics Data System (ADS)
El Alaoui, Marwane
2015-09-01
In this work, we use random matrix theory to analyze eigenvalues and to look for pertinent information using the Marčenko-Pastur distribution. To this end, we study cross-correlations among stocks of the Casablanca Stock Exchange. Moreover, we clean the correlation matrix of noisy elements to see whether the gap between predicted risk and realized risk can be reduced. We also analyze the distributions of eigenvector components and their degree of deviation by computing the inverse participation ratio. This analysis is a way to understand the correlation structure of a Casablanca Stock Exchange portfolio.
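A minimal sketch of the standard RMT recipe alluded to above, using synthetic pure-noise returns rather than Casablanca Stock Exchange data: compare the empirical correlation spectrum against the Marčenko-Pastur upper edge, and compute the inverse participation ratio (IPR) of the eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(4)

# T observations of N synthetic (pure-noise) standardized returns
N, T = 50, 500
R = rng.normal(size=(T, N))
R = (R - R.mean(axis=0)) / R.std(axis=0)
C = R.T @ R / T                              # empirical correlation matrix

# Marchenko-Pastur upper edge for q = T/N; eigenvalues above it carry information
q = T / N
lam_max = (1 + 1 / np.sqrt(q)) ** 2
vals, vecs = np.linalg.eigh(C)
informative = vals[vals > lam_max]           # expected: essentially none for noise

# inverse participation ratio: ~O(1/N) for delocalized (noise) eigenvectors
ipr = np.sum(vecs**4, axis=0)
```

For real market data, eigenvalues above lam_max (and eigenvectors with large IPR) would flag the non-noise structure; cleaning consists of replacing the bulk eigenvalues below the edge before rebuilding the matrix.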
Electrochemical Test Method for Evaluating Long-Term Propellant-Material Compatibility
1978-12-01
matrix of test conditions is illustrated in Fig. 13. A statistically designed test matrix (Graeco-Latin Cube) could not be used because of passivation… years simulated time results in a final decomposition level of 0.753 mg/cm². The data were examined using statistical techniques to evaluate the relative… metals. The compatibility of all nine metals was evaluated in hydrazine containing water and chloride. The results of the statistical analysis
Remote sensing of earth terrain
NASA Technical Reports Server (NTRS)
Kong, J. A.
1988-01-01
Two monographs and 85 journal and conference papers on remote sensing of earth terrain have been published, sponsored by NASA Contract NAG5-270. A multivariate K-distribution is proposed to model the statistics of fully polarimetric data from earth terrain with polarizations HH, HV, VH, and VV. In this approach, correlated polarizations of radar signals, as characterized by a covariance matrix, are treated as the sum of N n-dimensional random vectors; N obeys the negative binomial distribution with a parameter alpha and mean bar N. Subsequently, an n-dimensional K-distribution, with either zero or non-zero mean, is developed in the limit of infinite bar N or illuminated area. The probability density function (PDF) of the K-distributed vector normalized by its Euclidean norm is independent of the parameter alpha and is the same as that derived from a zero-mean Gaussian-distributed random vector. The above model is well supported by experimental data provided by MIT Lincoln Laboratory and the Jet Propulsion Laboratory in the form of polarimetric measurements.
NASA Technical Reports Server (NTRS)
Melbourne, William G.
1986-01-01
In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
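The weighting issue can be illustrated on a toy differenced regression: first-differencing white-noise measurements produces a tridiagonal (non-diagonal) error covariance, and the least-squares weighting matrix is its inverse. This is a generic GLS sketch, not the fully redundant double-differencing algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

n, a_true = 200, 0.05
t = np.arange(n, dtype=float)
y = a_true * t + rng.normal(size=n)          # white measurement noise

# differencing operator: d_i = y_{i+1} - y_i colors the noise
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
d = D @ y
x = D @ t                                    # differenced regressor

Cov = D @ D.T                                # tridiagonal, NOT diagonal
W = np.linalg.inv(Cov)                       # least-squares weighting matrix

a_gls = (x @ W @ d) / (x @ W @ x)            # weighted (GLS) slope estimate
```

Using the customary diagonal (white-noise) weights on d would still be unbiased but statistically suboptimal; the non-diagonal W restores the efficient estimate.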
Measurement Matrix Design for Phase Retrieval Based on Mutual Information
NASA Astrophysics Data System (ADS)
Shlezinger, Nir; Dabora, Ron; Eldar, Yonina C.
2018-01-01
In phase retrieval problems, a signal of interest (SOI) is reconstructed based on the magnitude of a linear transformation of the SOI observed with additive noise. The linear transform is typically referred to as a measurement matrix. Many works on phase retrieval assume that the measurement matrix is a random Gaussian matrix, which, in the noiseless scenario with sufficiently many measurements, guarantees invertibility of the transformation between the SOI and the observations, up to an inherent phase ambiguity. However, in many practical applications, the measurement matrix corresponds to an underlying physical setup, and is therefore deterministic, possibly with structural constraints. In this work we study the design of deterministic measurement matrices, based on maximizing the mutual information between the SOI and the observations. We characterize necessary conditions for the optimality of a measurement matrix, and analytically obtain the optimal matrix in the low signal-to-noise ratio regime. Practical methods for designing general measurement matrices and masked Fourier measurements are proposed. Simulation tests demonstrate the performance gain achieved by the proposed techniques compared to random Gaussian measurements for various phase recovery algorithms.
Spectra of empirical autocorrelation matrices: A random-matrix-theory-inspired perspective
NASA Astrophysics Data System (ADS)
Jamali, Tayeb; Jafari, G. R.
2015-07-01
We construct the autocorrelation matrix of a time series and analyze it using the random-matrix theory (RMT) approach. The autocorrelation matrix is capable of extracting information that is not easily accessible through direct analysis of the autocorrelation function. In order to draw precise conclusions from the information extracted from the autocorrelation matrix, the results must first be evaluated; in other words, they need to be compared with some criterion that provides a basis for the most suitable and applicable conclusions. In the present study, the criterion is chosen to be the well-known fractional Gaussian noise (fGn). We illustrate the applicability of our method in the context of stock markets: despite the non-Gaussianity of stock-market returns, a remarkable agreement with the fGn is achieved.
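As a sketch of the construction (our own minimal version, with white noise in place of market returns or fGn), the autocorrelation matrix is the Toeplitz matrix built from the sample autocorrelation function; for an uncorrelated series its spectrum clusters around 1, and structured series spread the eigenvalues out.

```python
import numpy as np

rng = np.random.default_rng(6)

def autocorr_matrix(x, m):
    """m x m Toeplitz matrix built from the first m sample autocorrelations."""
    x = x - x.mean()
    acf = np.array([x[: len(x) - k] @ x[k:] for k in range(m)])
    acf = acf / acf[0]
    i = np.arange(m)
    return acf[np.abs(i[:, None] - i[None, :])]

# for an uncorrelated series the matrix is close to the identity
x = rng.normal(size=50000)
A = autocorr_matrix(x, 20)
evals = np.linalg.eigvalsh(A)
```

Comparing such spectra against those of a reference process (fGn in the paper) is what turns the raw eigenvalues into a usable criterion.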
NASA Astrophysics Data System (ADS)
Michelitsch, T. M.; Collet, B. A.; Riascos, A. P.; Nowakowski, A. F.; Nicolleau, F. C. G. A.
2017-12-01
We analyze a Markovian random walk strategy on undirected regular networks involving power matrix functions of the type L^{α/2}, where L denotes a 'simple' Laplacian matrix. We refer to such walks as 'fractional random walks', with admissible interval 0 < α ≤ 2. We deduce probability-generating functions (network Green's functions) for the fractional random walk. From these analytical results we establish a generalization of Polya's recurrence theorem for fractional random walks on d-dimensional infinite lattices: the fractional random walk is transient for lattice dimensions d > α (recurrent for d ≤ α). As a consequence, for 0 < α < 1 the fractional random walk is transient for all lattice dimensions d = 1, 2, …, and in the range 1 ≤ α < 2 it is transient for dimensions d ≥ 2. Finally, for α = 2, Polya's classical recurrence theorem is recovered, namely the walk is transient only for lattice dimensions d ≥ 3. The generalization of Polya's recurrence theorem remains valid for the class of random walks with Lévy-flight asymptotics for long-range steps. We also analyze the mean first passage probabilities, mean residence times, mean first passage times and global mean first passage times (Kemeny constant) for the fractional random walk. For an infinite 1D lattice (infinite ring) we obtain, for the transient regime 0 < α < 1, closed-form expressions for the fractional lattice Green's function matrix containing the escape and ever-passage probabilities. The ever-passage probabilities (fractional lattice Green's functions) in the transient regime exhibit Riesz-potential power-law decay for nodes far from the departure node. The non-locality of the fractional random walk is generated by the non-diagonality of the fractional Laplacian matrix, with Lévy-type heavy-tailed inverse power-law decay for the probability of long-range moves. This non-local and asymptotic behavior of the fractional random walk introduces small-world properties, with the emergence of Lévy flights on large (infinite) lattices.
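The fractional walk is straightforward to build numerically on a small cycle graph. The sketch below (our own, with an arbitrary α and lattice size) forms L^{α/2} by eigendecomposition and the one-step transition matrix P = I − L^{α/2}/(L^{α/2})_{ii}; the off-diagonal entries of P are the heavy-tailed long-range step probabilities discussed above.

```python
import numpy as np

N, alpha = 101, 1.2
# Laplacian of the cycle graph C_N (a 'simple' Laplacian matrix)
L = 2 * np.eye(N) - np.roll(np.eye(N), 1, axis=1) - np.roll(np.eye(N), -1, axis=1)

# fractional power L^(alpha/2) via eigendecomposition (clip removes -1e-16 noise)
w, V = np.linalg.eigh(L)
Lfrac = V @ np.diag(np.clip(w, 0.0, None) ** (alpha / 2)) @ V.T

# one-step transition matrix of the fractional random walk; on C_N the
# 'fractional degree' (L^(alpha/2))_ii is the same at every node
P = np.eye(N) - Lfrac / Lfrac[0, 0]
```

Unlike the simple walk, P assigns positive probability to every distance, with the probability decaying slowly with separation, which is the source of the Lévy-flight behavior.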
SU-E-T-503: IMRT Optimization Using Monte Carlo Dose Engine: The Effect of Statistical Uncertainty.
Tian, Z; Jia, X; Graves, Y; Uribe-Sanchez, A; Jiang, S
2012-06-01
With the development of ultra-fast GPU-based Monte Carlo (MC) dose engines, it becomes clinically realistic to compute the dose-deposition coefficients (DDC) for IMRT optimization using MC simulation. However, it is still time-consuming to compute the DDC with small statistical uncertainty. This work studies the effects of statistical error in the DDC matrix on IMRT optimization. The MC-computed DDC matrices are simulated here by adding statistical uncertainties at a desired level to matrices generated with a finite-size pencil beam algorithm. A statistical uncertainty model for MC dose calculation is employed. We adopt a penalty-based quadratic optimization model and a gradient descent method to optimize the fluence map, and we then recalculate the corresponding actual dose distribution using the noise-free DDC matrix. The impact of DDC noise is assessed in terms of the deviation of the resulting dose distributions. We have also used stochastic perturbation theory to theoretically estimate the statistical errors of dose distributions in a simplified optimization model. A head-and-neck case is used to investigate the perturbation of the IMRT plan due to MC statistical uncertainty. The relative errors of the final dose distributions of the optimized IMRT plan are found to be much smaller than those in the DDC matrix, which is consistent with our theoretical estimation. When the history number is decreased from 10^8 to 10^6, the dose-volume histograms are still very similar to the error-free DVHs, while the error in the DDC is about 3.8%. The results illustrate that statistical errors in the DDC matrix have a relatively small effect on IMRT optimization in the dose domain. This indicates that we can use a relatively small number of histories to obtain the DDC matrix with MC simulation within a reasonable amount of time, without considerably compromising the accuracy of the optimized treatment plan. This work is supported by Varian Medical Systems through a Master Research Agreement.
© 2012 American Association of Physicists in Medicine.
Portfolio optimization and the random magnet problem
NASA Astrophysics Data System (ADS)
Rosenow, B.; Plerou, V.; Gopikrishnan, P.; Stanley, H. E.
2002-08-01
Diversification of an investment into independently fluctuating assets reduces its risk. In reality, movements of assets are mutually correlated, and therefore knowledge of the cross-correlations among asset price movements is of great importance. Our results support the possibility that the problem of finding an investment in stocks that exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this "random magnet problem" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate of C which outperforms the standard estimate in terms of constructing an investment carrying a minimum level of risk.
Quantum signature of chaos and thermalization in the kicked Dicke model
NASA Astrophysics Data System (ADS)
Ray, S.; Ghosh, A.; Sinha, S.
2016-09-01
We study the quantum dynamics of the kicked Dicke model (KDM) in terms of the Floquet operator, and we analyze the connection between chaos and thermalization in this context. The Hamiltonian map is constructed by suitably taking the classical limit of the Heisenberg equation of motion to study the corresponding phase-space dynamics, which shows a crossover from regular to chaotic motion by tuning the kicking strength. The fixed-point analysis and calculation of the Lyapunov exponent (LE) provide us with a complete picture of the onset of chaos in phase-space dynamics. We carry out a spectral analysis of the Floquet operator, which includes a calculation of the quasienergy spacing distribution and structural entropy to show the correspondence to the random matrix theory in the chaotic regime. Finally, we analyze the thermodynamics and statistical properties of the bosonic sector as well as the spin sector, and we discuss how such a periodically kicked system relaxes to a thermalized state in accordance with the laws of statistical mechanics.
NASA Astrophysics Data System (ADS)
Goyal, Sandeep K.; Singh, Rajeev; Ghosh, Sibasish
2016-01-01
Mixed states of a quantum system, represented by density operators, can be decomposed as statistical mixtures of pure states in a number of ways, where each decomposition can be viewed as a different preparation recipe. However, the fact that the density matrix contains full information about the ensemble makes it impossible to estimate the preparation basis of the quantum system. Here we present a measurement scheme that (seemingly) improves the performance of unsharp measurements. We argue that in some situations this scheme is capable of providing statistics from a single copy of the quantum system, thus making it possible to perform state tomography from a single copy. One of the by-products of the scheme is a way to distinguish between different preparation methods used to prepare the state of the quantum system. However, our numerical simulations disagree with our intuitive predictions. We show that a counterintuitive property of a biased classical random walk is responsible for the proposed mechanism not working.
Computational simulation of the creep-rupture process in filamentary composite materials
NASA Technical Reports Server (NTRS)
Slattery, Kerry T.; Hackett, Robert M.
1991-01-01
A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
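A stripped-down, illustrative analogue of such a simulation (equal load sharing and a stress-dependent random failure rate, rather than the paper's finite-element model) can convey how randomly distributed flaws produce a statistical distribution of times-to-failure; all rates, distributions, and parameters below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(11)

def creep_rupture_time(n_fibers, load, dt=0.01, power=4.0, t_max=500.0):
    """Toy creep-rupture: intact fibers share the load equally; each fiber
    fails at a random rate growing with (stress/strength)^power."""
    strengths = rng.weibull(2.0, n_fibers)          # randomly distributed flaws
    alive = np.ones(n_fibers, dtype=bool)
    t = 0.0
    while alive.any() and t < t_max:
        stress = load / alive.sum()                 # equal load sharing
        p_fail = np.clip((stress / strengths[alive]) ** power * dt, 0.0, 1.0)
        idx = np.where(alive)[0]
        alive[idx[rng.random(idx.size) < p_fail]] = False
        t += dt
    return t

# several runs with different random flaw dispersions -> time-to-failure statistics
times_lo = np.array([creep_rupture_time(100, load=30.0) for _ in range(20)])
times_hi = np.array([creep_rupture_time(100, load=60.0) for _ in range(20)])
```

Repeating the simulation over many random flaw dispersions, as in the paper, yields the statistical distribution of the time-to-failure; raising the applied load shortens it sharply.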
On the statistical mechanics of the 2D stochastic Euler equation
NASA Astrophysics Data System (ADS)
Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg
2011-12-01
The dynamics of vortices and large-scale structures is qualitatively very different in two-dimensional flows compared to their three-dimensional counterparts, due to the presence of multiple integrals of motion. These are believed to be responsible for a variety of phenomena observed in Euler flow, such as the formation of large-scale coherent structures, the existence of meta-stable states, and random abrupt changes in the topology of the flow. In this paper we study the stochastic dynamics of a finite-dimensional approximation of the 2D Euler flow based on the Lie algebra su(N), which preserves all integrals of motion. In particular, we exploit the rich algebraic structure responsible for the existence of Euler's conservation laws to calculate the invariant measures, explore their properties, and study the approach to equilibrium. Unexpectedly, we find deep connections between equilibrium measures of finite-dimensional su(N) truncations of the stochastic Euler equations and random matrix models. Our work can be regarded as a preparation for addressing the questions of large-scale structures, meta-stability, and the dynamics of random transitions between different flow topologies in stochastic 2D Euler flows.
A systematic examination of a random sampling strategy for source apportionment calculations.
Andersson, August
2011-12-15
Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
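For the simplest case (two sources, one marker), the RS idea can be sketched directly: solve f1 = (c − a2)/(a1 − a2) for randomly sampled source signatures instead of plugging in their means. The Gaussian source distributions and numbers below are illustrative assumptions, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# two sources, one marker: f1*a1 + (1-f1)*a2 = c  =>  f1 = (c - a2)/(a1 - a2)
mu1, sd1 = 10.0, 2.0     # source-1 marker signature (mean, spread)
mu2, sd2 = 2.0, 0.5      # source-2 marker signature
c = 5.0                  # observed concentration in the mixed matrix

# plug-in solution ignores source variability
f1_plugin = (c - mu2) / (mu1 - mu2)

# random-sampling (RS) propagation of the source distributions
a1 = rng.normal(mu1, sd1, 100000)
a2 = rng.normal(mu2, sd2, 100000)
f1 = (c - a2) / (a1 - a2)
f1 = f1[(f1 >= 0) & (f1 <= 1)]      # keep physically meaningful fractions
```

Because f1 is a nonlinear function of the source signatures, the RS distribution's mean generally shifts away from the plug-in value, which is exactly the bias the abstract describes.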
NASA Astrophysics Data System (ADS)
Avendaño, Carlos G.; Reyes, Arturo
2017-03-01
We theoretically study the dispersion relation for axially propagating electromagnetic waves throughout a one-dimensional helical structure whose pitch and dielectric and magnetic properties are spatial random functions with specific statistical characteristics. In the system of coordinates rotating with the helix, by using a matrix formalism, we write the set of differential equations that governs the expected value of the electromagnetic field amplitudes and we obtain the corresponding dispersion relation. We show that the dispersion relation depends strongly on the noise intensity introduced in the system and on the autocorrelation length. When the autocorrelation length increases at fixed fluctuation, and when the fluctuation increases at fixed autocorrelation length, the band gap widens and the attenuation coefficient of electromagnetic waves propagating in the random medium grows. By virtue of the degeneracy in the imaginary part of the eigenvalues associated with the propagating modes, the random medium acts as a filter for circularly polarized electromagnetic waves, in which only the propagating backward circularly polarized wave can propagate with no attenuation. Our results are valid for any kind of dielectric and magnetic structures that possess a helical-like symmetry, such as cholesteric and chiral smectic-C liquid crystals, structurally chiral materials, and stressed cholesteric elastomers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.
2012-07-01
In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code in order to perform uncertainty analysis on k∞ and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for this purpose, in which the cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest number of isotopic covariance matrices among the major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 × 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance-limits concept, in which the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonance self-shielding calculations such as DRAGONv4. (authors)
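The LHS construction itself is compact: stratify each input into n equal-probability bins, draw one point per bin, and shuffle the bins independently per dimension. A generic sketch follows (not the DRAGON-specific sampler); mapping the uniform samples to normal cross-section perturbations would then be done via an inverse CDF.

```python
import numpy as np

rng = np.random.default_rng(8)

def latin_hypercube(n_samples, dims):
    """LHS on [0,1)^dims: one sample per equal-probability bin per dimension,
    with the bin order shuffled independently for each dimension."""
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, dims))) / n_samples
    for j in range(dims):
        rng.shuffle(u[:, j])               # shuffles the column in place
    return u

# e.g. 500 samples (one per code run) over a handful of uncertain inputs
U = latin_hypercube(500, 5)
```

Every marginal is perfectly stratified (exactly one sample per bin of width 1/n), which is what gives LHS its coverage advantage over simple random sampling.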
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Girolami, M.
2014-11-01
We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference whose solution is the posterior distribution in infinite dimensional parameter space conditional upon observation data and Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated/independent from the others providing statistically efficient Markov chain simulation. However this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method combining state-of-the-art adjoint techniques and the superiority of the RMHMC method. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since a small number of Hessian-vector products are required. 
The Hessian-vector product in turn requires only two extra PDE solves using the adjoint technique. Various numerical results up to 1025 parameters are presented to demonstrate the ability of the RMHMC method in exploring the geometric structure of the problem to propose (almost) uncorrelated/independent samples that are far away from each other, and yet the acceptance rate is almost unity. The results also suggest that for the PDE models considered the proposed fixed metric RMHMC can attain almost as high a quality performance as the original RMHMC, i.e. generating (almost) uncorrelated/independent samples, while being two orders of magnitude less computationally expensive.
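The randomized low-rank step can be sketched generically: given only matrix-vector products (the stand-in here for Hessian-vector products), a randomized range finder plus a small dense eigendecomposition recovers the dominant eigenpairs. The synthetic "Hessian" below, with a fast-decaying spectrum, is our own stand-in for the Gauss-Newton Hessian.

```python
import numpy as np

rng = np.random.default_rng(9)

def randomized_evd(matvec, n, k, p=10):
    """Low-rank approximation of a symmetric PSD operator from k+p
    matrix-vector products (randomized range finder + small eigensolve)."""
    Omega = rng.normal(size=(n, k + p))      # random probe block
    Y = matvec(Omega)                        # k+p operator applications
    Q, _ = np.linalg.qr(Y)                   # orthonormal range basis
    B = Q.T @ matvec(Q)                      # small (k+p) x (k+p) core
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    return Q @ V[:, idx], w[idx]             # top-k approximate eigenpairs

# synthetic PSD "Hessian": rank-5 dominant part plus a tiny regularizing floor
n, k = 300, 5
U, _ = np.linalg.qr(rng.normal(size=(n, k)))
H = U @ np.diag([100.0, 50.0, 25.0, 12.0, 6.0]) @ U.T + 1e-3 * np.eye(n)

Uk, wk = randomized_evd(lambda X: H @ X, n, k)
```

Only k + p products are needed, which is why the approach is attractive when each Hessian-vector product costs two extra PDE solves.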
Universal shocks in the Wishart random-matrix ensemble.
Blaizot, Jean-Paul; Nowak, Maciej A; Warchoł, Piotr
2013-05-01
We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large N limit, this equation generalizes the simple inviscid Burgers equation that has been obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial, in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (so-called hard-edge singularities) familiar in random-matrix theory.
Statistical Methods in AI: Rare Event Learning Using Associative Rules and Higher-Order Statistics
NASA Astrophysics Data System (ADS)
Iyer, V.; Shetty, S.; Iyengar, S. S.
2015-07-01
Rare-event learning has not been actively researched until recently, owing to the unavailability of algorithms that deal with big samples. This research addresses spatio-temporal streams from multi-resolution sensors to find actionable items from the perspective of real-time algorithms. The computing framework is independent of the number of input samples, the application domain, and whether the streams are labelled or label-less. A sampling overlap algorithm such as Brooks-Iyengar is used for dealing with noisy sensor streams. We extend existing noise pre-processing algorithms using data-cleaning trees. Pre-processing with an ensemble of trees using bagging and multi-target regression showed robustness to random noise and missing data. As spatio-temporal streams are highly statistically correlated, we prove that temporal-window-based sampling from sensor data streams converges after n samples using Hoeffding bounds, which can be used for fast prediction of new samples in real time. The data-cleaning tree model uses a nonparametric node-splitting technique that can be learned iteratively and scales linearly in memory consumption for any size of input stream. The improved task-based ensemble extraction is compared with non-linear computation models using various SVM kernels for speed and accuracy. Using empirical datasets, we show that the explicit rule-learning computation is linear in time and depends only on the number of leaves present in the tree ensemble. The use of unpruned trees (t) in our proposed ensemble always yields a minimum number (m) of leaves, keeping pre-processing computation to n × t log m, compared with N² for the Gram matrix. We also show that task-based feature induction yields higher Quality of Data (QoD) in the feature space compared with kernel methods using the Gram matrix.
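The Hoeffding-bound argument fixes the window size: for i.i.d. observations bounded in [0, 1], n ≥ ln(2/δ)/(2ε²) samples guarantee that the window mean is within ε of the true mean with probability at least 1 − δ. A sketch with arbitrary ε and δ (not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(10)

def hoeffding_n(eps, delta):
    """Window size n such that P(|sample mean - true mean| > eps) <= delta
    for i.i.d. observations bounded in [0, 1] (two-sided Hoeffding bound)."""
    return int(np.ceil(np.log(2.0 / delta) / (2.0 * eps**2)))

n = hoeffding_n(eps=0.05, delta=0.01)   # -> 1060
# empirical check on a Bernoulli(0.3) stream: the bound is conservative
hits = sum(
    abs((rng.random(n) < 0.3).mean() - 0.3) <= 0.05 for _ in range(200)
)
```

Because the bound is distribution-free, the empirical failure rate is typically far below δ, which is what makes the window size safe for fast real-time prediction.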
Misiak, Piotr; Terlecki, Artur; Rzepkowska-Misiak, Beata; Wcisło, Szymon; Brocki, Marian
2014-02-01
Ingrowing nail, also known as onychocryptosis, is a common health problem. The disease mostly affects young people and often carries considerable socio-economic implications. It is a foot problem that usually manifests as inflammation of the tissue along the side of a toenail. The aim of the study was to assess and compare the effectiveness of electrocautery and phenol application for partial matrixectomy after partial nail extraction in the treatment of ingrown toenails. A group of 60 patients with ingrowing toenails was randomized into two groups and underwent partial matrixectomy in a surgical outpatient clinic between 2009 and 2013. The patients remained under surgical observation in the outpatient clinic for 100 days. Surgical success was achieved in all operated patients; however, there were 13 recurrences during the follow-up period, 5 in the phenolization group and 8 in the electrocoagulation group. There was a statistically significant difference between the two techniques, indicating that matrix phenolization is associated with a shorter healing time than matrix electrocoagulation.
Convergence to equilibrium under a random Hamiltonian.
Brandão, Fernando G S L; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K; Mozrzymas, Marek
2012-09-01
We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We find that the equilibration time is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.
Error Analysis of Deep Sequencing of Phage Libraries: Peptides Censored in Sequencing
Matochko, Wadim L.; Derda, Ratmir
2013-01-01
Next-generation sequencing techniques empower selection of ligands from phage-display libraries because they can detect low-abundance clones and quantify changes in the copy numbers of clones without excessive selection rounds. Identification of errors in deep sequencing data is the most critical step in this process because these techniques have error rates >1%. Mechanisms that yield errors in Illumina and other techniques have been proposed, but no reports to date describe error analysis in phage libraries. Our paper focuses on error analysis of 7-mer peptide libraries sequenced by the Illumina method. The low theoretical complexity of this phage library, as compared to the complexity of long genetic reads and genomes, allowed us to describe this library using a convenient linear vector and operator framework. We describe a phage library as an N × 1 frequency vector n = ||n_i||, where n_i is the copy number of the ith sequence and N is the theoretical diversity, that is, the total number of all possible sequences. Any manipulation of the library is an operator acting on n. Selection, amplification, or sequencing can be described as a product of an N × N matrix and a stochastic sampling operator (Sa). The latter is a random diagonal matrix that describes sampling of a library. In this paper, we focus on the properties of Sa and use them to define the sequencing operator (Seq). Sequencing without any bias and errors is Seq = Sa I_N, where I_N is an N × N unity matrix. Any bias in sequencing changes I_N to a non-unity matrix. We identified a diagonal censorship matrix (CEN), which describes elimination, or statistically significant downsampling, of specific reads during the sequencing process. PMID:24416071
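A toy version of this vector-and-operator bookkeeping is easy to write down; here the diversity N, the 10% sampling fraction, and the 200-copy censorship threshold are all invented for the sketch (the real 7-mer space has N = 20^7):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                                               # toy "theoretical diversity"
n = rng.integers(0, 1000, size=N).astype(float)     # copy-number vector

# Stochastic sampling operator Sa: a random diagonal matrix whose entries
# model the fraction of each clone that survives a sampling step
Sa = np.diag(rng.binomial(n.astype(int), 0.1) / np.maximum(n, 1.0))

# A censorship-style diagonal matrix: zero out clones below a copy threshold
CEN = np.diag((n > 200).astype(float))

sampled = Sa @ n            # unbiased sequencing acts as Seq = Sa @ I_N
censored = CEN @ sampled    # censorship eliminates specific entries
```

Because all the operators are diagonal in this toy, they commute; the paper's point is that biases replace the identity with a non-unity matrix, which this linear framework tracks cleanly.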
Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics
NASA Technical Reports Server (NTRS)
Pohorille, Andrew
2006-01-01
The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well-separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods have proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs.
The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
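A minimal parallel-tempering sketch on a one-dimensional double-well target illustrates the swap mechanism described above; the potential, temperature ladder, and step size are arbitrary illustrative choices:

```python
import math
import random

random.seed(7)

def U(x):                        # double-well "energy", minima at x = ±1.5
    return (x * x - 2.25) ** 2

temps = [1.0, 2.0, 4.0, 8.0]     # temperature ladder
x = [1.5] * len(temps)           # one walker per temperature
samples = []                     # cold-chain (T = 1) samples

for step in range(60000):
    # ordinary Metropolis move in each chain
    for i, T in enumerate(temps):
        prop = x[i] + random.gauss(0.0, 0.5)
        if random.random() < math.exp(min(0.0, -(U(prop) - U(x[i])) / T)):
            x[i] = prop
    # occasionally attempt to swap a random neighbouring pair (Metropolis criterion)
    if step % 10 == 0:
        i = random.randrange(len(temps) - 1)
        d = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (U(x[i]) - U(x[i + 1]))
        if random.random() < math.exp(min(0.0, d)):
            x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])
```

The hot chains cross the barrier easily and feed decorrelated states down the ladder, so the cold chain visits both wells even though its own barrier-crossing probability is tiny; this is the sense in which parallel tempering is a statistically correct cousin of simulated annealing.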
ERIC Educational Resources Information Center
Poon, Wai-Yin; Wong, Yuen-Kwan
2004-01-01
This study uses a Cook's distance type diagnostic statistic to identify unusual observations in a data set that unduly influence the estimation of a covariance matrix. Like many other deletion-type diagnostic statistics, the proposed measure is susceptible to masking or swamping effects in the presence of several unusual observations. In…
Bi-dimensional null model analysis of presence-absence binary matrices.
Strona, Giovanni; Ulrich, Werner; Gotelli, Nicholas J
2018-01-01
Comparing the structure of presence/absence (i.e., binary) matrices with those of randomized counterparts is a common practice in ecology. However, differences in the randomization procedures (null models) can affect the results of the comparisons, leading matrix structural patterns to appear either "random" or not. Subjectivity in the choice of one particular null model over another makes it often advisable to compare the results obtained using several different approaches. Yet, available algorithms to randomize binary matrices differ substantially with respect to the constraints they impose on the discrepancy between observed and randomized row and column marginal totals, which complicates the interpretation of contrasting patterns. This calls for new strategies both to explore intermediate scenarios of restrictiveness in-between extreme constraint assumptions, and to properly synthesize the resulting information. Here we introduce a new modeling framework based on a flexible matrix randomization algorithm (named the "Tuning Peg" algorithm) that addresses both issues. The algorithm consists of a modified swap procedure in which the discrepancy between the row and column marginal totals of the target matrix and those of its randomized counterpart can be "tuned" in a continuous way by two parameters (controlling, respectively, row and column discrepancy). We show how combining the Tuning Peg with a wise random walk procedure makes it possible to explore the complete null space embraced by existing algorithms. This exploration allows researchers to visualize matrix structural patterns in an innovative bi-dimensional landscape of significance/effect size.
We demonstrate the rationale and potential of our approach with a set of simulated and real matrices, showing how the simultaneous investigation of a comprehensive and continuous portion of the null space can be extremely informative, and possibly key to resolving longstanding debates in the analysis of ecological matrices. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
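The fully constrained end of the spectrum discussed above corresponds to the classic "swap" (checkerboard) algorithm, which the Tuning Peg modifies; a minimal sketch of that baseline (not of the Tuning Peg itself) looks like:

```python
import numpy as np

rng = np.random.default_rng(3)

def swap_randomize(m, n_swaps=10000):
    """Classic swap null model: repeatedly look for a 2x2 checkerboard
    submatrix ([[1,0],[0,1]] or [[0,1],[1,0]]) and flip it. Every flip
    preserves all row and column marginal totals exactly, i.e. the most
    constrained scenario, which the Tuning Peg relaxes continuously."""
    m = m.copy()
    rows, cols = m.shape
    for _ in range(n_swaps):
        r = rng.choice(rows, 2, replace=False)
        c = rng.choice(cols, 2, replace=False)
        sub = m[np.ix_(r, c)]
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            m[np.ix_(r, c)] = 1 - sub
    return m

obs = (rng.random((10, 12)) < 0.4).astype(int)   # toy presence/absence matrix
null = swap_randomize(obs)
```

Relaxing the requirement that the flipped submatrix be a perfect checkerboard is one way to let row/column discrepancies grow, which gives an intuition for how a tunable version can interpolate toward less constrained null models.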
The Performance Analysis Based on SAR Sample Covariance Matrix
Erten, Esra
2012-01-01
Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. Complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in different applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is simplified for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well. PMID:22736976
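The sampling setup described here is straightforward to simulate; a hedged sketch with a toy channel count and number of looks draws zero-mean circular complex Gaussian vectors, forms the sample covariance (complex Wishart up to normalization), and records its maximum eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(5)
p, n_looks = 3, 16          # channels and independent samples (invented values)

def sample_cov_max_eig(true_cov, n):
    """Max eigenvalue of the sample covariance of n zero-mean circular
    complex Gaussian p-vectors with covariance true_cov."""
    L = np.linalg.cholesky(true_cov)
    z = L @ (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
    C = z @ z.conj().T / n           # sample covariance matrix
    return np.linalg.eigvalsh(C)[-1]

true_cov = np.eye(p)
max_eigs = np.array([sample_cov_max_eig(true_cov, n_looks) for _ in range(5000)])
```

Even when the true covariance is the identity (all population eigenvalues equal to 1), the maximum sample eigenvalue is biased upward at a finite number of looks, which is exactly why its finite-sample distribution matters for detection thresholds.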
Mueller matrix mapping of biological polycrystalline layers using reference wave
NASA Astrophysics Data System (ADS)
Dubolazov, A.; Ushenko, O. G.; Ushenko, Yu. O.; Pidkamin, L. Y.; Sidor, M. I.; Grytsyuk, M.; Prysyazhnyuk, P. V.
2018-01-01
The paper consists of two parts. The first part is devoted to the short theoretical basics of the method of differential Mueller-matrix description of the properties of partially depolarizing layers. Experimentally measured maps of the first-order differential matrix of the polycrystalline structure of a histological section of brain tissue are provided. The statistical moments of the 1st-4th orders, which characterize the distribution of matrix elements, are defined. The second part of the paper provides data from a statistical analysis of the birefringence and dichroism of histological sections of mouse liver tissue (normal and diabetic). Objective criteria for the differential diagnosis of diabetes are defined.
Fushiki, Tadayoshi
2009-07-01
The correlation matrix is a fundamental statistic that is used in many fields. For example, GroupLens, a collaborative filtering system, uses the correlation between users for predictive purposes. Since the correlation is a natural similarity measure between users, the correlation matrix may be used as the Gram matrix in kernel methods. However, the estimated correlation matrix sometimes has a serious defect: although the correlation matrix is originally positive semidefinite, the estimated one may not be positive semidefinite when not all ratings are observed. To obtain a positive semidefinite correlation matrix, the nearest correlation matrix problem has recently been studied in the fields of numerical analysis and optimization. However, statistical properties are not explicitly used in such studies. To obtain a positive semidefinite correlation matrix, we assume the approximate model. By using the model, an estimate is obtained as the optimal point of an optimization problem formulated with information on the variances of the estimated correlation coefficients. The problem is solved by a convex quadratic semidefinite program. A penalized likelihood approach is also examined. The MovieLens data set is used to test our approach.
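As a rough stand-in for the paper's convex quadratic semidefinite program, a common quick fix is eigenvalue clipping followed by diagonal rescaling; this sketch illustrates the defect and a simple repair, not the authors' estimator:

```python
import numpy as np

def nearest_corr_clip(C, eps=1e-8):
    """Crude positive semidefinite repair: clip negative eigenvalues,
    then rescale so the diagonal is exactly 1. A simple stand-in for a
    proper nearest-correlation-matrix solver."""
    C = (C + C.T) / 2
    w, V = np.linalg.eigh(C)
    C_psd = V @ np.diag(np.clip(w, eps, None)) @ V.T
    d = np.sqrt(np.diag(C_psd))
    return C_psd / np.outer(d, d)

# An indefinite "correlation" matrix, as can arise from pairwise-complete ratings
C_bad = np.array([[1.0,  0.9,  0.7],
                  [0.9,  1.0, -0.9],
                  [0.7, -0.9,  1.0]])
C_fix = nearest_corr_clip(C_bad)
```

Unlike the paper's approach, this repair ignores the differing variances of the estimated correlation coefficients; it treats all entries as equally trustworthy, which is exactly the statistical information the quadratic semidefinite program is designed to exploit.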
Verification of unfold error estimates in the unfold operator code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehl, D.L.; Biggs, F.
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
Averages of ratios of the Riemann zeta-function and correlations of divisor sums
NASA Astrophysics Data System (ADS)
Conrey, Brian; Keating, Jonathan P.
2017-10-01
Nonlinearity has published articles containing a significant number-theoretic component since the journal was first established. We examine one thread, concerning the statistics of the zeros of the Riemann zeta function. We extend this by establishing a connection between the ratios conjecture for the Riemann zeta-function and a conjecture concerning correlations of convolutions of Möbius and divisor functions. Specifically, we prove that the ratios conjecture and an arithmetic correlations conjecture imply the same result. This provides new support for the ratios conjecture, which previously had been motivated by analogy with formulae in random matrix theory and by a heuristic recipe. Our main theorem generalises a recent calculation pertaining to the special case of two-over-two ratios.
Geometric and integrable aspects of random matrix models
NASA Astrophysics Data System (ADS)
Marchal, Olivier
2010-12-01
This thesis deals with the geometric and integrable aspects associated with random matrix models. Its purpose is to provide various applications of random matrix theory, from algebraic geometry to partial differential equations of integrable systems. The variety of these applications shows why matrix models are important from a mathematical point of view. First, the thesis will focus on the study of the merging of two intervals of the eigenvalues density near a singular point. Specifically, we will show why this special limit gives universal equations from the Painlevé II hierarchy of integrable systems theory. Then, following the approach of (bi) orthogonal polynomials introduced by Mehta to compute partition functions, we will find Riemann-Hilbert and isomonodromic problems connected to matrix models, making the link with the theory of Jimbo, Miwa and Ueno. In particular, we will describe how the Hermitian two-matrix models provide a degenerate case of Jimbo-Miwa-Ueno's theory that we will generalize in this context. Furthermore, the loop equations method, with its central notions of spectral curve and topological expansion, will lead to the symplectic invariants of algebraic geometry recently proposed by Eynard and Orantin. This last point will be generalized to the case of non-Hermitian matrix models (arbitrary beta), paving the way to "quantum algebraic geometry" and to the generalization of symplectic invariants to "quantum curves". Finally, this set up will be applied to combinatorics in the context of topological string theory, with the explicit computation of a Hermitian random matrix model enumerating the Gromov-Witten invariants of a toric Calabi-Yau threefold.
ERIC Educational Resources Information Center
Larwin, Karen H.; Larwin, David A.
2011-01-01
Bootstrapping methods and random distribution methods are increasingly recommended as better approaches for teaching students about statistical inference in introductory-level statistics courses. The authors examined the effect of teaching undergraduate business statistics students using random distribution and bootstrapping simulations. It is the…
NASA Astrophysics Data System (ADS)
Wilkinson, Michael; Grant, John
2018-03-01
We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm, ɛ.
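The multiply-and-reset process is simple to simulate; a sketch with 2×2 iid Gaussian factors, scaled by an invented prefactor of 1.5 so the Lyapunov exponent is comfortably positive (all parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(11)
eps0 = 1e-3                  # reset value: a small multiple of the identity
norms, n_resets = [], 0

M = eps0 * np.eye(2)
for _ in range(50000):
    A = 1.5 * rng.standard_normal((2, 2))   # iid random factor
    M = A @ M
    nrm = np.linalg.norm(M, 2)              # spectral norm of the product
    if nrm >= 1.0:
        M = eps0 * np.eye(2)                # reset rule from the abstract
        n_resets += 1
    else:
        norms.append(nrm)                   # record the norm while below unity
```

A histogram of `norms` on a log scale would approximate the stationary density the paper computes; with a positive Lyapunov exponent the process cycles through growth-and-reset episodes indefinitely.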
Derivatives of random matrix characteristic polynomials with applications to elliptic curves
NASA Astrophysics Data System (ADS)
Snaith, N. C.
2005-12-01
The value distribution of derivatives of characteristic polynomials of matrices from SO(N) is calculated at the point 1, the symmetry point on the unit circle of the eigenvalues of these matrices. We consider subsets of matrices from SO(N) that are constrained to have at least n eigenvalues equal to 1 and investigate the first non-zero derivative of the characteristic polynomial at that point. The connection between the values of random matrix characteristic polynomials and values of L-functions in families has been well established. The motivation for this work is the expectation that through this connection with L-functions derived from families of elliptic curves, and using the Birch and Swinnerton-Dyer conjecture to relate values of the L-functions to the rank of elliptic curves, random matrix theory will be useful in probing important questions concerning these ranks.
Complex Langevin simulation of a random matrix model at nonzero chemical potential
NASA Astrophysics Data System (ADS)
Bloch, J.; Glesaaen, J.; Verbaarschot, J. J. M.; Zafeiropoulos, S.
2018-03-01
In this paper we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.
Finite-time stability of neutral-type neural networks with random time-varying delays
NASA Astrophysics Data System (ADS)
Ali, M. Syed; Saravanan, S.; Zhu, Quanxin
2017-11-01
This paper is devoted to the finite-time stability analysis of neutral-type neural networks with random time-varying delays. The randomly time-varying delays are characterised by a Bernoulli stochastic variable. The result can be extended to the analysis and design of neutral-type neural networks with random time-varying delays. We construct a suitable Lyapunov-Krasovskii functional and establish a set of sufficient conditions, in terms of linear matrix inequalities, that guarantee the finite-time stability of the system concerned. By employing Jensen's inequality, the free-weighting matrix method and Wirtinger's double integral inequality, the proposed conditions are derived, and two numerical examples are given to demonstrate the effectiveness of the developed techniques.
Counting statistics of chaotic resonances at optical frequencies: Theory and experiments
NASA Astrophysics Data System (ADS)
Lippolis, Domenico; Wang, Li; Xiao, Yun-Feng
2017-07-01
A deformed dielectric microcavity is used as an experimental platform for the analysis of the statistics of chaotic resonances, in the perspective of testing fractal Weyl laws at optical frequencies. In order to surmount the difficulties that arise from reading strongly overlapping spectra, we exploit the mixed nature of the phase space at hand, and only count the high-Q whispering-gallery modes (WGMs) directly. That enables us to draw statistical information on the more lossy chaotic resonances, coupled to the high-Q regular modes via dynamical tunneling. Three different models [classical, Random-Matrix-Theory (RMT) based, semiclassical] to interpret the experimental data are discussed. On the basis of least-squares analysis, theoretical estimates of Ehrenfest time, and independent measurements, we find that a semiclassically modified RMT-based expression best describes the experiment in all its realizations, particularly when the resonator is coupled to visible light, while RMT alone still works quite well in the infrared. In this work we reexamine and substantially extend the results of a short paper published earlier [L. Wang et al., Phys. Rev. E 93, 040201(R) (2016), 10.1103/PhysRevE.93.040201].
NASA Astrophysics Data System (ADS)
Shvartsburg, Alexandre A.; Siu, K. W. Michael
2001-06-01
Modeling the delayed dissociation of clusters has been, over the last decade, a frontline development area in chemical physics. It is of fundamental interest how statistical kinetics methods previously validated for regular molecules and atomic nuclei may apply to clusters, as this would help to understand the transferability of statistical models for disintegration of complex systems across various classes of physical objects. From a practical perspective, accurate simulation of unimolecular decomposition is critical for the extraction of true thermochemical values from measurements on the decay of energized clusters. Metal clusters are particularly challenging because of the multitude of low-lying electronic states that are coupled to vibrations. This has previously been accounted for by assuming the average electronic structure of a conducting cluster, approximated by the levels of an electron in a cavity. While this provides a reasonable time-averaged description, it ignores the distribution of instantaneous electronic structures in a "boiling" cluster around that average. Here we set up a new treatment that incorporates the statistical distribution of electronic levels around the average picture using random matrix theory. This approach faithfully reflects the completely chaotic "vibronic soup" nature of hot metal clusters. We found that the consideration of electronic level statistics significantly promotes electronic excitation and thus increases the magnitude of its effect. As this excitation always depresses the decay rates, the inclusion of level statistics results in slower dissociation of metal clusters.
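The level-statistics ingredient can be illustrated with a standard random-matrix computation: nearest-neighbour spacings of a GOE matrix exhibit the level repulsion that such statistical treatments build on (a generic sketch of GOE statistics, not of the cluster model itself):

```python
import numpy as np

rng = np.random.default_rng(2)
N, reps = 200, 100
spacings = []
for _ in range(reps):
    G = rng.standard_normal((N, N))
    H = (G + G.T) / np.sqrt(2)           # GOE sample
    lam = np.linalg.eigvalsh(H)
    bulk = lam[N // 4: 3 * N // 4]       # keep the bulk, where density is roughly flat
    s = np.diff(bulk)
    spacings.extend(s / s.mean())        # crude local unfolding to unit mean spacing
spacings = np.array(spacings)
```

Against the Wigner surmise P(s) = (π/2) s exp(−πs²/4), small spacings are strongly suppressed (linear level repulsion), whereas uncorrelated Poisson levels would clump; it is this repulsion-shaped distribution of instantaneous levels that the average-structure picture misses.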
Time series, correlation matrices and random matrix models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinayak; Seligman, Thomas H.
2014-01-08
In this set of five lectures the authors have presented techniques to analyze open classical and quantum systems using correlation matrices. For diverse reasons we shall see that random matrices play an important role to describe a null hypothesis or a minimum information hypothesis for the description of a quantum system or subsystem. In the former case, we consider various forms of correlation matrices of time series associated with the classical observables of some system. The fact that such series are necessarily finite inevitably introduces noise, and this finite-time influence leads to a random or stochastic component in these time series. As a consequence, such correlation matrices have a random component, and corresponding ensembles are used. In the latter case we use random matrices to describe a high-temperature environment or uncontrolled perturbations, ensembles of differing chaotic systems, etc. The common theme of the lectures is thus the importance of random matrix theory in a wide range of fields in and around physics.
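The "null hypothesis" role of random correlation matrices can be sketched directly: for finite iid time series, the sample correlation matrix has spurious structure whose eigenvalues fall, to good approximation, inside the Marchenko-Pastur band (the dimensions below are toy choices):

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 50, 400                      # number of series and series length
X = rng.standard_normal((N, T))     # finite iid "time series": the null case
C = np.corrcoef(X)                  # sample correlation matrix
eigs = np.linalg.eigvalsh(C)

# Marchenko-Pastur band edges for aspect ratio q = N/T (unit variance)
q = N / T
mp_lo, mp_hi = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
```

Eigenvalues of a measured correlation matrix that escape this band signal genuine correlations; those inside it are compatible with the minimum-information hypothesis, which is the practical use of the ensembles described in the lectures.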
Gupta, Swyeta Jain; Jhingran, Rajesh; Gupta, Vivek; Bains, Vivek Kumar; Madan, Rohit; Rizvi, Iram
2014-07-01
To evaluate and compare the efficacy of platelet-rich fibrin (PRF) with enamel matrix derivative (EMD; Emdogain) in the treatment of periodontal intrabony defects in patients with chronic periodontitis, six months after surgery. Forty-four (44) intrabony defects in 30 patients (15 males) were randomly allocated into two treatment groups: EMD (n = 22) and PRF (n = 22). Measurement of the defects was done using clinical and cone beam computed tomography at baseline and 6 months. Clinical and radiographic parameters such as probing depth, clinical attachment level, intrabony defect depth and defect angle were recorded at baseline and 6 months post-operatively. Within-group change was evaluated using the Wilcoxon signed rank test. Intergroup comparisons were made using the Mann-Whitney U test. Postsurgical measurements revealed that there was an equal reduction in probing depth and a greater but statistically non-significant attachment gain for the Emdogain group when compared to the platelet-rich fibrin group. The Emdogain group presented with significantly greater percentage defect resolution (43.07% ± 12.21) than did the platelet-rich fibrin group (32.41% ± 14.61). Post-operatively the changes in defect width and defect angle were significant in both groups, but upon intergroup comparison the differences were not statistically significant. Both Emdogain and platelet-rich fibrin were effective in the regeneration of intrabony defects. Emdogain was significantly superior in terms of percentage defect resolution.
A note on variance estimation in random effects meta-regression.
Sidik, Kurex; Jonkman, Jeffrey N
2005-01-01
For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
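A hedged numpy sketch of the contrast drawn above, model-based versus robust (sandwich) variance for weighted meta-regression with a working covariance matrix; the data-generating numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(8)
k = 20                                        # number of studies
x = rng.uniform(0, 1, k)                      # one moderator
X = np.column_stack([np.ones(k), x])          # design matrix
tau2 = 0.05                                   # between-study variance
v = rng.uniform(0.01, 0.2, k)                 # within-study variances
y = 0.3 + 0.5 * x + rng.normal(0, np.sqrt(tau2 + v))

W = np.diag(1.0 / (v + tau2))                 # estimated inverse-variance weights
A = X.T @ W @ X
beta = np.linalg.solve(A, X.T @ W @ y)        # weighted least-squares estimate

# Model-based variance: treats the estimated weights as exactly correct
V_model = np.linalg.inv(A)

# Robust sandwich variance: treats W as only a *working* covariance model
r = y - X @ beta
B = X.T @ W @ np.diag(r ** 2) @ W @ X
V_robust = np.linalg.inv(A) @ B @ np.linalg.inv(A)
```

The sandwich form remains consistent even when the weights are misestimated, which is the seeming appeal discussed in the note; the note's finding is that, in practice, the Knapp-Hartung adjustment (not shown here) outperforms it.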
NASA Astrophysics Data System (ADS)
Decraene, Carolina; Dijckmans, Arne; Reynders, Edwin P. B.
2018-05-01
A method is developed for computing the mean and variance of the diffuse field sound transmission loss of finite-sized layered wall and floor systems that consist of solid, fluid and/or poroelastic layers. This is achieved by coupling a transfer matrix model of the wall or floor to statistical energy analysis subsystem models of the adjacent room volumes. The modal behavior of the wall is approximately accounted for by projecting the wall displacement onto a set of sinusoidal lateral basis functions. This hybrid modal transfer matrix-statistical energy analysis method is validated on multiple wall systems: a thin steel plate, a polymethyl methacrylate panel, a thick brick wall, a sandwich panel, a double-leaf wall with poro-elastic material in the cavity, and a double glazing. The predictions are compared with experimental data and with results obtained using alternative prediction methods such as the transfer matrix method with spatial windowing, the hybrid wave based-transfer matrix method, and the hybrid finite element-statistical energy analysis method. These comparisons confirm the prediction accuracy of the proposed method and the computational efficiency against the conventional hybrid finite element-statistical energy analysis method.
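As a zeroth-order sanity check for any such layered model, the normal-incidence mass law gives the transmission loss of a single limp panel; the gypsum-board surface mass below is an illustrative value, and this sketch is in no way the paper's hybrid modal transfer matrix-statistical energy analysis method:

```python
import math

rho0, c0 = 1.21, 343.0           # air density [kg/m^3] and speed of sound [m/s]

def mass_law_tl(f, m_surf):
    """Normal-incidence mass-law transmission loss [dB] of a limp panel of
    surface mass m_surf [kg/m^2] at frequency f [Hz]."""
    z = math.pi * f * m_surf / (rho0 * c0)
    return 10.0 * math.log10(1.0 + z * z)

# ~10 kg/m^2, an illustrative gypsum-board-like panel
tl_1k = mass_law_tl(1000.0, 10.0)
tl_2k = mass_law_tl(2000.0, 10.0)
```

The characteristic +6 dB/octave slope of the mass law is a useful baseline; finite size, modal behavior, and diffuse-field incidence (the effects the hybrid method captures) all show up as deviations from it.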
NASA Astrophysics Data System (ADS)
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
NASA Astrophysics Data System (ADS)
Ushenko, V. O.; Koval, G. D.; Ushenko, Yu. O.; Pidkamin, L. Y.; Sidor, M. I.; Vanchuliak, O.; Motrich, A. V.; Gorsky, M. P.; Meglinskiy, I.
2017-09-01
The paper presents the results of Jones-matrix mapping of uterine wall histological sections with second-degree and third-degree endometriosis. A technique for the experimental measurement of coordinate distributions of the modulus and phase values of Jones matrix elements is suggested. Within the statistical and cross-correlation approaches, the modulus and phase maps of Jones-matrix images of optically thin biological layers of polycrystalline films of plasma and cerebrospinal fluid are analyzed. A set of objective parameters (statistical and generalized correlation moments) that are most sensitive to changes in phase anisotropy, associated with the features of the polycrystalline structure of uterine wall histological sections with second-degree and third-degree endometriosis, is determined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faessler, Amand; Rodin, V.; Fogli, G. L.
2009-03-01
The variances and covariances associated with the nuclear matrix elements of neutrinoless double beta decay (0νββ) are estimated within the quasiparticle random phase approximation. It is shown that correlated uncertainties of the nuclear matrix elements play an important role in the comparison of 0νββ decay rates for different nuclei, and that they are degenerate with the uncertainty in the reconstructed Majorana neutrino mass.
Localized motion in random matrix decomposition of complex financial systems
NASA Astrophysics Data System (ADS)
Jiang, Xiong-Fei; Zheng, Bo; Ren, Fei; Qiu, Tian
2017-04-01
With random matrix theory, we decompose the multi-dimensional time series of complex financial systems into a set of orthogonal eigenmode functions, which are classified into the market mode, sector mode, and random mode. In particular, the localized motion generated by the business sectors plays an important role in financial systems. Both the business sectors and their impact on the stock market are identified from the localized motion. We clarify that the localized motion induces different characteristics of the time correlations for the stock-market index and individual stocks. With a variation of a two-factor model, we reproduce the return-volatility correlations of the eigenmodes.
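The eigenmode decomposition described above can be illustrated with a minimal sketch on synthetic data (the one-factor model and all parameters below are illustrative, not the paper's): the largest eigenvalue of the return correlation matrix, the market mode, separates clearly from the Marchenko-Pastur bulk expected for purely random correlations.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 2000, 50                      # time points, number of stocks

# Synthetic returns: one common "market" factor plus idiosyncratic noise.
market = rng.standard_normal(T)
returns = 0.4 * market[:, None] + rng.standard_normal((T, N))

C = np.corrcoef(returns, rowvar=False)   # N x N correlation matrix
evals, evecs = np.linalg.eigh(C)         # eigenvalues in ascending order

# Marchenko-Pastur upper edge for a purely random correlation matrix.
q = N / T
lam_max = (1 + np.sqrt(q)) ** 2

# The largest eigenvalue (market mode) sticks out of the random bulk.
print(evals[-1], lam_max)
```

The eigenvector belonging to the deviating eigenvalue has roughly uniform components, as expected for a mode common to all stocks.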
Comparing the structure of an emerging market with a mature one under global perturbation
NASA Astrophysics Data System (ADS)
Namaki, A.; Jafari, G. R.; Raei, R.
2011-09-01
In this paper we investigate the Tehran stock exchange (TSE) and Dow Jones Industrial Average (DJIA) in terms of perturbed correlation matrices. There are two methods of perturbing a stock market: local and global perturbation. In the local method, we replace a correlation coefficient of the cross-correlation matrix with one calculated from two Gaussian-distributed time series, whereas in the global method, we reconstruct the correlation matrix after replacing the original return series with Gaussian-distributed time series. The local perturbation serves only as a technical check. We analyze these markets through two statistical approaches, random matrix theory (RMT) and the correlation coefficient distribution. By using RMT, we find that the largest eigenvalue reflects an influence common to all stocks and that this eigenvalue peaks during financial shocks. We find that a few strongly correlated stocks provide the essential robustness of the stock market; by replacing their return time series with Gaussian-distributed time series, the mean correlation coefficient, the largest eigenvalue and the fraction of eigenvalues deviating from the RMT prediction all fall sharply in both markets. By comparing these two markets, we can see that the DJIA is more sensitive to global perturbations. These findings are crucial for risk management and portfolio selection.
Evaluation of different rotary devices on bone repair in rabbits.
Ribeiro Junior, Paulo Domingos; Barleto, Christiane Vespasiano; Ribeiro, Daniel Araki; Matsumoto, Mariza Akemi
2007-01-01
In oral surgery, the quality of bone repair may be influenced by several factors that can increase the morbidity of the procedure. The type of equipment used for ostectomy can directly affect bone healing. The aim of this study was to evaluate bone repair of mandible bone defects prepared in rabbits using three different rotary devices. Fifteen New Zealand rabbits were randomly assigned to 3 groups (n=5) according to type of rotary device used to create bone defects: I--pneumatic low-speed rotation engine, II--pneumatic high-speed rotation engine, and III--electric low-speed rotation engine. The anatomic pieces were surgically obtained after 2, 7 and 30 days and submitted to histological and morphometric analysis. The morphometric results were expressed as the total area of bone remodeling matrix using an image analysis system. The area of bone remodeling matrix increased over the course of the experiment. No statistically significant differences (p>0.05) were observed among the groups at the three sacrifice time points considering the total area of bone mineralized matrix, although the histological analysis showed a slightly advanced bone repair in group III compared to the other two groups. The findings of the present study suggest that the type of rotary device used in oral and maxillofacial surgery does not interfere with the bone repair process.
Mohebifar, Rafat; Hasani, Hana; Barikani, Ameneh; Rafiei, Sima
2016-08-01
Providing high service quality is one of the main functions of health systems. Measuring service quality is the basic prerequisite for improving quality. The aim of this study was to evaluate the quality of service in teaching hospitals using the importance-performance analysis matrix. A descriptive-analytic study was conducted through a cross-sectional method in six academic hospitals of Qazvin, Iran, in 2012. A total of 360 patients contributed to the study. The sampling technique was stratified random sampling. Required data were collected based on a standard questionnaire (SERVQUAL). Data analysis was done through SPSS version 18 statistical software and the importance-performance analysis matrix. The results showed a significant gap between importance and performance in all five dimensions of service quality (p < 0.05). In reviewing the gap, the "reliability" (2.36) and "assurance" (2.24) dimensions had the highest quality gap and "responsiveness" had the lowest gap (1.97). Also, according to the findings, reliability and assurance were in Quadrant (I), empathy was in Quadrant (II), and tangibles and responsiveness were in Quadrant (IV) of the importance-performance matrix. The negative gap in all dimensions of quality shows that quality improvement is necessary in all dimensions. Using quality and diagnostic measurement instruments such as importance-performance analysis will help hospital managers plan service quality improvements and achieve long-term goals.
Stochastic-Strength-Based Damage Simulation of Ceramic Matrix Composite Laminates
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Mital, Subodh K.; Murthy, Pappu L. N.; Bednarcyk, Brett A.; Pineda, Evan J.; Bhatt, Ramakrishna T.; Arnold, Steven M.
2016-01-01
The Finite Element Analysis-Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program was used to characterize and predict the progressive damage response of silicon-carbide-fiber-reinforced reaction-bonded silicon nitride matrix (SiC/RBSN) composite laminate tensile specimens. Studied were unidirectional laminates [0]_8, [10]_8, [45]_8, and [90]_8; cross-ply laminates [0_2/90_2]_s; angle-ply laminates [+45_2/-45_2]_s; double-edge-notched [0]_8 laminates; and central-hole laminates. Results correlated well with the experimental data. This work was performed as a validation and benchmarking exercise of the FEAMAC/CARES program. FEAMAC/CARES simulates stochastic-based discrete-event progressive damage of ceramic matrix composite and polymer matrix composite material structures. It couples three software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating-unit-cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC, and Abaqus is used to model the overall composite structure. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events that incrementally progress until ultimate structural failure.
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard; Felus, Yaron A.
2008-06-01
The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model Y − E_Y = (X − E_X) · Ξ that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach, where only the observation matrix, Y, is perturbed by random errors, and, on the other hand, the data least-squares approach, where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new 'closed form' solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of "symmetry" in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
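The SVD-based closed form mentioned above can be sketched compactly. The code below uses the classical multivariate TLS formula Ξ = -V12 · V22^{-1}, where V comes from the SVD of the augmented matrix [X Y]; it is an illustrative sketch with made-up data, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def mtls(X, Y):
    """Multivariate total least squares: estimate Xi in
    (Y - E_Y) = (X - E_X) @ Xi, with random errors in both X and Y.
    Classical closed form from the SVD of the augmented matrix [X Y]."""
    n = X.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([X, Y]), full_matrices=False)
    V = Vt.T
    V12 = V[:n, n:]       # top-right block (spans the smallest singular values)
    V22 = V[n:, n:]       # bottom-right block
    return -V12 @ np.linalg.inv(V22)

# Recover a known parameter matrix from noisy X and Y.
Xi_true = np.array([[1.0, 2.0], [-0.5, 0.3], [0.7, -1.2]])
X0 = rng.standard_normal((200, 3))
X = X0 + 0.01 * rng.standard_normal(X0.shape)
Y = X0 @ Xi_true + 0.01 * rng.standard_normal((200, 2))
print(np.round(mtls(X, Y), 2))
```

With small error levels the estimate is close to Xi_true; unlike ordinary least squares, the errors in X are treated on the same footing as those in Y.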
Record statistics of a strongly correlated time series: random walks and Lévy flights
NASA Astrophysics Data System (ADS)
Godrèche, Claude; Majumdar, Satya N.; Schehr, Grégory
2017-08-01
We review recent advances on the record statistics of strongly correlated time series, whose entries denote the positions of a random walk or a Lévy flight on a line. After a brief survey of the theory of records for independent and identically distributed random variables, we focus on random walks. During the last few years, it was indeed realized that random walks are a very useful ‘laboratory’ to test the effects of correlations on the record statistics. We start with the simple one-dimensional random walk with symmetric jumps (both continuous and discrete) and discuss in detail the statistics of the number of records, as well as of the ages of the records, i.e. the lapses of time between two successive record breaking events. Then we review the results that were obtained for a wide variety of random walk models, including random walks with a linear drift, continuous time random walks, constrained random walks (like the random walk bridge) and the case of multiple independent random walkers. Finally, we discuss further observables related to records, like the record increments, as well as some questions raised by physical applications of record statistics, like the effects of measurement error and noise.
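The record-counting setup reviewed above can be illustrated numerically. The sketch below (illustrative, not from the review) counts records of Gaussian random walks and compares the mean record number with the universal large-n result for symmetric continuous jumps, mean R_n ~ sqrt(4n/pi).

```python
import numpy as np

rng = np.random.default_rng(3)

def count_records(walk):
    """Number of records (new maxima) along one trajectory; the starting
    point counts as the first record."""
    running_max = np.maximum.accumulate(walk)
    # A record occurs wherever the running maximum strictly increases.
    return 1 + np.count_nonzero(running_max[1:] > running_max[:-1])

n_steps, n_walks = 1000, 5000
steps = rng.standard_normal((n_walks, n_steps))
walks = np.cumsum(steps, axis=1)
walks = np.concatenate([np.zeros((n_walks, 1)), walks], axis=1)

mean_records = np.mean([count_records(w) for w in walks])
# Universal result for symmetric continuous jump distributions (Sparre
# Andersen universality): <R_n> ~ sqrt(4 n / pi) for large n.
print(mean_records, np.sqrt(4 * n_steps / np.pi))
```

Because the result is independent of the jump distribution (as long as it is continuous and symmetric), replacing the Gaussian steps with, say, Cauchy-distributed steps leaves the mean record number essentially unchanged.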
Stability and dynamical properties of material flow systems on random networks
NASA Astrophysics Data System (ADS)
Anand, K.; Galla, T.
2009-04-01
The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focussing on finitely connected Erdős-Rényi graphs, Small-World Networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements, and random structures of the underlying graph tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.
Inflation with a graceful exit in a random landscape
NASA Astrophysics Data System (ADS)
Pedro, F. G.; Westphal, A.
2017-03-01
We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.
Correlation and volatility in an Indian stock market: A random matrix approach
NASA Astrophysics Data System (ADS)
Kulkarni, Varsha; Deo, Nivedita
2007-11-01
We examine the volatility of an Indian stock market in terms of correlation of stocks and quantify the volatility using the random matrix approach. First we discuss trends observed in the pattern of stock prices in the Bombay Stock Exchange for the three-year period 2000-2002. Random matrix analysis is then applied to study the relationship between the coupling of stocks and volatility. The study uses daily returns of 70 stocks for successive time windows of length 85 days for the year 2001. We compare the properties of the matrix C of correlations between price fluctuations in time regimes characterized by different volatilities. Our analyses reveal that (i) the largest (deviating) eigenvalue of C correlates highly with the volatility of the index, (ii) there is a shift in the distribution of the components of the eigenvector corresponding to the largest eigenvalue across regimes of different volatilities, (iii) the inverse participation ratio for this eigenvector anti-correlates significantly with the market fluctuations and finally, (iv) this eigenvector of C can be used to set up a Correlation Index (CI), whose temporal evolution is significantly correlated with the volatility of the overall market index.
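The inverse participation ratio used in point (iii) can be sketched in a few lines (illustrative, not the paper's code): it is roughly 1/N for a delocalized eigenvector spread over all N stocks, and close to 1 for a vector localized on a single stock.

```python
import numpy as np

def ipr(v):
    """Inverse participation ratio of an eigenvector: ~1/N when the
    vector is spread uniformly over N components, ~1 when localized."""
    v = v / np.linalg.norm(v)
    return np.sum(v ** 4)

# Delocalized (equal-weight) vector versus a fully localized one, for N = 70
# as in the study above.
n = 70
uniform = np.ones(n) / np.sqrt(n)
localized = np.zeros(n)
localized[0] = 1.0
print(ipr(uniform), ipr(localized))
```

A low IPR for the largest eigenvector thus signals a market-wide mode, consistent with its anti-correlation with market fluctuations.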
On statistical independence of a contingency matrix
NASA Astrophysics Data System (ADS)
Tsumoto, Shusaku; Hirano, Shoji
2005-03-01
A contingency table summarizes the conditional frequencies of two attributes and shows how these two attributes are dependent on each other, together with the information on a partition of the universe generated by these attributes. Thus, this table can be viewed as a relation between two attributes with respect to information granularity. This paper focuses on several characteristics of linear and statistical independence in a contingency table from the viewpoint of granular computing, which shows that statistical independence in a contingency table is a special form of linear dependence. The discussions also show that when a contingency table is viewed as a matrix, called a contingency matrix, its rank is equal to 1. Thus, the rank, as a measure of the degree of independence, plays a very important role in extracting a probabilistic model from a given contingency table. Furthermore, it is found that in some cases, partial rows or columns will satisfy the condition of statistical independence, which can be viewed as a solving process of Diophantine equations.
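The central observation, that statistical independence makes the contingency matrix a rank-1 outer product of its marginals, can be checked directly. A minimal sketch with illustrative numbers:

```python
import numpy as np

# Under statistical independence the joint frequencies factorize,
# n_ij = n * p_i * q_j, so the contingency matrix is an outer product
# of the two marginal distributions and therefore has rank 1.
p = np.array([0.2, 0.5, 0.3])     # marginal distribution of attribute A
q = np.array([0.6, 0.4])          # marginal distribution of attribute B
independent = 1000 * np.outer(p, q)
print(np.linalg.matrix_rank(independent))

# A dependent table generically has full rank.
dependent = np.array([[30.0, 10.0], [5.0, 55.0]])
print(np.linalg.matrix_rank(dependent))
```

The first rank is 1 and the second is 2, matching the paper's characterization of independence as a special form of linear dependence.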
Random Matrix Theory Approach to Chaotic Coherent Perfect Absorbers
NASA Astrophysics Data System (ADS)
Li, Huanan; Suwunnarat, Suwun; Fleischmann, Ragnar; Schanz, Holger; Kottos, Tsampikos
2017-01-01
We employ random matrix theory in order to investigate coherent perfect absorption (CPA) in lossy systems with complex internal dynamics. The loss strength γCPA and energy ECPA, for which a CPA occurs, are expressed in terms of the eigenmodes of the isolated cavity—thus carrying over the information about the chaotic nature of the target—and their coupling to a finite number of scattering channels. Our results are tested against numerical calculations using complex networks of resonators and chaotic graphs as CPA cavities.
Quantum chaos in ultracold collisions of gas-phase erbium atoms.
Frisch, Albert; Mark, Michael; Aikawa, Kiyotaka; Ferlaino, Francesca; Bohn, John L; Makrides, Constantinos; Petrov, Alexander; Kotochigova, Svetlana
2014-03-27
Atomic and molecular samples reduced to temperatures below one microkelvin, yet still in the gas phase, afford unprecedented energy resolution in probing and manipulating the interactions between their constituent particles. As a result of this resolution, atoms can be made to scatter resonantly on demand, through the precise control of a magnetic field. For simple atoms, such as alkalis, scattering resonances are extremely well characterized. However, ultracold physics is now poised to enter a new regime, where much more complex species can be cooled and studied, including magnetic lanthanide atoms and even molecules. For molecules, it has been speculated that a dense set of resonances in ultracold collision cross-sections will probably exhibit essentially random fluctuations, much as the observed energy spectra of nuclear scattering do. According to the Bohigas-Giannoni-Schmit conjecture, such fluctuations would imply chaotic dynamics of the underlying classical motion driving the collision. This would necessitate new ways of looking at the fundamental interactions in ultracold atomic and molecular systems, as well as perhaps new chaos-driven states of ultracold matter. Here we describe the experimental demonstration that random spectra are indeed found at ultralow temperatures. In the experiment, an ultracold gas of erbium atoms is shown to exhibit many Fano-Feshbach resonances, of the order of three per gauss for bosons. Analysis of their statistics verifies that their distribution of nearest-neighbour spacings is what one would expect from random matrix theory. The density and statistics of these resonances are explained by fully quantum mechanical scattering calculations that locate their origin in the anisotropy of the atoms' potential energy surface. Our results therefore reveal chaotic behaviour in the native interaction between ultracold atoms.
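The nearest-neighbour spacing analysis mentioned above can be sketched with the unfolding-free spacing-ratio statistic (an illustrative stand-in for the paper's actual analysis, not its code): levels drawn from the Gaussian orthogonal ensemble give a mean ratio near 0.53, while uncorrelated (Poisson) levels give about 0.39.

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_spacing_ratio(levels):
    """Mean ratio r = min(s_i, s_{i+1}) / max(s_i, s_{i+1}) of consecutive
    level spacings; this statistic needs no unfolding of the spectrum."""
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

# GOE spectrum: symmetrize a Gaussian matrix and diagonalize.
n = 400
A = rng.standard_normal((n, n))
goe_levels = np.linalg.eigvalsh((A + A.T) / 2)

# Uncorrelated (Poisson) levels for comparison.
poisson_levels = np.cumsum(rng.exponential(size=n))

# Known benchmarks: <r> ~ 0.5307 for GOE, ~ 0.3863 for Poisson.
print(mean_spacing_ratio(goe_levels), mean_spacing_ratio(poisson_levels))
```

The gap between the two values is what lets a measured resonance spectrum, such as the Fano-Feshbach resonances above, be classified as level-repelling (chaotic) rather than uncorrelated.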
Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance
NASA Astrophysics Data System (ADS)
Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman
2016-02-01
The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix {C}ij. Our analytical model shows that {C}ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales inversely with the number of modes that go into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal and Ensembles of Gaussian Random Ensembles we have quantified the effect of the trispectrum on the error variance {C}II. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the k range 0.3 ≤ k ≤ 1.0 Mpc-1, and can be even ˜200 times larger at k ˜ 5 Mpc-1. We also establish that the off-diagonal terms of {C}ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥0.5 Mpc-1), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by the current and upcoming EoR 21-cm experiments.
Anderson transition in a three-dimensional kicked rotor
NASA Astrophysics Data System (ADS)
Wang, Jiao; García-García, Antonio M.
2009-03-01
We investigate Anderson localization in a three-dimensional (3D) kicked rotor. By a finite-size scaling analysis we identify a mobility edge for a certain value of the kicking strength k = k_c. For k > k_c dynamical localization does not occur, all eigenstates are delocalized and the spectral correlations are well described by Wigner-Dyson statistics. This can be understood by mapping the kicked rotor problem onto a 3D Anderson model (AM) where a band of metallic states exists for sufficiently weak disorder. Around the critical region k ≈ k_c we carry out a detailed study of the level statistics and quantum diffusion. In agreement with the predictions of the one parameter scaling theory (OPT) and with previous numerical simulations, the number variance is linear, level repulsion is still observed, and quantum diffusion is anomalous with ⟨p²⟩ ∝ t^(2/3). We note that in the 3D kicked rotor the dynamics is not random but deterministic. In order to estimate the differences between these two situations we have studied a 3D kicked rotor in which the kinetic term of the associated evolution matrix is random. A detailed numerical comparison shows that the differences between the two cases are relatively small. However in the deterministic case only a small set of irrational periods was used. A qualitative analysis of a much larger set suggests that deviations between the random and the deterministic kicked rotor can be important for certain choices of periods. Heuristically it is expected that localization effects will be weaker in a nonrandom potential since destructive interference will be less effective to arrest quantum diffusion. However we have found that certain choices of irrational periods enhance Anderson localization effects.
Data-Driven Learning of Q-Matrix
ERIC Educational Resources Information Center
Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang
2012-01-01
The recent surge of interest in cognitive assessment has led to the development of novel statistical models for diagnostic classification. Central to many such models is the well-known "Q"-matrix, which specifies the item-attribute relationships. This article proposes a data-driven approach to identification of the "Q"-matrix and estimation of…
Network trending; leadership, followership and neutrality among companies: A random matrix approach
NASA Astrophysics Data System (ADS)
Mobarhan, N. S. Safavi; Saeedi, A.; Roodposhti, F. Rahnamay; Jafari, G. R.
2016-11-01
In this article, we analyze the cross-correlation between returns of different stocks to answer the following important questions. The first is: if there exists collective behavior in a financial market, how could we detect it? The second is: is there a particular company among the companies of a market that leads the collective behavior, or is there no specified leadership governing the system, as in some complex systems? We use random matrix theory to answer these questions. The cross-correlation matrix of index returns of four different markets is analyzed. The participation ratio of each matrix's eigenvectors and the eigenvalue spectrum are calculated. We introduce a shuffled matrix, created from the cross-correlation matrix by randomly displacing its elements. Comparing the participation ratios obtained from a market's correlation matrix and its shuffled counterpart in the bulk region of the eigenvalue distribution, we detect a meaningful deviation between the two quantities, indicating the collective behavior of the companies forming the market. By calculating the relative deviation of participation ratios, we obtain a measure to compare markets according to their collective behavior. Answering the second question, we show there are three groups of companies: the first group, called leaders, has a higher impact on the market trend; the second group consists of followers; and the third consists of companies that play no considerable role in the trend. The results can be utilized in portfolio construction.
Differential 3D Mueller-matrix mapping of optically anisotropic depolarizing biological layers
NASA Astrophysics Data System (ADS)
Ushenko, O. G.; Grytsyuk, M.; Ushenko, V. O.; Bodnar, G. B.; Vanchulyak, O.; Meglinskiy, I.
2018-01-01
The paper consists of two parts. The first part is devoted to a short theoretical basis of the method of differential Mueller-matrix description of the properties of partially depolarizing layers. Experimentally measured maps of the second-order differential matrix of the polycrystalline structure of histological sections of rectum wall tissue are provided. The values of the 1st-4th order statistical moments, which characterize the distributions of the matrix elements, are determined. The second part of the paper provides data from a statistical analysis of the birefringence and dichroism of histological sections of the connective tissue component of vagina wall tissue (normal and with prolapse). Objective criteria for the differential diagnosis of vagina wall pathologies are defined.
A Deep Stochastic Model for Detecting Community in Complex Networks
NASA Astrophysics Data System (ADS)
Fu, Jingcheng; Wu, Jianliang
2017-01-01
Discovering community structures is an important step to understanding the structure and dynamics of real-world networks in social science, biology and technology. In this paper, we develop a deep stochastic model based on non-negative matrix factorization to identify communities, in which there are two sets of parameters. One is the community membership matrix, whose elements in a row give the probabilities that the corresponding node belongs to each of the given number of communities; the other is the community-community connection matrix, whose element in the i-th row and j-th column represents the probability of there being an edge between a randomly chosen node from the i-th community and a randomly chosen node from the j-th community. The parameters can be evaluated by an efficient updating rule, and its convergence can be guaranteed. The community-community connection matrix in our model is more precise than that in traditional non-negative matrix factorization methods. Furthermore, the method called symmetric non-negative matrix factorization is a special case of our model. Finally, based on experiments on both synthetic and real-world network data, we demonstrate that our algorithm is highly effective in detecting communities.
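The special case mentioned above, symmetric non-negative matrix factorization, admits a compact sketch. The damped multiplicative update below is a standard SNMF rule, not necessarily the paper's algorithm, and the planted two-community network is illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def symmetric_nmf(A, k, n_iter=500):
    """Symmetric NMF A ~ H @ H.T via the damped multiplicative update
    H <- H * (1 - b + b * (A @ H) / (H @ H.T @ H)), a standard SNMF rule."""
    n = A.shape[0]
    H = rng.random((n, k))
    b = 0.5
    for _ in range(n_iter):
        num = A @ H
        den = H @ (H.T @ H) + 1e-12     # small constant avoids division by zero
        H *= 1.0 - b + b * num / den
    return H

# Two planted communities: dense blocks on the diagonal of the adjacency matrix.
blocks = np.kron(np.array([[0.9, 0.1], [0.1, 0.9]]), np.ones((10, 10)))
A = (rng.random((20, 20)) < blocks).astype(float)
A = np.triu(A, 1)
A = A + A.T                             # symmetric adjacency, no self-loops

H = symmetric_nmf(A, 2)
membership = H.argmax(axis=1)           # hard community assignment per node
print(membership)
```

Rows of H play the role of the membership matrix in the model above; the argmax gives a hard assignment, and the factorization captures most of the block structure of A.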
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter
In this paper, we present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example, finite-element methods, boundary element methods, and so on. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. Finally, this work is part of a more global effort, the STRUctured Matrices PACKage (STRUMPACK) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.
2016-06-30
RANDOM MATRIX DIAGONALIZATION--A COMPUTER PROGRAM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuchel, K.; Greibach, R.J.; Porter, C.E.
A computer program is described which generates random matrices, diagonalizes them, and appropriately sorts the resulting eigenvalues and eigenvector components. FAP and FORTRAN listings for the IBM 7090 computer are included. (auth)
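A modern analogue of the program's three steps (generate, diagonalize, sort) fits in a few lines of Python (illustrative; the original was FAP/FORTRAN for the IBM 7090):

```python
import numpy as np

rng = np.random.default_rng(6)

def random_symmetric(n, rng):
    """Random real symmetric (GOE-like) matrix."""
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

# Generate, diagonalize, and sort, the three steps the program performs.
M = random_symmetric(5, rng)
evals, evecs = np.linalg.eigh(M)     # eigh already returns ascending eigenvalues
order = np.argsort(evals)            # explicit sort, mirroring the original program
evals, evecs = evals[order], evecs[:, order]

# Each column of evecs is the eigenvector of the matching eigenvalue.
for lam, v in zip(evals, evecs.T):
    assert np.allclose(M @ v, lam * v)
print(np.round(evals, 3))
```

For symmetric matrices the sort is redundant with NumPy's `eigh`, which already orders eigenvalues; it is kept here to mirror the sorting step the abstract describes.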
SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos
NASA Astrophysics Data System (ADS)
Ahlfeld, R.; Belkouchi, B.; Montomoli, F.
2016-09-01
A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, which was previously introduced only as a tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules.
SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
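The core step of the moment-based construction, obtaining Gaussian quadrature nodes and weights from the Hankel matrix of moments, can be sketched as follows. This is a generic Golub-Welsch-style procedure under the abstract's assumption that the Hankel matrix is positive definite, not the authors' code; `gauss_from_moments` is a name invented here.

```python
import numpy as np

def gauss_from_moments(m):
    """n-point Gaussian quadrature from the moments m[0], ..., m[2n] of a
    distribution, via the Hankel matrix of moments (Golub-Welsch style).
    Assumes, as in the abstract, that the Hankel matrix is positive definite."""
    n = (len(m) - 1) // 2
    H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T                 # H = R^T R, R upper triangular
    # Three-term recurrence coefficients of the orthogonal polynomials
    alpha = np.zeros(n)
    beta = np.zeros(max(n - 1, 0))
    alpha[0] = R[0, 1] / R[0, 0]
    for k in range(1, n):
        alpha[k] = R[k, k + 1] / R[k, k] - R[k - 1, k] / R[k - 1, k - 1]
        beta[k - 1] = R[k, k] / R[k - 1, k - 1]
    # Eigen-decomposition of the Jacobi matrix gives nodes and weights
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, V = np.linalg.eigh(J)
    weights = m[0] * V[0, :] ** 2
    return nodes, weights

# Moments of the uniform density 1/2 on [-1, 1] recover the two-point
# Gauss-Legendre rule: nodes at +/- 1/sqrt(3), weights 1/2 each
nodes, weights = gauss_from_moments([1.0, 0.0, 1 / 3, 0.0, 1 / 5])
```

The rule is exact for polynomials up to degree 2n - 1, which can be checked against the input moments themselves.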
Trabulsi, Manal; Oh, Tae-Ju; Eber, Robert; Weber, Daniel; Wang, Hom-Lay
2004-11-01
Enamel matrix derivative (EMD) has been shown to promote periodontal wound healing and/or regeneration when applied to tooth root surfaces in soft tissue dehiscence models. In addition, guided tissue regeneration (GTR)-based root coverage using collagen membrane (GTRC) has shown promising results. However, limited information is available regarding how EMD may influence GTRC outcome. Twenty-six patients with Miller's Class I or II gingival recession defects of 2.5 mm were recruited for the study. Subjects were randomly assigned to receive either EMD + collagen (EMDC; test group) or collagen membrane (GTRC; control group). Clinical parameters, including plaque index (PI), gingival index (GI), relative clinical attachment levels (RCAL) to the stent, recession depth (RD), recession width (RW), probing depth (PD), gingival tissue thickness (GTT), and width of keratinized gingiva (KG) were assessed at baseline, and 3 and 6 months after surgery. A repeated-measures analysis of variance (ANOVA) was used to determine differences between treatment groups and time effect. Both treatments (GTRC and EMDC) resulted in a statistically significant decrease in RD and RW between baseline and 6 months (P <0.05). However, no difference was noted between treatment groups. The percent of root coverage after 6 months was 75% for GTRC and 63% for EMDC. Complete 100% root coverage was achieved in five patients in the GTRC group, compared to only one patient in the EMDC group. There was a statistically significant gain (P <0.05) in the clinical attachment level (CAL) between baseline and 6 months in both groups, as reflected in the RCAL data. No other significant differences were noted in other clinical parameters (PD, GTT, KG, GI, and PI). GTR-based root coverage utilizing collagen membrane, with or without enamel matrix derivative, can be successfully used in obtaining gingival recession coverage.
The application of EMD during GTRC procedures did not add additional benefit to the final clinical outcome.
Universality in the dynamical properties of seismic vibrations
NASA Astrophysics Data System (ADS)
Chatterjee, Soumya; Barat, P.; Mukherjee, Indranil
2018-02-01
We have studied the statistical properties of the observed magnitudes of seismic vibration data in discrete time in an attempt to understand the underlying complex dynamical processes. The observed magnitude data are taken from six different geographical locations. All possible magnitudes are considered in the analysis, including catastrophic vibrations, foreshocks, aftershocks and commonplace daily vibrations. The probability distribution functions of these data sets obey a scaling law and display a certain universality characteristic. To investigate the universality features in the observed data generated by a complex process, we applied Random Matrix Theory (RMT) in the framework of the Gaussian Orthogonal Ensemble (GOE). For all six places the observed data show a close fit with the predictions of RMT. This reinforces the idea of universality in the dynamical processes generating seismic vibrations.
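The GOE prediction that the abstract compares against can be reproduced numerically: for 2x2 GOE matrices, the normalised level spacing follows the Wigner surmise P(s) = (pi s / 2) exp(-pi s^2 / 4). A small illustrative sketch, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

# For 2x2 GOE matrices [[a, b], [b, d]] (diagonal variance 1, off-diagonal
# variance 1/2), the normalised eigenvalue spacing follows the Wigner
# surmise, whereas uncorrelated levels would give Poisson statistics.
n_samples = 20000
a = rng.standard_normal(n_samples)
d = rng.standard_normal(n_samples)
b = rng.standard_normal(n_samples) / np.sqrt(2)
gaps = np.sqrt((a - d) ** 2 + 4 * b ** 2)      # eigenvalue spacing
s = gaps / gaps.mean()                          # normalise to unit mean

# Empirical CDF at s = 1 vs the Wigner surmise value 1 - exp(-pi/4) ~ 0.544
frac_below_1 = np.mean(s < 1.0)
```

The level repulsion P(s) -> 0 as s -> 0 is what distinguishes the GOE fit from the Poisson case, where small spacings are most likely.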
Swain, Kalpana; Pattnaik, Satyanarayan; Mallick, Subrata; Chowdary, Korla Appana
2009-01-01
In the present investigation, a controlled release gastroretentive floating drug delivery system of theophylline was developed employing response surface methodology. A 3(2) randomized full factorial design was developed to study the effect of formulation variables, such as various viscosity grades and contents of hydroxypropyl methylcellulose (HPMC), and their interactions on response variables. The floating lag time for all nine experimental trial batches was less than 2 min, with a flotation time of more than 12 h. Theophylline release from the polymeric matrix system followed non-Fickian anomalous transport. Multiple regression analysis revealed that both viscosity and content of HPMC had a statistically significant influence on all dependent variables, but the effect of these variables was found to be nonlinear above certain threshold values.
A closed-form solution to tensor voting: theory and applications.
Wu, Tai-Pang; Yeung, Sai-Kit; Jia, Jiaya; Tang, Chi-Keung; Medioni, Gérard
2012-08-01
We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimensions, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed the structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require the random sampling consensus typically used in existing robust statistical techniques. We performed quantitative evaluation of its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. The extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.
Fuzzy Markov random fields versus chains for multispectral image segmentation.
Salzenstein, Fabien; Collet, Christophe
2006-11-01
This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.
A null model for Pearson coexpression networks.
Gobbi, Andrea; Jurman, Giuseppe
2015-01-01
Gene coexpression networks inferred by correlation from high-throughput profiling such as microarray data represent simple but effective structures for discovering and interpreting linear gene relationships. In recent years, several approaches have been proposed to tackle the problem of deciding when the resulting correlation values are statistically significant. This is most crucial when the number of samples is small, yielding a non-negligible chance that even high correlation values are due to random effects. Here we introduce a novel hard thresholding solution based on the assumption that a coexpression network inferred by randomly generated data is expected to be empty. The threshold is theoretically derived by means of an analytic approach and, as a deterministic independent null model, it depends only on the dimensions of the starting data matrix, with assumptions on the skewness of the data distribution compatible with the structure of gene expression levels data. We show, on synthetic and array datasets, that the proposed threshold is effective in eliminating all false positive links, with an offsetting cost in terms of false negative detected edges.
Complex Langevin simulation of a random matrix model at nonzero chemical potential
Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus J. M.; ...
2018-03-06
In this study we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.
Statistical Analysis of Big Data on Pharmacogenomics
Fan, Jianqing; Liu, Han
2013-01-01
This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrix for understanding correlation structure, inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differently expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecule mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905
Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs
NASA Astrophysics Data System (ADS)
Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.
2018-04-01
Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show a central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, component counts of random cubical complexes, while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.
Record statistics of financial time series and geometric random walks
NASA Astrophysics Data System (ADS)
Sabir, Behlool; Santhanam, M. S.
2014-09-01
The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the record statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The record statistics of the geometric random walk series are in good agreement with those obtained from empirical stock data.
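The record statistics of a geometric random walk described above are straightforward to simulate. The sketch below is an illustration, not the paper's code; `record_ages` and the drift/volatility values are choices made here.

```python
import numpy as np

rng = np.random.default_rng(7)

def record_ages(series):
    """Return the upper-record values and the ages (waiting times between
    successive records, plus the age of the still-open last record)."""
    records, ages = [], []
    current_max, last_t = -np.inf, 0
    for t, x in enumerate(series):
        if x > current_max:
            if records:                      # close the previous record
                ages.append(t - last_t)
            records.append(x)
            current_max, last_t = x, t
    ages.append(len(series) - last_t)        # age of the last record
    return np.array(records), np.array(ages)

# Geometric random walk: S_t = S_0 * exp(cumulative Gaussian log-returns)
log_returns = rng.normal(loc=0.0005, scale=0.01, size=5000)
prices = 100.0 * np.exp(np.cumsum(log_returns))
records, ages = record_ages(prices)
```

By construction the record values are strictly increasing, and the record ages partition the whole observation window.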
Wave Propagation inside Random Media
NASA Astrophysics Data System (ADS)
Cheng, Xiaojun
This thesis presents results of studies of wave scattering within and transmission through random and periodic systems. The main focus is on energy profiles inside quasi-1D and 1D random media. The connection between transport and the states of the medium is manifested in the equivalence of the dimensionless conductance, g, and the Thouless number, which is the ratio of the average linewidth and spacing of energy levels. This equivalence and theories regarding the energy profiles inside random media are based on the assumption that the LDOS is uniform throughout the samples. We have conducted microwave measurements of the longitudinal energy profiles within disordered samples contained in a copper tube supporting multiple waveguide channels, with an antenna moving along a slit on the tube. These measurements allow us to determine the local density of states (LDOS) at a location, which is the sum of energy from all incoming channels on both sides. For diffusive samples, the LDOS is uniform and the energy profile decays linearly as expected. However, for localized samples, we find that the LDOS drops sharply towards the middle of the sample and the energy profile does not follow the result of the local diffusion theory, where the LDOS is assumed to be uniform. We analyze the field spectra into quasi-normal modes and find that the mode linewidths and the number of modes saturate as the sample length increases. Thus the Thouless number saturates while the dimensionless conductance g continues to fall with increasing length, indicating that the modes are localized near the boundaries. This is in contrast to the general belief that g and the Thouless number follow the same scaling behavior. Previous measurements show that single parameter scaling (SPS) still holds in the same sample where the LDOS is suppressed (shi2014microwave).
We explore the extension of SPS to the interior of the sample by analyzing statistics of the logarithm of the energy density, ln W(x), and find that ⟨ln W(x)⟩ = -x/ℓ, where ℓ is the transport mean free path. The result does not depend on the sample length, which is counterintuitive yet remarkably simple. More surprisingly, the linear fall-off of the energy profile holds for totally disordered random 1D layered samples in simulations where the LDOS is uniform, as well as for single-mode random waveguide experiments and 1D nearly periodic samples where the LDOS is suppressed in the middle of the sample. The generalization of the transmission matrix to the interior of quasi-1D random samples, which is defined as the field matrix, and its eigenvalue statistics are also discussed. The maximum energy deposition at a location is not the intensity of the first transmission eigenchannel but the eigenvalue of the first energy density eigenchannel at that cross section, which can be much greater than the average value. The contrast, which is the ratio of the intensity at the focused point to the background intensity, in optimal focusing is determined by the participation number of the energy density eigenvalues, and its inverse gives the variance of the energy density at that cross section in a single configuration. We have also studied topological states in photonic structures. We have demonstrated robust propagation of electromagnetic waves along reconfigurable pathways within a topological photonic metacrystal. Since the wave is confined within the domain wall, which is the boundary between two distinct topological insulating systems, we can freely steer the wave by reconstructing the photonic structure. Other topics, such as speckle pattern evolutions and the effects of boundary conditions on the statistics of transmission eigenvalues and energy profiles, are also discussed.
Zero-inflated count models for longitudinal measurements with heterogeneous random effects.
Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M
2017-08-01
Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased estimates of covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.
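The zero-inflation mechanism itself is easy to illustrate by simulation. The following sketch is not the COMBINE analysis and uses arbitrary parameter values; it shows why the zero fraction of a zero-inflated Poisson exceeds that of a plain Poisson with the same rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-inflated Poisson: with probability pi the count is a structural zero,
# otherwise it is drawn from Poisson(lam).
pi, lam, n = 0.3, 2.0, 50000
structural_zero = rng.random(n) < pi
counts = np.where(structural_zero, 0, rng.poisson(lam, size=n))

# Expected zero fraction: structural zeros plus Poisson sampling zeros
expected_zero_frac = pi + (1 - pi) * np.exp(-lam)
observed_zero_frac = np.mean(counts == 0)
```

A plain Poisson(2.0) would give a zero fraction of only exp(-2) ~ 0.135, so the excess zeros are direct evidence of inflation.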
Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design
ERIC Educational Resources Information Center
Wagler, Amy; Wagler, Ron
2014-01-01
Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…
Li, Longbiao
2016-01-01
In this paper, the fatigue life of fiber-reinforced ceramic-matrix composites (CMCs) with different fiber preforms, i.e., unidirectional, cross-ply, 2D (two dimensional), 2.5D and 3D CMCs at room and elevated temperatures in air and oxidative environments, has been predicted using the micromechanics approach. An effective coefficient of the fiber volume fraction along the loading direction (ECFL) was introduced to describe the fiber architecture of preforms. The statistical matrix multicracking model and fracture mechanics interface debonding criterion were used to determine the matrix crack spacing and interface debonded length. Under cyclic fatigue loading, the fiber broken fraction was determined by combining the interface wear model and fiber statistical failure model at room temperature, and the interface/fiber oxidation model, interface wear model and fiber statistical failure model at elevated temperatures, based on the assumption that the fiber strength is subjected to a two-parameter Weibull distribution and the load carried by broken and intact fibers satisfies the Global Load Sharing (GLS) criterion. When the broken fiber fraction approaches the critical value, the composite fails by fatigue fracture. PMID:28773332
On efficient randomized algorithms for finding the PageRank vector
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Dmitriev, D. Yu.
2015-03-01
Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε: ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is the unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
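The first (Markov chain Monte Carlo) idea can be sketched on a toy graph: restart the random walk with probability 1 - d, and the distribution of the node at which the walk stops equals the PageRank vector. This is a minimal illustration under a standard damping-factor formulation, not the authors' algorithm (which additionally exploits the equal-row-element structure of P); the graph and parameter values are invented here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy directed graph: adjacency given as out-link lists (hypothetical example)
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = 4, 0.85

# Monte Carlo estimate: restart the surfer with probability 1 - d;
# the node at which the walk stops is distributed as the PageRank vector.
visits = np.zeros(n)
for _ in range(50000):
    node = int(rng.integers(n))
    while rng.random() < d:
        node = int(rng.choice(links[node]))
    visits[node] += 1
pr_mc = visits / visits.sum()

# Reference: power iteration on p^T <- (1 - d)/n * 1^T + d * p^T P
P = np.zeros((n, n))
for i, outs in links.items():
    P[i, outs] = 1.0 / len(outs)
p = np.full(n, 1.0 / n)
for _ in range(200):
    p = (1 - d) / n + d * (p @ P)
```

Each walk has expected length 1/(1 - d), so the cost per sample is independent of n, which is the appeal of the Monte Carlo route for huge sparse P.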
Forecasts of non-Gaussian parameter spaces using Box-Cox transformations
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.
2011-09-01
Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, performing the Fisher matrix calculation on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of weak-lensing three-point statistics to break this degeneracy is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
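The Gaussianizing step, a Box-Cox transform with a data-driven parameter, can be illustrated on a one-dimensional skewed sample. This sketch picks λ by a simple grid search minimizing skewness; the paper determines the Box-Cox parameters from a likelihood evaluation, so this is only an analogy, and the helper names are invented here.

```python
import numpy as np

rng = np.random.default_rng(3)

def skewness(x):
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 3)

def boxcox(x, lam):
    """Box-Cox transform for positive data; lam -> 0 is the log transform."""
    if abs(lam) < 1e-8:
        return np.log(x)
    return (x ** lam - 1.0) / lam

# Gaussianize a skewed (log-normal) sample: scan lambda on a grid and keep
# the transform with the smallest absolute skewness.
x = rng.lognormal(mean=0.0, sigma=0.8, size=10000)
lambdas = np.linspace(-1.0, 1.0, 41)
best_lam = min(lambdas, key=lambda l: abs(skewness(boxcox(x, l))))
y = boxcox(x, best_lam)
```

For log-normal data the optimal λ is close to 0 (the log transform), and the transformed sample is nearly symmetric, which is exactly the regime in which a Fisher-matrix Gaussian approximation becomes accurate.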
Entanglement spectrum of random-singlet quantum critical points
NASA Astrophysics Data System (ADS)
Fagotti, Maurizio; Calabrese, Pasquale; Moore, Joel E.
2011-01-01
The entanglement spectrum (i.e., the full distribution of Schmidt eigenvalues of the reduced density matrix) contains more information than the conventional entanglement entropy and has been studied recently in several many-particle systems. We compute the disorder-averaged entanglement spectrum in the form of the disorder-averaged moments Tr ρ_A^α of the reduced density matrix ρ_A for a contiguous block of many spins at the random-singlet quantum critical point in one dimension. The result compares well in the scaling limit with numerical studies on the random XX model and is also expected to describe the (interacting) random Heisenberg model. Our numerical studies on the XX case reveal that the dependence of the entanglement entropy and spectrum on the geometry of the Hilbert space partition is quite different than for conformally invariant critical points.
Exploring multicollinearity using a random matrix theory approach.
Feher, Kristen; Whelan, James; Müller, Samuel
2012-01-01
Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but rather characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with 'low' correlation may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
Random matrix theory and fund of funds portfolio optimisation
NASA Astrophysics Data System (ADS)
Conlon, T.; Ruskin, H. J.; Crane, M.
2007-08-01
The proprietary nature of Hedge Fund investing means that it is common practice for managers to release minimal information about their returns. The construction of a fund of hedge funds portfolio requires a correlation matrix which often has to be estimated using a relatively small sample of monthly returns data, which induces noise. In this paper, random matrix theory (RMT) is applied to a cross-correlation matrix C, constructed using hedge fund returns data. The analysis reveals a number of eigenvalues that deviate from the spectrum suggested by RMT. The components of the deviating eigenvectors are found to correspond to distinct groups of strategies that are applied by hedge fund managers. The inverse participation ratio is used to quantify the number of components that participate in each eigenvector. Finally, the correlation matrix is cleaned by separating the noisy part from the non-noisy part of C. This technique is found to greatly reduce the difference between the predicted and realised risk of a portfolio, leading to an improved risk profile for a fund of hedge funds.
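The cleaning step described above, separating the RMT noise band from deviating eigenvalues, can be sketched with the Marchenko-Pastur upper edge λ₊ = (1 + √(N/T))². This is a generic eigenvalue-clipping illustration on synthetic noise returns, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

# N assets, T monthly observations of pure-noise returns
N, T = 30, 120
q = N / T
returns = rng.standard_normal((T, N))
C = np.corrcoef(returns, rowvar=False)

# Marchenko-Pastur upper edge: eigenvalues below it are treated as noise
lam_plus = (1 + np.sqrt(q)) ** 2
eigval, eigvec = np.linalg.eigh(C)
noise = eigval < lam_plus
# Replace noise eigenvalues by their average (preserves the trace) and keep
# any "signal" eigenvalues that deviate above the edge
cleaned_vals = eigval.copy()
cleaned_vals[noise] = eigval[noise].mean()
C_clean = eigvec @ np.diag(cleaned_vals) @ eigvec.T
# Restore the unit diagonal required of a correlation matrix
diag = np.sqrt(np.diag(C_clean))
C_clean = C_clean / np.outer(diag, diag)
```

On real hedge fund data the eigenvalues above λ₊ would carry the strategy-group structure, while the clipped bulk removes the small-sample noise that distorts portfolio risk estimates.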
NASA Astrophysics Data System (ADS)
Kandrup, Henry E.
1988-06-01
This paper reexamines the statistical quantum field theory of a free, minimally coupled, real scalar field Φ in a statically bounded, classical Friedmann cosmology, where the time-dependent scale factor Ω(t) tends to constant values Ω1 and Ω2 for t
Communication Optimal Parallel Multiplication of Sparse Random Matrices
2013-02-21
Definition 2.1), and (2) the algorithm is sparsity-independent, where the computation is statically partitioned to processors independent of the sparsity structure of the input matrices (see Definition 2.5). The second assumption applies to nearly all existing algorithms for general sparse matrix-matrix...where A and B are n × n ER(d) matrices: Definition 2.1 An ER(d) matrix is an adjacency matrix of an Erdős-Rényi graph with parameters n and d/n. That
Medeiros Turra, Kely; Pineda Rivelli, Diogo; Berlanga de Moraes Barros, Silvia; Mesquita Pasqualoto, Kerly Fernanda
2016-07-01
A receptor-independent (RI) four-dimensional structure-activity relationship (4D-QSAR) formalism was applied to a set of sixty-four β-N-biaryl ether sulfonamide hydroxamate derivatives, previously reported as potent inhibitors against matrix metalloproteinase subtype 9 (MMP-9). MMP-9 belongs to a group of enzymes related to the cleavage of several extracellular matrix components and has been associated with cancer invasiveness/metastasis. The best RI 4D-QSAR model was statistically significant (N=47; r(2) =0.91; q(2) =0.83; LSE=0.09; LOF=0.35; outliers=0). Leave-N-out (LNO) and y-randomization approaches indicated that the QSAR model was robust and presented no chance correlation, respectively. Furthermore, it also had good external predictability (82 %) regarding the test set (N=17). In addition, the grid cell occupancy descriptors (GCOD) of the predicted bioactive conformation for the most potent inhibitor were successfully interpreted when docked into the MMP-9 active site. The 3D-pharmacophore findings were used to predict novel ligands and exploit the MMP-9 calculated binding affinity through a molecular docking procedure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Cho, Yi Je; Lee, Wookjin; Park, Yong Ho
2017-01-01
The elastoplastic deformation behaviors of hollow glass microspheres/iron syntactic foam under tension were modeled using a representative volume element (RVE) approach. The three-dimensional microstructures of the iron syntactic foam with 5 wt % glass microspheres were reconstructed using the random sequential adsorption algorithm. The constitutive behavior of the elastoplasticity in the iron matrix and the elastic-brittle failure for the glass microsphere were simulated in the models. An appropriate RVE size was statistically determined by evaluating elastic modulus, Poisson’s ratio, and yield strength in terms of model sizes and boundary conditions. The model was validated by the agreement with experimental findings. The tensile deformation mechanism of the syntactic foam considering the fracture of the microspheres was then investigated. In addition, the feasibility of introducing the interfacial debonding behavior to the proposed model was briefly investigated to improve the accuracy in depicting fracture behaviors of the syntactic foam. It is thought that the modeling techniques and the model itself have major potential for applications not only in the study of hollow glass microspheres/iron syntactic foams, but also for the design of composites with a high modulus matrix and high strength reinforcement. PMID:29048346
Improving stochastic estimates with inference methods: calculating matrix diagonals.
Selig, Marco; Oppermann, Niels; Ensslin, Torsten A
2012-02-01
Estimating the diagonal entries of a matrix that is not directly accessible but only available as a linear operator in the form of a computer routine is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or the computational costs of matrix probing methods for estimating matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes, in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real-world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method. © 2012 American Physical Society
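The probing baseline that these inference methods improve upon can be sketched as a Hutchinson-style diagonal estimator, diag(M) ≈ (1/K) Σ_k s_k ⊙ (M s_k) with random sign probes s_k; the paper's Wiener-filter smoothing of the probe data is not shown here, and all names are illustrative:

```python
import numpy as np

def estimate_diagonal(apply_M, n, n_probes=2000, seed=0):
    """Hutchinson-style probing estimate of diag(M): for random sign vectors s,
    E[s * (M s)] equals the diagonal of M."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(n)
    for _ in range(n_probes):
        s = rng.choice([-1.0, 1.0], size=n)
        acc += s * apply_M(s)          # elementwise product probes diag(M)
    return acc / n_probes

# demo: the estimator only ever calls the linear operator apply_M
M = np.diag([1.0, 2.0, 3.0, 4.0]) + 0.01 * np.ones((4, 4))
est = estimate_diagonal(lambda v: M @ v, 4)
print(np.round(est, 1))
```

In practice n_probes is small and M is expensive, which is exactly the regime where the paper's smoothing prior pays off.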
ERIC Educational Resources Information Center
Tintle, Nathan; Topliff, Kylie; VanderStoep, Jill; Holmes, Vicki-Lynn; Swanson, Todd
2012-01-01
Previous research suggests that a randomization-based introductory statistics course may improve student learning compared to the consensus curriculum. However, it is unclear whether these gains are retained by students post-course. We compared the conceptual understanding of a cohort of students who took a randomization-based curriculum (n = 76)…
Breaking time reversal in a simple smooth chaotic system.
Tomsovic, Steven; Ullmo, Denis; Nagano, Tatsuro
2003-06-01
Within random matrix theory, the statistics of the eigensolutions depend fundamentally on the presence (or absence) of time reversal symmetry. Accepting the Bohigas-Giannoni-Schmit conjecture, this statement extends to quantum systems with chaotic classical analogs. For practical reasons, much of the supporting numerical studies of symmetry breaking have been done with billiards or maps, and little with simple, smooth systems. There are two main difficulties in attempting to break time reversal invariance in a continuous time system with a smooth potential. The first is avoiding false time reversal breaking. The second is locating a parameter regime in which the symmetry breaking is strong enough to transform the fluctuation properties fully to the broken symmetry case, and yet remain weak enough so as not to regularize the dynamics sufficiently that the system is no longer chaotic. We give an example of a system of two coupled quartic oscillators whose energy level statistics closely match those of the Gaussian unitary ensemble, and which possesses only a minor proportion of regular motion in its phase space.
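The Gaussian unitary ensemble fluctuation statistics referenced here can be probed numerically, without unfolding, via the consecutive spacing-ratio diagnostic (a standard RMT tool, not the paper's own method; names are illustrative):

```python
import numpy as np

def gue_spacing_ratios(n=400, trials=10, seed=0):
    """Mean consecutive-spacing ratio <min(r, 1/r)> over GUE spectra.
    Reference values: ~0.60 for GUE versus ~0.39 for Poisson statistics."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(trials):
        X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        H = (X + X.conj().T) / 2             # draw a GUE matrix
        ev = np.linalg.eigvalsh(H)
        s = np.diff(ev)                      # nearest-neighbour spacings
        r = s[1:] / s[:-1]
        vals.append(np.minimum(r, 1.0 / r))  # unfolding-independent ratios
    return float(np.mean(np.concatenate(vals)))

val = gue_spacing_ratios()
print(round(val, 2))  # close to the GUE value ~0.60
```

The same statistic applied to the oscillator spectrum would quantify how fully the fluctuations have crossed over to the broken-symmetry class.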
NASA Astrophysics Data System (ADS)
Baez, J.; Lapidaryus, M.; Siegel, Edward Carl-Ludwig
2011-03-01
Riemann-hypothesis physics-proof combines: Siegel-Antonoff-Smith[AMS Joint Mtg.(2002)-Abs.973-03-126] digits on-average statistics Hill[Am. J. Math 123, 3, 887(1996)] logarithm-function's (1,0)-fixed-point base=units=scale-invariance proven Newcomb[Am. J. Math. 4, 39(1881)]-Weyl[Goett. Nachr.(1914); Math. Ann. 7, 313(1916)]-Benford[Proc. Am. Phil. Soc. 78, 4, 51(1938)]-law [Kac, Math. of Stat.-Reasoning(1955); Raimi, Sci. Am. 221, 109(1969)] algebraic-inversion to ONLY Bose-Einstein quantum-statistics(BEQS) with digit d = 0 gapFUL Bose-Einstein Condensation(BEC) insight that digits are quanta are bosons were always digits, via Siegel-Baez category-semantics tabular list-format matrix truth-table analytics in Plato-Aristotle classic "square-of-opposition" : FUZZYICS=CATEGORYICS/Category-Semantics, with Goodkind Bose-Einstein condensation(BEC) ABOVE ground-state with/and Rayleigh(cut-limit of "short-cut method";1870)-Polya(1922)-"Anderson"(1958) localization [Doyle and Snell, Random-Walks and Electrical-Networks, MAA(1981)-p.99-100!!!].
Laser diagnostics of native cervix dabs with human papilloma virus in high carcinogenic risk
NASA Astrophysics Data System (ADS)
Peresunko, O. P.; Karpenko, Ju. G.; Burkovets, D. N.; Ivashko, P. V.; Nikorych, A. V.; Yermolenko, S. B.; Gruia, Ion; Gruia, M. J.
2015-11-01
The results of experimental studies of coordinate distributions of Mueller matrix elements are presented for the following types of cervical scraping tissue: norm; low-grade to highly differentiated dysplasia (CIN1-CIN3); and adenocarcinoma of high, medium, and low levels of differentiation (G1-G3). The rationale is given for the choice of statistical moments of the 1st-4th orders of the polarized coherent radiation field, transformed as a result of interaction with the oncologically modified "epithelium-stroma" biological layers, as a quantitative criterion for polarimetric optical differentiation of the state of human biological tissues. An analysis of the obtained Mueller matrix elements by statistical and correlation methods, systematized by the types of tissues studied, is performed. The results for images of the Mueller matrix element m34 for the low-grade dysplasia pathology (CIN2), together with its statistical and correlation analysis, are presented.
Nishiyama, Yoshihiro
2002-12-01
It has been considered that the effective bending rigidity of fluid membranes should be reduced by thermal undulations. However, a recent thorough investigation by Pinnow and Helfrich revealed the significance of measure factors for the partition sum. Accepting the local curvature as a statistical measure, they found that fluid membranes are stiffened macroscopically. In order to examine this remarkable idea, we performed extensive ab initio simulations for a fluid membrane. We set up a transfer matrix that is diagonalized by means of the density-matrix renormalization group. Our method has the advantage that it allows us to survey various statistical measures. As a consequence, we found that the effective bending rigidity flows toward strong coupling under the choice of local curvature as a statistical measure. On the contrary, for other measures such as normal displacement and tilt angle, we found a clear tendency toward softening.
Management of gingival recession with acellular dermal matrix graft: A clinical study
Balaji, V. R.; Ramakrishnan, T.; Manikandan, D.; Lambodharan, R.; Karthikeyan, B.; Niazi, Thanvir Mohammed; Ulaganathan, G.
2016-01-01
Aims and Objectives: Obtaining root coverage has become an important part of periodontal therapy. The aims of this study are to evaluate the clinical efficacy of acellular dermal matrix graft in the coverage of denuded roots and also to examine the change in the width of keratinized gingiva. Materials and Methods: A total of 20 sites with more than or equal to 2 mm of recession depth were taken into the study, for treatment with acellular dermal matrix graft. The clinical parameters such as recession depth, recession width, width of keratinized gingiva, probing pocket depth (PD), and clinical attachment level (CAL) were measured at the baseline, 8th week, and at the end of the study (16th week). The defects were treated with a coronally positioned pedicle graft combined with acellular dermal matrix graft. Results: Out of 20 sites treated with acellular dermal matrix graft, seven sites showed complete root coverage (100%), and the mean root coverage obtained was 73.39%. There was a statistically significant reduction in recession depth, recession width, and probing PD. There was also a statistically significant increase in width of keratinized gingiva and also gain in CAL. The postoperative results were both clinically and statistically significant (P < 0.0001). Conclusion: The results of this study were esthetically acceptable to the patients and clinically acceptable in all cases. From this study, it may be concluded that acellular dermal matrix graft is an excellent substitute for autogenous graft in coverage of denuded roots. PMID:27829749
Gholami, Gholam Ali; Saberi, Arezoo; Kadkhodazadeh, Mahdi; Amid, Reza; Karami, Daryoosh
2013-01-01
Background: Different techniques have been proposed for the treatment of gingival recession. The majority of current procedures use autogenous soft-tissue grafts, which are associated with morbidity at the donor sites. Acellular dermal matrix (ADM) Alloderm is an alternative donor material presented to reduce related morbidity and provide more volume of the donor tissue. This study aimed to evaluate the effectiveness of an ADM allograft for root coverage and to compare it with a connective tissue graft (CTG), when used with a double papillary flap. Materials and Methods: Sixteen patients with bilateral class I or II gingival recessions were selected. A total of 32 recessions were treated and randomly assigned into the test and contralateral recessions into the control group. In the control group, the exposed root surfaces were treated by the placement of a CTG in combination with a double papillary flap; and in the test group, an ADM allograft was used as a substitute for palatal donor tissue. Probing depth, clinical attachment level, width of keratinized tissue (KT), recession height and width were measured before, and after 2 weeks and 6 months of surgery. Results: There were no statistically significant differences between the test and control groups in terms of recession reduction, clinical attachment gain, and reduction in probing depth. The control group had a statistically significant increased area of KT after 6 months compared to the test group. Conclusion: ADM allograft can be considered as a substitute for palatal donor tissue in root coverage procedure. PMID:24130587
Tensor Decompositions for Learning Latent Variable Models
2012-12-08
and eigenvectors of tensors is generally significantly more complicated than their matrix counterpart (both algebraically [Qi05, CS11, Lim05] and...The reduction First, let W ∈ Rd×k be a linear transformation such that M2(W,W ) = W M2W = I where I is the k × k identity matrix (i.e., W whitens ...approximate the whitening matrix W ∈ Rd×k from second-moment matrix M2 ∈ Rd×d. To do this, one first multiplies M2 by a random matrix R ∈ Rd×k′ for some k′ ≥ k
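The whitening step quoted above can be sketched as follows (illustrative dimensions; the randomized variant using the matrix R is omitted, and M2 is assumed symmetric positive semidefinite of rank at least k):

```python
import numpy as np

def whitening_matrix(M2, k):
    """Return W in R^{d x k} with W.T @ M2 @ W = I_k, built from the
    top-k eigenpairs of the (symmetric PSD) second-moment matrix M2."""
    vals, vecs = np.linalg.eigh(M2)
    idx = np.argsort(vals)[::-1][:k]       # indices of the top-k eigenvalues
    return vecs[:, idx] / np.sqrt(vals[idx])

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 2))
M2 = A @ A.T                               # rank-2 PSD matrix, d=5, k=2
W = whitening_matrix(M2, 2)
print(np.round(W.T @ M2 @ W, 6))           # the 2x2 identity
```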
Structure of local interactions in complex financial dynamics
Jiang, X. F.; Chen, T. T.; Zheng, B.
2014-01-01
With the network methods and random matrix theory, we investigate the interaction structure of communities in financial markets. In particular, based on the random matrix decomposition, we clarify that the local interactions between the business sectors (subsectors) are mainly contained in the sector mode. In the sector mode, the average correlation inside the sectors is positive, while that between the sectors is negative. Further, we explore the time evolution of the interaction structure of the business sectors, and observe that the local interaction structure changes dramatically during a financial bubble or crisis. PMID:24936906
Turner, John B; Corazzini, Rubina L; Butler, Timothy J; Garlick, David S; Rinker, Brian D
2015-09-01
Reduction of peritendinous adhesions after injury and repair has been the subject of extensive prior investigation. The application of a circumferential barrier at the repair site may limit the quantity of peritendinous adhesions while preserving the tendon's innate ability to heal. The authors compare the effectiveness of a type I/III collagen membrane and a collagen-glycosaminoglycan (GAG) resorbable matrix in reducing tendon adhesions in an experimental chicken model of a "zone II" tendon laceration and repair. In Leghorn chickens, flexor tendons were sharply divided using a scalpel and underwent repair in a standard fashion (54 total repairs). The sites were treated with a type I/III collagen membrane, collagen-GAG resorbable matrix, or saline in a randomized fashion. After 3 weeks, qualitative and semiquantitative histological analysis was performed to evaluate the "extent of peritendinous adhesions" and "nature of tendon healing." The data was evaluated with chi-square analysis and unpaired Student's t test. For both collagen materials, there was a statistically significant improvement in the degree of both extent of peritendinous adhesions and nature of tendon healing relative to the control group. There was no significant difference seen between the two materials. There was one tendon rupture observed in each treatment group. Surgical handling characteristics were subjectively favored for type I/III collagen membrane over the collagen-GAG resorbable matrix. The ideal method of reducing clinically significant tendon adhesions after injury remains elusive. Both materials in this study demonstrate promise in reducing tendon adhesions after flexor tendon repair without impeding tendon healing in this model.
Rocha Dos Santos, Manuela; Sangiorgio, João Paulo Menck; Neves, Felipe Lucas da Silva; França-Grohmann, Isabela Lima; Nociti, Francisco Humberto; Silverio Ruiz, Karina Gonzales; Santamaria, Mauro Pedrine; Sallum, Enilson Antonio
2017-12-01
Gingival recession (GR) might be associated with patient discomfort due to cervical dentin hypersensitivity (CDH) and esthetic dissatisfaction. The aim is to evaluate the effect of root coverage procedure with a xenogenous collagen matrix (CM) and/or enamel matrix derivative (EMD) in combination with a coronally advanced flap (CAF) on CDH, esthetics, and oral health-related quality of life (OHRQoL) of patients with GR. Sixty-eight participants with single Miller Class I/II GRs were treated with CAF (n = 17), CAF + CM (n = 17), CAF + EMD (n = 17), and CAF + CM + EMD (n = 17). CDH was assessed by evaporative stimuli using a visual analog scale (VAS) and a Schiff scale. Esthetics outcome was assessed with VAS and the Questionnaire of Oral Esthetic Satisfaction. Oral Health Impact Profile-14 (OHIP-14) questionnaire was used to assess OHRQoL. All parameters were evaluated at baseline and after 6 months. Intragroup analysis showed statistically significant reduction in CDH and esthetic dissatisfaction with no intergroup significant differences (P >0.05). The impact of oral health on QoL after 6 months was significant for CAF + CM, CAF + EMD, and CAF + CM + EMD (P <0.05). Total OHIP-14 score and psychologic discomfort, psychologic disability, social disability, and handicap dimensions showed negative correlation with esthetics. OHIP-14 physical pain dimension had positive correlation with CDH (P <0.05). OHIP-14 showed no correlation with percentage of root coverage, keratinized tissue width, or keratinized tissue thickness (P >0.05). Root coverage procedures improve patient OHRQoL by impacting on a wide range of dimensions, perceived after reduction of CDH and esthetic dissatisfaction of patients with GRs treated with CAF + CM, CAF + EMD, and CAF + CM + EMD.
Targeting functional motifs of a protein family
NASA Astrophysics Data System (ADS)
Bhadola, Pradeep; Deo, Nivedita
2016-10-01
The structural organization of a protein family is investigated by devising a method based on the random matrix theory (RMT), which uses the physiochemical properties of the amino acid with multiple sequence alignment. A graphical method to represent protein sequences using physiochemical properties is devised that gives a fast, easy, and informative way of comparing the evolutionary distances between protein sequences. A correlation matrix associated with each property is calculated, where the noise reduction and information filtering is done using RMT involving an ensemble of Wishart matrices. The analysis of the eigenvalue statistics of the correlation matrix for the β-lactamase family shows the universal features as observed in the Gaussian orthogonal ensemble (GOE). The property-based approach captures the short- as well as the long-range correlation (approximately following GOE) between the eigenvalues, whereas the previous approach (treating amino acids as characters) gives the usual short-range correlations, while the long-range correlations are the same as that of an uncorrelated series. The distribution of the eigenvector components for the eigenvalues outside the bulk (RMT bound) deviates significantly from RMT observations and contains important information about the system. The information content of each eigenvector of the correlation matrix is quantified by introducing an entropic estimate, which shows that for the β-lactamase family the smallest eigenvectors (low eigenmodes) are highly localized as well as informative. These small eigenvectors, when processed, give clusters involving positions that have well-defined biological and structural importance, matching experiments. The approach is crucial for the recognition of structural motifs as shown in β-lactamase (and other families) and selectively identifies the important positions as targets to deactivate (activate) enzymatic action.
Measuring forest landscape patterns in the Cascade Range of Oregon, USA
NASA Technical Reports Server (NTRS)
Ripple, William J.; Bradshaw, G. A.; Spies, Thomas A.
1995-01-01
This paper describes the use of a set of spatial statistics to quantify the landscape pattern caused by the patchwork of clearcuts made over a 15-year period in the western Cascades of Oregon. Fifteen areas were selected at random to represent a diversity of landscape fragmentation patterns. Managed forest stands (patches) were digitized and analyzed to produce both tabular and mapped information describing patch size, shape, abundance and spacing, and matrix characteristics of a given area. In addition, a GIS fragmentation index was developed which was found to be sensitive to patch abundance and to the spatial distribution of patches. Use of the GIS-derived index provides an automated method of determining the level of forest fragmentation and can be used to facilitate spatial analysis of the landscape for later coordination with field and remotely sensed data. A comparison of the spatial statistics calculated for the two years indicates an increase in forest fragmentation as characterized by an increase in mean patch abundance and a decrease in interpatch distance, amount of interior natural forest habitat, and the GIS fragmentation index. Such statistics capable of quantifying patch shape and spatial distribution may prove important in the evaluation of the changing character of interior and edge habitats for wildlife.
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform (DFT) measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on the DFT measurement matrix is deduced theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are carried out to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the DFT matrix is the same as that of the random measurement matrix; the PSNRs of the images reconstructed by the FGI and PGI algorithms are similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the FGI reconstruction decreases slowly, while the PSNRs of the PGI and CGI reconstructions decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize reconstruction denoising with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
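Stripped of the optics, the core algebra (measure with a DFT matrix, reconstruct with the pseudo-inverse) can be sketched on a hypothetical 1-D object; real ghost-imaging light fields are nonnegative intensity patterns and images are 2-D, which this toy ignores:

```python
import numpy as np

N = 64
x = np.zeros(N)
x[20:30] = 1.0                      # hypothetical 1-D object

k = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(k, k) / N)   # DFT measurement matrix

y = F @ x                           # "bucket" measurements
x_hat = np.linalg.pinv(F) @ y       # pseudo-inverse reconstruction
print(np.allclose(x_hat.real, x))   # exact recovery at full sampling
```

At full sampling the pseudo-inverse coincides with the inverse DFT; the abstract's comparisons concern the undersampled case.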
Why Are People Bad at Detecting Randomness? A Statistical Argument
ERIC Educational Resources Information Center
Williams, Joseph J.; Griffiths, Thomas L.
2013-01-01
Errors in detecting randomness are often explained in terms of biases and misconceptions. We propose and provide evidence for an account that characterizes the contribution of the inherent statistical difficulty of the task. Our account is based on a Bayesian statistical analysis, focusing on the fact that a random process is a special case of…
Copolymers For Capillary Gel Electrophoresis
Liu, Changsheng; Li, Qingbo
2005-08-09
This invention relates to an electrophoresis separation medium having a gel matrix of at least one random, linear copolymer comprising a primary comonomer and at least one secondary comonomer, wherein the comonomers are randomly distributed along the copolymer chain. The primary comonomer is an acrylamide or an acrylamide derivative that provides the primary physical, chemical, and sieving properties of the gel matrix. The at least one secondary comonomer imparts an inherent physical, chemical, or sieving property to the copolymer chain. The primary and secondary comonomers are present in a ratio sufficient to induce desired properties that optimize electrophoresis performance. The invention also relates to a method of separating a mixture of biological molecules using this gel matrix, a method of preparing the novel electrophoresis separation medium, and a capillary tube filled with the electrophoresis separation medium.
Removal of Stationary Sinusoidal Noise from Random Vibration Signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian; Cap, Jerome S.
In random vibration environments, sinusoidal line noise may appear in the vibration signal and can affect analysis of the resulting data. We studied two methods which remove stationary sine tones from random noise: a matrix inversion algorithm and a chirp-z transform algorithm. In addition, we developed new methods to determine the frequency of the tonal noise. The results show that both of the removal methods can eliminate sine tones in prefabricated random vibration data when the sine-to-random ratio is at least 0.25. For smaller ratios down to 0.02 only the matrix inversion technique can remove the tones, but the metrics to evaluate its effectiveness also degrade. We also found that using fast Fourier transforms best identified the tonal noise, and determined that band-pass-filtering the signals prior to the process improved sine removal. When applied to actual vibration test data, the methods were not as effective at removing harmonic tones, which we believe to be a result of mixed-phase sinusoidal noise.
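For a tone of known frequency, a matrix-inversion removal amounts to a least-squares fit of cosine and sine columns followed by subtraction; a minimal sketch (the report's actual algorithm, frequency estimation, and metrics are not reproduced, and all parameters are illustrative):

```python
import numpy as np

def remove_sine(signal, freq, fs):
    """Remove a stationary sine tone of known frequency via a least-squares
    fit of cosine/sine basis columns (the 'matrix inversion' idea)."""
    t = np.arange(len(signal)) / fs
    basis = np.column_stack([np.cos(2 * np.pi * freq * t),
                             np.sin(2 * np.pi * freq * t)])
    coef, *_ = np.linalg.lstsq(basis, signal, rcond=None)
    return signal - basis @ coef

rng = np.random.default_rng(0)
fs, f0 = 1000.0, 50.0
t = np.arange(4000) / fs
noise = rng.normal(size=t.size)                       # random vibration proxy
sig = noise + 2.0 * np.sin(2 * np.pi * f0 * t + 0.3)  # add a 50 Hz tone
cleaned = remove_sine(sig, f0, fs)
print(np.std(cleaned - noise))                        # residual tone is tiny
```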
Simple Emergent Power Spectra from Complex Inflationary Physics
NASA Astrophysics Data System (ADS)
Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David
2016-09-01
We construct ensembles of random scalar potentials for Nf-interacting scalar fields using nonequilibrium random matrix theory, and use these to study the generation of observables during small-field inflation. For Nf=O (few ), these heavily featured scalar potentials give rise to power spectra that are highly nonlinear, at odds with observations. For Nf≫1 , the superhorizon evolution of the perturbations is generically substantial, yet the power spectra simplify considerably and become more predictive, with most realizations being well approximated by a linear power spectrum. This provides proof of principle that complex inflationary physics can give rise to simple emergent power spectra. We explain how these results can be understood in terms of large Nf universality of random matrix theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okabe, T.; Takeda, N.; Komotori, J.
1999-11-26
A new model is proposed for multiple matrix cracking in order to take into account the role of matrix-rich regions in the cross section in initiating crack growth. The model is used to predict the matrix cracking stress and the total number of matrix cracks. The model converts the matrix-rich regions into equivalent penny shape crack sizes and predicts the matrix cracking stress with a fracture mechanics crack-bridging model. The estimated distribution of matrix cracking stresses is used as statistical input to predict the number of matrix cracks. The results show good agreement with the experimental results by replica observations. Therefore, it is found that the matrix cracking behavior mainly depends on the distribution of matrix-rich regions in the composite.
NASA Astrophysics Data System (ADS)
Nan, Hanqing; Liang, Long; Chen, Guo; Liu, Liyu; Liu, Ruchuan; Jiao, Yang
2018-03-01
Three-dimensional (3D) collective cell migration in a collagen-based extracellular matrix (ECM) is one of the most significant topics in developmental biology, cancer progression, tissue regeneration, and immune response. Recent studies have suggested that collagen-fiber-mediated force transmission in cellularized ECM plays an important role in stress homeostasis and regulation of collective cellular behaviors. Motivated by the recent in vitro observation that oriented collagen can significantly enhance the penetration of migrating breast cancer cells into dense Matrigel, which mimics the intravasation process in vivo [Han et al. Proc. Natl. Acad. Sci. USA 113, 11208 (2016), 10.1073/pnas.1610347113], we devise a procedure for generating realizations of highly heterogeneous 3D collagen networks with prescribed microstructural statistics via stochastic optimization. Specifically, a collagen network is represented via the graph (node-bond) model and the microstructural statistics considered include the cross-link (node) density, valence distribution, fiber (bond) length distribution, as well as fiber orientation distribution. An optimization problem is formulated in which the objective function is defined as the squared difference between a set of target microstructural statistics and the corresponding statistics for the simulated network. Simulated annealing is employed to solve the optimization problem by evolving an initial network via random perturbations to generate realizations of homogeneous networks with randomly oriented fibers, homogeneous networks with aligned fibers, heterogeneous networks with a continuous variation of fiber orientation along a prescribed direction, as well as a binary system containing a collagen region with aligned fibers and a dense Matrigel region with randomly oriented fibers.
The generation and propagation of active forces in the simulated networks due to polarized contraction of an embedded ellipsoidal cell and a small group of cells are analyzed by considering a nonlinear fiber model incorporating strain hardening upon large stretching and buckling upon compression. Our analysis shows that oriented fibers can significantly enhance long-range force transmission in the network. Moreover, in the oriented-collagen-Matrigel system, the forces generated by a polarized cell in collagen can penetrate deeply into the Matrigel region. The stressed Matrigel fibers could provide contact guidance for the migrating cells, and thus enhance their penetration into Matrigel. This suggests a possible mechanism for the observed enhanced intravasation by oriented collagen.
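The stochastic-optimization loop described above (random perturbations accepted by simulated annealing on the squared mismatch of target statistics) can be sketched on a toy orientation statistic; the graph model, fiber lengths, and valence statistics of the paper are omitted, and every parameter below is illustrative:

```python
import math, random

def anneal_orientations(n_fibers=200, target=0.8, steps=20000, seed=0):
    """Toy simulated annealing: evolve fiber orientations by random
    perturbations, minimizing E = (target statistic - current statistic)^2.
    The statistic is the alignment order parameter <cos 2θ>
    (1 = fully aligned, 0 = isotropic)."""
    rng = random.Random(seed)
    theta = [rng.uniform(-math.pi / 2, math.pi / 2) for _ in range(n_fibers)]

    def energy(th):
        s = sum(math.cos(2.0 * t) for t in th) / len(th)
        return (s - target) ** 2

    E = energy(theta)
    for step in range(steps):
        T = 1e-3 * (1.0 - step / steps) + 1e-9    # simple cooling schedule
        i = rng.randrange(n_fibers)
        old = theta[i]
        theta[i] += rng.gauss(0.0, 0.2)           # random perturbation
        E_new = energy(theta)
        if E_new > E and rng.random() > math.exp((E - E_new) / T):
            theta[i] = old                        # reject uphill move
        else:
            E = E_new                             # accept
    return theta, E

theta, E = anneal_orientations()                  # aligned-fiber target
print(E)  # final squared mismatch, driven near zero
```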
Statistical Refinement of the Q-Matrix in Cognitive Diagnosis
ERIC Educational Resources Information Center
Chiu, Chia-Yi
2013-01-01
Most methods for fitting cognitive diagnosis models to educational test data and assigning examinees to proficiency classes require the Q-matrix that associates each item in a test with the cognitive skills (attributes) needed to answer it correctly. In most cases, the Q-matrix is not known but is constructed from the (fallible) judgments of…
System of Mueller-Jones matrix polarizing mapping of blood plasma films in breast pathology
NASA Astrophysics Data System (ADS)
Zabolotna, Natalia I.; Radchenko, Kostiantyn O.; Tarnovskiy, Mykola H.
2017-08-01
A combined method of Mueller-Jones matrix mapping and blood plasma film analysis, based on the system proposed in this paper, is presented. From the obtained data on the structure and state of blood plasma samples, diagnostic conclusions can be made about the state of breast cancer patients ("normal" or "pathology"). Statistical analysis then yields statistical and correlation moments for every coordinate distribution; these indicators serve as diagnostic criteria. The final step is to compare the results and choose the most effective diagnostic indicators. The paper presents the results of Mueller-Jones matrix mapping of optically thin (attenuation coefficient τ ≤ 0.1) blood plasma layers.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
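The quoted high-SNR approximation is a one-line computation; the numbers below are hypothetical, with d_H read as the minimum Hamming distance of the code:

```python
# High-SNR approximation quoted above: P_b ≈ (d_H / N) * P_s for systematic
# encoding, with d_H the minimum Hamming distance and N the block length.
d_H, N = 3, 7        # e.g. a (7,4) Hamming code (hypothetical choice)
P_s = 1e-4           # assumed block error probability at the operating SNR
P_b = (d_H / N) * P_s
print(f"P_b ~ {P_b:.2e}")  # → P_b ~ 4.29e-05
```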
Kim, Maru; Song, In-Guk; Kim, Hyung Jin
2015-06-01
The aim of this study was to compare the results of electrocauterization and curettage, both of which can be done with basic instruments. Patients with ingrown nail were randomized into 2 groups: in the first group the nail matrix was removed by curettage, and in the second group by electrocautery. A total of 61 patients were enrolled; 32 patients were operated on by curettage, and 29 patients by electrocautery. Wound infections, as an early complication, were found in 15.6% (5/32) of the curettage group and 10.3% (3/29) of the electrocautery group (P = .710). Nonrecurrence was observed in 93.8% (30/32) and 86.2% (25/29) of the curettage and electrocautery groups, respectively (lower limit of 1-sided 90% confidence interval = -2.3% > -15% [noninferiority margin]). For removing the nail matrix, curettage is as effective as electrocauterization. Further study is required to determine the differences between the procedures. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua
2014-10-01
The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, with a key that is easily distributed, stored or memorized. The input image is divided into 4 blocks to be compressed and encrypted, and the pixels of adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed as circulant matrices, with the original row vectors of the circulant matrices controlled by a logistic map. The random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm and its acceptable compression performance.
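The key-reduction idea above — building the measurement matrix as a partial circulant matrix whose generating row is driven by a logistic map, so that only the map's seed and parameter need to be shared as the key — can be sketched as follows. This is a minimal illustration under assumed parameter names, not the authors' full algorithm; the block partitioning and random pixel exchange steps are omitted:

```python
import numpy as np
from scipy.linalg import circulant

def logistic_sequence(x0, mu, n, burn_in=1000):
    """Iterate the logistic map x <- mu*x*(1-x), discarding transients."""
    x = x0
    for _ in range(burn_in):
        x = mu * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq

def measurement_matrix(m, n, x0=0.3, mu=3.99):
    """m-by-n partial circulant measurement matrix; (x0, mu) is the key."""
    row = 2.0 * logistic_sequence(x0, mu, n) - 1.0  # chaotic row in [-1, 1]
    return circulant(row)[:m, :] / np.sqrt(m)       # keep m rows, normalize

Phi = measurement_matrix(64, 256)
print(Phi.shape)
```

Because the receiver can regenerate the matrix from (x0, mu) alone, only those two scalars need to be distributed, which is the storage advantage the abstract describes.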
NASA Astrophysics Data System (ADS)
Tanimoto, Jun
2016-11-01
Inspired by the commonly observed real-world fact that people tend to behave in a somewhat random manner after facing interim equilibrium to break a stalemate situation whilst seeking a higher output, we established two models of the spatial prisoner's dilemma. One presumes that an agent commits action errors, while the other assumes that an agent refers to a payoff matrix with an added random noise instead of an original payoff matrix. A numerical simulation revealed that mechanisms based on the annealing of randomness due to either the action error or the payoff noise could significantly enhance the cooperation fraction. In this study, we explain the detailed enhancement mechanism behind the two models by referring to the concepts that we previously presented with respect to evolutionary dynamic processes under the names of enduring and expanding periods.
A Multivariate Randomization Test of Association Applied to Cognitive Test Results
NASA Technical Reports Server (NTRS)
Ahumada, Albert; Beard, Bettina
2009-01-01
Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
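The procedure just described can be sketched in a few lines, assuming the simplest variant: one variable is held fixed, the other k-1 are independently re-ordered on each replicate, and the largest eigenvalue of the correlation matrix is the criterion statistic:

```python
import numpy as np

def largest_eig_stat(X):
    """Criterion statistic: largest eigenvalue of the correlation matrix."""
    return np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[-1]

def randomization_test(X, n_perm=999, seed=None):
    """Permutation p-value for association among the k columns of X.
    Each replicate randomly re-orders k-1 of the variables (the first
    column stays fixed), destroying any cross-column association."""
    rng = np.random.default_rng(seed)
    observed = largest_eig_stat(X)
    exceed = 0
    for _ in range(n_perm):
        Xp = X.copy()
        for j in range(1, X.shape[1]):
            Xp[:, j] = rng.permutation(Xp[:, j])
        if largest_eig_stat(Xp) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)
```

With k correlated test scores, a small p-value indicates the observed largest eigenvalue is unusually large relative to the no-association null; no distributional assumptions are needed.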
Distributed Matrix Completion: Application to Cooperative Positioning in Noisy Environments
2013-12-11
positioning, and a gossip version of low-rank approximation were developed. A convex relaxation for positioning in the presence of noise was shown to...of a large data matrix through gossip algorithms. A new algorithm is proposed that amounts to iteratively multiplying a vector by independent random...sparsification of the original matrix and averaging the resulting normalized vectors. This can be viewed as a generalization of gossip algorithms for
NASA Astrophysics Data System (ADS)
Han, Rui-Qi; Xie, Wen-Jie; Xiong, Xiong; Zhang, Wei; Zhou, Wei-Xing
The correlation structure of a stock market contains important financial content, which may change remarkably due to the occurrence of a financial crisis. We perform a comparative analysis of the Chinese stock market around the 2008 crisis based on random matrix analysis of high-frequency returns of 1228 Chinese stocks. Both the raw correlation matrix and the partial correlation matrix with respect to the market index are investigated in two one-year periods. We find that the Chinese stocks have stronger average correlation and partial correlation in 2008 than in 2007, and that the average partial correlation is significantly weaker than the average correlation in each period. Accordingly, the largest eigenvalue of the correlation matrix is remarkably greater than that of the partial correlation matrix in each period. Moreover, each largest eigenvalue and its eigenvector reflect an evident market effect, while the other deviating eigenvalues do not. We find no evidence that deviating eigenvalues contain industrial sectorial information. Surprisingly, the eigenvectors of the second largest eigenvalue in 2007 and of the third largest eigenvalue in 2008 are able to distinguish the stocks from the two exchanges. We also find that the component magnitudes of some of the largest eigenvectors are proportional to the stocks' capitalizations.
Liu, Chuanjun; Wyszynski, Bartosz; Yatabe, Rui; Hayashi, Kenshi; Toko, Kiyoshi
2017-02-16
The detection and recognition of metabolically derived aldehydes, which have been identified as important products of oxidative stress and biomarkers of cancers, are considered an effective approach for early cancer detection as well as health status monitoring. Quartz crystal microbalance (QCM) sensor arrays based on molecularly imprinted sol-gel (MISG) materials were developed in this work for highly sensitive detection and highly selective recognition of typical aldehyde vapors, including hexanal (HAL), nonanal (NAL) and benzaldehyde (BAL). The MISGs were prepared by a sol-gel procedure using two matrix precursors: tetraethyl orthosilicate (TEOS) and tetrabutoxytitanium (TBOT). Aminopropyltriethoxysilane (APT), diethylaminopropyltrimethoxysilane (EAP) and trimethoxy-phenylsilane (TMP) were added as functional monomers to adjust the imprinting effect of the matrix. Hexanoic acid (HA), nonanoic acid (NA) and benzoic acid (BA) were used as pseudotemplates in view of their analogous structure to the target molecules as well as their strong hydrogen-bonding interaction with the matrix. In total, 13 types of MISGs with different components were prepared and coated on QCM electrodes by spin coating. Their sensing characteristics towards the three aldehyde vapors at different concentrations were investigated qualitatively. The results demonstrated that the response of individual sensors to each target strongly depended on the matrix precursors, functional monomers and template molecules. An optimization of the 13 MISG materials was carried out based on statistical analyses such as principal component analysis (PCA), multivariate analysis of covariance (MANCOVA) and hierarchical cluster analysis (HCA). The optimized sensor array, consisting of five channels, showed a high ability to discriminate the aldehyde vapors, which was confirmed by quantitative comparison with a randomly selected array.
It was suggested that both the molecular imprinting (MIP) effect and the matrix effect contributed to the sensitivity and selectivity of the optimized sensor array. The developed MISGs are expected to be promising materials for the detection and recognition of volatile aldehydes contained in exhaled breath or human body odor.
Proksch, E; Schunck, M; Zague, V; Segger, D; Degwert, J; Oesser, S
2014-01-01
Dietary consumption of food supplements has been found to modulate skin functions and can therefore be useful in the treatment of skin aging. However, only a limited number of clinical studies support these claims. In this double-blind, placebo-controlled study, the effectiveness of the specific bioactive collagen peptide (BCP) VERISOL® on eye wrinkle formation and stimulation of procollagen I, elastin and fibrillin biosynthesis in the skin was assessed. One hundred and fourteen women aged 45-65 years were randomized to receive 2.5 g of BCP or placebo, once daily for 8 weeks, with 57 subjects allocated to each treatment group. Skin wrinkles were objectively measured in all subjects before starting the treatment, after 4 and 8 weeks, and 4 weeks after the last intake (4-week regression phase). A subgroup was established for suction blister biopsies analyzing procollagen I, elastin and fibrillin at the beginning of the treatment and after 8 weeks of intake. The ingestion of the specific BCP used in this study promoted a statistically significant reduction of eye wrinkle volume (p < 0.05) in comparison to the placebo group after 4 and 8 weeks (20%) of intake. Moreover, a positive long-lasting effect was observed 4 weeks after the last BCP administration (p < 0.05). Additionally, after 8 weeks of intake a statistically significantly higher content of procollagen type I (65%) and elastin (18%) was detected in the BCP-treated volunteers compared to the placebo-treated subjects. For fibrillin, a 6% increase could be determined after BCP treatment compared to the placebo, but this effect failed to reach statistical significance. In conclusion, our findings demonstrate that the oral intake of specific bioactive collagen peptides (Verisol®) reduced skin wrinkles and had positive effects on dermal matrix synthesis. © 2014 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Ushenko, Yuriy A.; Koval, Galina D.; Ushenko, Alexander G.; Dubolazov, Olexander V.; Ushenko, Vladimir A.; Novakovskaia, Olga Yu.
2016-07-01
This research presents the results of an investigation of the diagnostic efficiency of an azimuthally stable Mueller-matrix method for analysis of laser autofluorescence of polycrystalline films of dried uterine cavity peritoneal fluid. A model of the generalized optical anisotropy of films of dried peritoneal fluid is proposed in order to describe the processes of laser autofluorescence. The influence of complex mechanisms of both phase (linear and circular birefringence) and amplitude (linear and circular dichroism) anisotropies is taken into consideration. The interconnections between the azimuthally stable Mueller-matrix elements characterizing laser autofluorescence and the different mechanisms of optical anisotropy are determined. A statistical analysis of the coordinate distributions of these Mueller-matrix rotation invariants is proposed. On this basis, the quantitative criteria (statistical moments of the first to fourth order) for differentiating polycrystalline films of dried peritoneal fluid between group 1 (healthy donors) and group 2 (uterine endometriosis patients) are determined.
Detection Performance of Horizontal Linear Hydrophone Arrays in Shallow Water.
1980-12-15
[Garbled extraction of the report's notation list and equations: the symbols include the gain G, a covariance matrix, the processor vector h, the matched-filter/generalized-beamformer matrix H, and the unity matrix I. The recoverable content evaluates gain expressions for the optimum linear processor and notes that at broadside the signal covariance matrix reduces to a dyadic, P_s s s*, so the gain expression simplifies accordingly.]
The wasteland of random supergravities
NASA Astrophysics Data System (ADS)
Marsh, David; McAllister, Liam; Wrase, Timm
2012-03-01
We show that in a general {N} = {1} supergravity with N ≫ 1 scalar fields, an exponentially small fraction of the de Sitter critical points are metastable vacua. Taking the superpotential and Kähler potential to be random functions, we construct a random matrix model for the Hessian matrix, which is well-approximated by the sum of a Wigner matrix and two Wishart matrices. We compute the eigenvalue spectrum analytically from the free convolution of the constituent spectra and find that in typical configurations, a significant fraction of the eigenvalues are negative. Building on the Tracy-Widom law governing fluctuations of extreme eigenvalues, we determine the probability P of a large fluctuation in which all the eigenvalues become positive. Strong eigenvalue repulsion makes this extremely unlikely: we find P ∝ exp(- c N p ), with c, p being constants. For generic critical points we find p ≈ 1 .5, while for approximately-supersymmetric critical points, p ≈ 1 .3. Our results have significant implications for the counting of de Sitter vacua in string theory, but the number of vacua remains vast.
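The spectral claim can be probed numerically by sampling the model. The sketch below draws a toy Hessian as the sum of one Wigner matrix and two Wishart matrices and measures the fraction of negative eigenvalues; the relative normalizations of the three terms here are illustrative assumptions, not the exact weights of the paper's supergravity model:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # number of fields; the paper's regime is N >> 1

def wigner(n):
    """Symmetric Wigner matrix, normalized to an O(1) semicircle spectrum."""
    A = rng.normal(size=(n, n))
    return (A + A.T) / np.sqrt(2 * n)

def wishart(n):
    """Wishart matrix W = B B^T / n (positive semidefinite)."""
    B = rng.normal(size=(n, n))
    return B @ B.T / n

# Toy Hessian model: one Wigner plus two Wishart contributions
H = wigner(N) + wishart(N) + wishart(N)
eigs = np.linalg.eigvalsh(H)
frac_neg = float(np.mean(eigs < 0))
print(f"fraction of negative eigenvalues: {frac_neg:.3f}")
```

The measured fraction depends on the relative weights of the three terms; in the paper's normalization a significant fraction of eigenvalues is negative in typical configurations, which is why an all-positive (metastable) spectrum is an exponentially rare fluctuation.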
NASA Astrophysics Data System (ADS)
O'Malley, D.; Le, E. B.; Vesselinov, V. V.
2015-12-01
We present a fast, scalable, and highly implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iteration for a large number of unknown model parameters, and it provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density
Smallwood, David O.
1997-01-01
The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and the kurtosis, using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
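The ZMNL step mentioned above is essentially a probability-integral transform. The sketch below is an illustration of that one ingredient, not Smallwood's full method: it pushes Gaussian samples through the standard normal CDF and then a target inverse CDF, so the output has exactly the target marginal distribution:

```python
import numpy as np
from scipy import stats

def zmnl_match(gauss_samples, target_ppf):
    """Zero-memory nonlinearity: map Gaussian samples through the
    standard normal CDF and then the target inverse CDF (ppf), so the
    output has exactly the target marginal distribution."""
    u = stats.norm.cdf(gauss_samples)
    return target_ppf(u)

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)        # stationary Gaussian input
y = zmnl_match(x, stats.expon.ppf)      # output with exponential(1) marginal
print(round(float(np.mean(y)), 2))
```

Because the transform is applied sample by sample (zero memory) it preserves stationarity, but it does distort the spectrum, which is why methods of this family typically iterate between spectral shaping and the ZMNL step.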
Global sensitivity analysis of multiscale properties of porous materials
NASA Astrophysics Data System (ADS)
Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.
2018-02-01
Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of solute transport diffusing through a fluid-filled pore space, while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.
Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian
2017-03-06
Through a slight modification of typical photomultiplier tube (PMT) receiver output statistics, a generalized received-response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Numerical simulation shows good agreement with the experimental results. Based on the received-response characteristics, a heuristic check matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data rate bound derived for a delayed-sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 1E-05, is achieved for short-range NLOS UVC systems operating at a data rate of 2 Mbps.
Bacterial accumulation in viscosity gradients
NASA Astrophysics Data System (ADS)
Waisbord, Nicolas; Guasto, Jeffrey
2016-11-01
Cell motility is greatly modified by fluid rheology. In particular, the physical environments in which cells function, are often characterized by gradients of viscous biopolymers, such as mucus and extracellular matrix, which impact processes ranging from reproduction to digestion to biofilm formation. To understand how spatial heterogeneity of fluid rheology affects the motility and transport of swimming cells, we use hydrogel microfluidic devices to generate viscosity gradients in a simple, polymeric, Newtonian fluid. Using video microscopy, we characterize the random walk motility patterns of model bacteria (Bacillus subtilis), showing that both wild-type ('run-and-tumble') cells and smooth-swimming mutants accumulate in the viscous region of the fluid. Through statistical analysis of individual cell trajectories and body kinematics in both homogeneous and heterogeneous viscous environments, we discriminate passive, physical effects from active sensing processes to explain the observed cell accumulation at the ensemble level.
Characterizations of matrix and operator-valued Φ-entropies, and operator Efron-Stein inequalities.
Cheng, Hao-Chung; Hsieh, Min-Hsiu
2016-03-01
We derive new characterizations of the matrix Φ-entropy functionals introduced in Chen & Tropp (Chen, Tropp 2014 Electron. J. Prob. 19 , 1-30. (doi:10.1214/ejp.v19-2964)). These characterizations help us to better understand the properties of matrix Φ-entropies, and are a powerful tool for establishing matrix concentration inequalities for random matrices. Then, we propose an operator-valued generalization of matrix Φ-entropy functionals, and prove the subadditivity under Löwner partial ordering. Our results demonstrate that the subadditivity of operator-valued Φ-entropies is equivalent to the convexity. As an application, we derive the operator Efron-Stein inequality.
PCEMCAN - Probabilistic Ceramic Matrix Composites Analyzer: User's Guide, Version 1.0
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Mital, Subodh K.; Murthy, Pappu L. N.
1998-01-01
PCEMCAN (Probabilistic CEramic Matrix Composites ANalyzer) is an integrated computer code developed at NASA Lewis Research Center that simulates uncertainties associated with the constituent properties, manufacturing process, and geometric parameters of fiber-reinforced ceramic matrix composites and quantifies their random thermomechanical behavior. The PCEMCAN code can perform deterministic as well as probabilistic analyses to predict thermomechanical properties. This user's guide details the step-by-step procedure to create an input file and update/modify the material properties database required to run the PCEMCAN computer code. An overview of the geometric conventions, micromechanical unit cell, nonlinear constitutive relationship, and probabilistic simulation methodology is also provided in the manual. Fast probability integration as well as Monte Carlo simulation methods are available for the uncertainty simulation. The various options available in the code to simulate probabilistic material properties and quantify the sensitivity of the primitive random variables are described. Deterministic as well as probabilistic results are illustrated using demonstration problems. For a detailed theoretical description of the deterministic and probabilistic analyses, the user is referred to the companion documents "Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composite Behavior," NASA TP-3602, 1996, and "Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites," NASA TM-4766, June 1997.
Gorobets, Yu I; Gorobets, O Yu
2015-01-01
A statistical model is proposed in this paper for describing the orientation of trajectories of unicellular diamagnetic organisms in a magnetic field. A statistical parameter, the effective energy, is calculated on the basis of this model. The resulting effective energy is a statistical characteristic of the trajectories of diamagnetic microorganisms in a magnetic field, connected with their metabolism. The statistical model is applicable when the energy of the thermal motion of the bacteria is negligible in comparison with their energy in a magnetic field and the bacteria manifest significant "active random movement", i.e., randomizing motion of a nonthermal nature, for example, movement by means of flagella. The energy of this randomizing active self-motion of the bacteria is characterized by a new statistical parameter for biological objects. The parameter replaces the energy of randomizing thermal motion in the calculation of the statistical distribution. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Y.; Xu, X.
2017-12-01
The broadband Lg 1/Q tomographic models in eastern Eurasia are inverted from source- and site-corrected path 1/Q data. The path 1/Q are measured between stations (or events) by the two-station (TS), reverse two-station (RTS) and reverse two-event (RTE) methods, respectively. Because path 1/Q are computed using the logarithm of the product of observed spectral ratios and a simplified 1D geometrical spreading correction, they are subject to "modeling errors" dominated by uncompensated 3D structural effects. We found in Chen and Xie [2017] that these errors closely follow a normal distribution after the long-tailed outliers are screened out (similar to teleseismic travel time residuals). We thus rigorously analyze the statistics of these errors, collected from repeated samplings of station (and event) pairs from 1.0 to 10.0 Hz, and reject about 15% of the data as outliers at each frequency band. The resulting variance of Δ(1/Q) decreases with frequency as 1/f^2. The 1/Q tomography using screened data is now a stochastic inverse problem whose solutions approximate the means of Gaussian random variables, and the model covariance matrix is that of Gaussian variables with well-known statistical behavior. We adopt a new SVD-based tomographic method to solve for the 2D Q image together with its resolution and covariance matrices. The RTS and RTE methods yield the most reliable 1/Q data, free of source and site effects, but their path coverage is rather sparse due to the very strict recording geometry. The TS method absorbs the effects of non-unit site response ratios into the 1/Q data. The RTS method also yields site responses, which can then be corrected from the path 1/Q of TS to make them also free of site effects. The site-corrected TS data substantially improve path coverage, allowing us to solve for 1/Q tomography up to 6.0 Hz. The model resolution and uncertainty are first quantitatively assessed by spread functions (derived from the resolution matrix) and the covariance matrix.
The reliably retrieved Q models correlate well with the distinct tectonic blocks featured by the most recent major deformations and vary with frequencies. With the 1/Q tomographic model and its covariance matrix, we can formally estimate the uncertainty of any path-specific Lg 1/Q prediction. This new capability significantly benefits source estimation for which reliable uncertainty estimate is especially important.
ERIC Educational Resources Information Center
Montague, Margariete A.
This study investigated the feasibility of concurrently and randomly sampling examinees and items in order to estimate group achievement. Seven 32-item tests reflecting a 640-item universe of simple open sentences were used such that item selection (random, systematic) and assignment (random, systematic) of items (four, eight, sixteen) to forms…
NASA Astrophysics Data System (ADS)
Jordan, Andrew Noble
2002-09-01
In this dissertation, we study the quantum mechanics of classically chaotic dynamical systems. We begin by considering the decoherence effects a quantum chaotic system has on a simple quantum few-state system. Typical time evolution of a quantum system whose classical limit is chaotic generates structures in phase space whose size is much smaller than Planck's constant. A naive application of Heisenberg's uncertainty principle indicates that these structures are not physically relevant. However, if we take the quantum chaotic system in question to be an environment which interacts with a simple two-state quantum system (qubit), we show that these small phase-space structures cause the qubit to generically lose quantum coherence if and only if the environment has many degrees of freedom, such as a dilute gas. This implies that many-body environments may be crucial for the phenomenon of quantum decoherence. Next, we turn to an analysis of statistical properties of time correlation functions and matrix elements of quantum chaotic systems. A semiclassical evaluation of matrix elements of an operator indicates that the dominant contribution will be related to a classical time correlation function over the energy surface. For a highly chaotic class of dynamics, these correlation functions may be decomposed into sums of Ruelle resonances, which control the exponential decay to the ergodic distribution. The theory is illustrated both numerically and theoretically on the Baker map. For this system, we are able to isolate individual Ruelle modes. We further consider dynamical systems whose approach to ergodicity is given by a power law rather than an exponential in time. We propose a billiard with diffusive boundary conditions, whose classical solution may be calculated analytically. We go on to compare the exact solution with an approximation scheme, as well as calculate asymptotic corrections.
Quantum spectral statistics are calculated assuming the validity of the Agam, Altshuler, and Andreev ansatz. We find singular behavior of the two-point spectral correlator in the limit of small spacing. Finally, we analyse the effect that slow decay to ergodicity has on the structure of the quantum propagator, as well as on wavefunction localization. We introduce a statistical quantum description of systems that are composed of both an orderly region and a random region. By averaging over the random region only, we find that measures of localization in momentum space semiclassically diverge with the dimension of the Hilbert space. We illustrate this numerically with quantum maps and suggest various other systems where this behavior should be important.
Akemann, G; Bloch, J; Shifrin, L; Wettig, T
2008-01-25
We analyze how individual eigenvalues of the QCD Dirac operator at nonzero quark chemical potential are distributed in the complex plane. Exact and approximate analytical results for both quenched and unquenched distributions are derived from non-Hermitian random matrix theory. When comparing these to quenched lattice QCD spectra close to the origin, excellent agreement is found for zero and nonzero topology at several values of the quark chemical potential. Our analytical results are also applicable to other physical systems in the same symmetry class.
Fidelity under isospectral perturbations: a random matrix study
NASA Astrophysics Data System (ADS)
Leyvraz, F.; García, A.; Kohler, H.; Seligman, T. H.
2013-07-01
The set of Hamiltonians generated by all unitary transformations from a single Hamiltonian is the largest set of isospectral Hamiltonians we can form. Taking advantage of the fact that the unitary group can be generated from Hermitian matrices, we can take the unitaries generated by the Gaussian unitary ensemble with a small parameter as small perturbations. Similarly, the transformations generated by Hermitian antisymmetric matrices from orthogonal matrices form isospectral transformations among symmetric matrices. Based on this concept we can obtain the fidelity decay of a system subject to a random isospectral perturbation with well-defined properties regarding time-reversal invariance. If we choose the Hamiltonian itself also from a classical random matrix ensemble, then we obtain solutions in terms of form factors in the limit of large matrices.
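The setup can be illustrated numerically: draw a Hamiltonian from a unitarily invariant ensemble, rotate it by a small unitary generated by a second GUE matrix, and track the fidelity amplitude between the two isospectral partners. This is a minimal sketch of the construction only, not the paper's analytical result in terms of form factors:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
N = 100

def gue(n):
    """Random Hermitian (GUE-like) matrix with an O(1) spectrum."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / (2.0 * np.sqrt(n))

H0 = gue(N)              # Hamiltonian drawn from the random ensemble
V = gue(N)               # Hermitian generator of the perturbation
eps = 0.05               # small parameter of the unitary rotation
U = expm(1j * eps * V)   # unitary close to the identity
H1 = U @ H0 @ U.conj().T # isospectral partner of H0

def fidelity(t):
    """|f(t)|^2 with f(t) = Tr[exp(i H1 t) exp(-i H0 t)] / N."""
    return abs(np.trace(expm(1j * H1 * t) @ expm(-1j * H0 * t)) / N) ** 2

print(fidelity(0.0))
```

By construction H0 and H1 share the same spectrum exactly, so any fidelity decay comes purely from the eigenvector rotation, which is the point of the isospectral-perturbation setting.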
3D Mueller-matrix mapping of biological optically anisotropic networks
NASA Astrophysics Data System (ADS)
Ushenko, O. G.; Ushenko, V. O.; Bodnar, G. B.; Zhytaryuk, V. G.; Prydiy, O. G.; Koval, G.; Lukashevich, I.; Vanchuliak, O.
2018-01-01
The paper consists of two parts. The first part presents short theoretical basics of the method of azimuthally invariant Mueller-matrix description of optical anisotropy of biological tissues. Experimentally measured coordinate distributions of Mueller-matrix invariants (MMI) of linear and circular birefringence of skeletal muscle tissue are provided. The values of the statistical moments that characterize the distributions of the wavelet-coefficient amplitudes of the MMI at different scanning scales are determined. The second part presents a statistical analysis of the distributions of the wavelet-coefficient amplitudes of the linear-birefringence distributions of myocardium tissue from subjects who died of infarction and ischemic heart disease. Objective criteria for differentiating the cause of death are defined.
Multichannel Compressive Sensing MRI Using Noiselet Encoding
Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin
2015-01-01
The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548
Lie, Stein Atle; Eriksen, Hege R; Ursin, Holger; Hagen, Eli Molde
2008-05-01
Analysing and presenting data on different outcomes after sick-leave is challenging. The use of extended statistical methods supplies additional information and allows further exploitation of data. Four hundred and fifty-seven patients, sick-listed for 8-12 weeks for low back pain, were randomized to intervention (n=237) or control (n=220). Outcome was measured as "sick-listed", "returned to work", or "disability pension". The individuals shifted between the three states between one and 22 times (mean 6.4 times). In a multi-state model, shifting between the states was set up in a transition intensity matrix. The probability of being in any of the states was calculated as a transition probability matrix. The effects of the intervention were modelled using a non-parametric model. There was an effect of the intervention for leaving the state sick-listed and shifting to returned to work (relative risk (RR)=1.27, 95% confidence interval (CI) 1.09-1.47). The non-parametric estimates showed an effect of the intervention for leaving sick-listed and shifting to returned to work in the first 6 months. We found a protective effect of the intervention against shifting back to sick-listed between 6 and 18 months. The analyses showed that the probability of staying in the state returned to work was not different between the intervention and control groups at the end of the follow-up (3 years). We demonstrate that these alternative analyses give additional results and increase the strength of the analyses. The simple intervention did not decrease the probability of being on sick-leave in the long term; however, it decreased the time that individuals were on sick-leave.
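The multi-state calculation described above can be sketched in a few lines. The three states, the rate values, and the 6-month horizon below are hypothetical, chosen only to illustrate how a transition intensity matrix yields a transition probability matrix:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical transition intensity matrix Q (rates per month) for three states:
# 0 = sick-listed, 1 = returned to work, 2 = disability pension (absorbing here).
# Off-diagonal entries are transition rates; each row sums to zero.
Q = np.array([
    [-0.30,  0.25,  0.05],
    [ 0.10, -0.10,  0.00],
    [ 0.00,  0.00,  0.00],
])

def transition_probability_matrix(Q, t):
    """P(t) = exp(Q t): entry (i, j) is the probability of being in state j
    at time t given state i at time 0 (time-homogeneous Markov model)."""
    return expm(Q * t)

P6 = transition_probability_matrix(Q, 6.0)  # state occupancy after 6 months
```

For the time-inhomogeneous, non-parametric setting the paper actually uses, the probability matrix would instead be built as a product of estimated transition matrices over small time intervals.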
Graczyk, Michelle B.; Duarte Queirós, Sílvio M.
2017-01-01
Employing Random Matrix Theory and Principal Component Analysis techniques, we enlarge our work on the individual and cross-sectional intraday statistical properties of trading volume in financial markets to the study of collective intraday features of that financial observable. Our data consist of the trading volume of the Dow Jones Industrial Average Index components spanning the years between 2003 and 2014. Computing the intraday time-dependent correlation matrices and their spectrum of eigenvalues, we show there is a mode ruling the collective behaviour of the trading volume of these stocks, whereas the remaining eigenvalues are within the bounds established by random matrix theory, except the second largest eigenvalue, which is robustly above the upper bound limit at the opening and slightly above it during the morning-afternoon transition. Taking into account that for price fluctuations at least seven significant eigenvalues were reported to exist—and that their autocorrelation function is close to white noise for highly liquid stocks, whereas for the trading volume the autocorrelation persists for significantly more than 2 hours—our finding goes against any expectation based on those features, even when we take into account the Epps effect. In addition, the weight of the trading volume collective mode is intraday dependent; its value increases as the trading session advances, with its eigenversor approaching the uniform vector as well, which corresponds to a rise in behavioural homogeneity. With respect to the non-stationarity of the collective features of the trading volume, we observe that after the financial crisis of 2008 the coherence function shows the emergence of an upset profile with large fluctuations from that year on, a property that concurs with the modification of the average trading volume profile we noted in our previous individual analysis. PMID:28753676
The Statistical Power of the Cluster Randomized Block Design with Matched Pairs--A Simulation Study
ERIC Educational Resources Information Center
Dong, Nianbo; Lipsey, Mark
2010-01-01
This study uses simulation techniques to examine the statistical power of the group- randomized design and the matched-pair (MP) randomized block design under various parameter combinations. Both nearest neighbor matching and random matching are used for the MP design. The power of each design for any parameter combination was calculated from…
Randomization Procedures Applied to Analysis of Ballistic Data
1991-06-01
Technical Report BRL-TR-3245 (AD-A238 389), Malcolm S. Taylor and Barry A. Bodt, June 1991. Keywords: data analysis; computationally intensive statistics; randomization tests; permutation tests; nonparametric statistics.
A method for determining the weak statistical stationarity of a random process
NASA Technical Reports Server (NTRS)
Sadeh, W. Z.; Koper, C. A., Jr.
1978-01-01
A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
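A minimal numerical sketch of the equivalent-ensemble idea follows; the function names and the tolerance are our own, not from the paper:

```python
import numpy as np

def equivalent_ensemble(x, n_records):
    """Segment one long time history into equal, non-overlapping sample
    records, forming an 'equivalent ensemble' (records are assumed to be
    long enough to be statistically independent)."""
    m = len(x) // n_records
    return x[:n_records * m].reshape(n_records, m)

def weakly_stationary(x, n_records, tol):
    """Heuristic weak-stationarity check: the equivalent-ensemble average,
    taken across records at each time instant, should be time invariant,
    i.e. fluctuate by less than some tolerance."""
    ens_mean = equivalent_ensemble(x, n_records).mean(axis=0)
    return bool(ens_mean.std() < tol)

rng = np.random.default_rng(0)
stationary = weakly_stationary(rng.standard_normal(100_000), 100, tol=0.2)
```

The paper's full procedure additionally applies variance tests for record independence, for the time invariance of the equivalent-ensemble autocorrelations, and for ergodicity; those checks follow the same segment-then-average pattern.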
NASA Astrophysics Data System (ADS)
Kassem, M.; Soize, C.; Gagliardini, L.
2009-06-01
In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.
Chaibub Neto, Elias
2015-01-01
In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate ones. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
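The multinomial-weighting idea translates directly into code. This sketch is in Python/NumPy rather than the paper's R, the function name is ours, and it shows only the simplest moment statistic (the mean):

```python
import numpy as np

def bootstrap_means(x, n_boot, rng):
    """Vectorized non-parametric bootstrap of the sample mean: draw
    multinomial counts (how many times each observation would be resampled),
    normalize them to weights, and obtain every bootstrap replication with a
    single matrix-vector product instead of a resampling loop."""
    n = len(x)
    counts = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot)  # (n_boot, n)
    return (counts / n) @ x

rng = np.random.default_rng(42)
x = rng.normal(loc=5.0, scale=2.0, size=200)
reps = bootstrap_means(x, n_boot=5000, rng=rng)   # 5000 bootstrap sample means
```

Higher moments (and hence Pearson's correlation coefficient, as in the paper) follow the same pattern, with weighted moment sums replacing the weighted mean.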
Modeling cometary photopolarimetric characteristics with Sh-matrix method
NASA Astrophysics Data System (ADS)
Kolokolova, L.; Petrov, D.
2017-12-01
Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, consumes a great deal of computer time and memory. We present a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method builds on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it had been found that the shape-dependent factors could be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique to simulate light scattering by particles of complex shape and surface structure. In this paper, we model cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). Changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles from spheres to particles of a random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulting from light scattering by such particles, studying how they depend on the particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to the cometary observations.
A minimum drives automatic target definition procedure for multi-axis random control testing
NASA Astrophysics Data System (ADS)
Musella, Umberto; D'Elia, Giacomo; Carrella, Alex; Peeters, Bart; Mucchi, Emiliano; Marulo, Francesco; Guillaume, Patrick
2018-07-01
Multiple-Input Multiple-Output (MIMO) vibration control tests are able to closely replicate, via shaker excitation, the vibration environment that a structure needs to withstand during its operational life. This feature is fundamental to accurately verifying the experienced stress state, and ultimately the fatigue life, of the tested structure. In the case of MIMO random tests, the control target is a full reference Spectral Density Matrix in the frequency band of interest. The diagonal terms are the Power Spectral Densities (PSDs), representative of the acceleration operational levels, and the off-diagonal terms are the Cross Spectral Densities (CSDs). The specifications of random vibration tests are, however, often given in terms of PSDs only, coming from a legacy of single-axis testing; information about the CSDs is often missing. An accurate definition of the CSD profiles can further enhance MIMO random testing practice, as these terms influence both the responses and the shaker voltages (the so-called drives). The challenge is the algebraic constraint that the full reference matrix must be positive semi-definite in the entire bandwidth, with no flexibility in modifying the given PSDs. This paper proposes a newly developed method that automatically provides the full reference matrix without modifying the PSDs, which are considered test specifications. The innovative feature is the capability of minimizing the drives required to match the reference PSDs and, at the same time, directly guaranteeing that the obtained full matrix is positive semi-definite. The drives minimization aims, on the one hand, to reach the fixed test specifications without stressing the delicate excitation system; on the other hand, it potentially allows the test levels to be increased further. The detailed analytic derivation and implementation steps of the proposed method are followed by real-life testing considering different scenarios.
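The paper's minimization algorithm is not reproduced here, but the core constraint is easy to state in code: given the specified PSDs and a candidate set of CSDs (written via coherence and phase), the assembled matrix at each frequency line must be positive semi-definite. A sketch with made-up numbers, at a single frequency line:

```python
import numpy as np

def reference_sdm(psd, coherence, phase):
    """Assemble a full reference Spectral Density Matrix at one frequency
    line: S_ij = sqrt(coh_ij * S_ii * S_jj) * exp(1j * phase_ij), with the
    specified PSDs kept untouched on the diagonal."""
    d = np.sqrt(psd)
    S = np.sqrt(coherence) * np.outer(d, d) * np.exp(1j * phase)
    np.fill_diagonal(S, psd)          # PSDs are fixed test specifications
    return S

def is_positive_semidefinite(S, tol=1e-10):
    """Realizability check: a valid spectral density matrix must be PSD."""
    return bool(np.linalg.eigvalsh((S + S.conj().T) / 2).min() >= -tol)

psd = np.array([1.0, 2.0, 0.5])               # given PSD specifications
coh = np.array([[1.0, 0.5, 0.2],
                [0.5, 1.0, 0.3],
                [0.2, 0.3, 1.0]])             # assumed ordinary coherences
phase = np.zeros((3, 3))                      # assumed zero relative phases
S = reference_sdm(psd, coh, phase)
```

The method described above would, in effect, search over such coherence/phase choices (per frequency line) to minimize the drives while keeping this check satisfied.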
Sciahbasi, Alessandro; Calabrò, Paolo; Sarandrea, Alessandro; Rigattieri, Stefano; Tomassini, Francesco; Sardella, Gennaro; Zavalloni, Dennis; Cortese, Bernardo; Limbruno, Ugo; Tebaldi, Matteo; Gagnor, Andrea; Rubartelli, Paolo; Zingarelli, Antonio; Valgimigli, Marco
2014-06-01
Radiation absorbed by interventional cardiologists is an important and frequently under-evaluated issue. The aim is to compare the radiation dose absorbed by interventional cardiologists during percutaneous coronary procedures for acute coronary syndromes via transradial and transfemoral access. The randomized multicentre MATRIX (Minimizing Adverse Haemorrhagic Events by TRansradial Access Site and Systemic Implementation of angioX) trial has been designed to compare the clinical outcome of patients with acute coronary syndromes treated invasively according to the access site (transfemoral vs. transradial) and to the anticoagulant therapy (bivalirudin vs. heparin). Selected experienced interventional cardiologists involved in this study have been equipped with dedicated thermoluminescent dosimeters to evaluate the radiation dose absorbed during transfemoral, right transradial or left transradial access. For each access we evaluate the radiation dose absorbed at wrist, thorax and eye level. Consequently, the operator is equipped with three sets (transfemoral, right transradial or left transradial access) of three different dosimeters (wrist, thorax and eye). The primary end-point of the study is the procedural radiation dose absorbed by operators at the thorax. An important secondary end-point is the procedural radiation dose absorbed by operators comparing the right and left radial approaches. Patient randomization is performed according to the MATRIX protocol for the femoral or radial approach. A further randomization for the radial approach is performed to compare right and left transradial access. The RAD-MATRIX study will likely help clarify the radiation issue for interventional cardiologists by comparing transradial and transfemoral access in the setting of acute coronary syndromes. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Shi, Aiye; Wang, Chao; Shen, Shaohong; Huang, Fengchen; Ma, Zhenli
2016-10-01
Chi-squared transform (CST), as a statistical method, can describe the degree of difference between vectors. CST-based methods operate directly on information stored in the difference image and are simple and effective methods for detecting changes in remotely sensed images that have been registered and aligned. However, the technique does not take spatial information into consideration, which leads to considerable noise in the change-detection result. An improved unsupervised change detection method is proposed based on spatial-constraint CST (SCCST) in combination with a Markov random field (MRF) model. First, the mean and variance matrix of the difference image of bitemporal images are estimated by an iterative trimming method. In each iteration, spatial information is injected to reduce scattered changed points (also known as "salt and pepper" noise). To determine the key parameter of the SCCST method, the confidence level, a pseudotraining dataset is constructed to estimate the optimal value. Then the result of SCCST, as an initial solution of change detection, is further improved by the MRF model. Experiments on simulated and real multitemporal and multispectral images indicate that the proposed method performs well on comprehensive indices compared with other methods.
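The plain CST step (before the spatial constraint and MRF refinement) can be sketched as follows; the function name, band count, and synthetic no-change data are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.stats import chi2

def cst_change_map(diff, mean, cov, confidence=0.99):
    """Chi-squared transform on a B-band difference image: the Mahalanobis
    distance of each pixel's difference vector is approximately chi-squared
    with B degrees of freedom under the no-change hypothesis, so pixels
    above the chosen quantile are flagged as changed."""
    h, w, b = diff.shape
    d = diff.reshape(-1, b) - mean
    z = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)  # per-pixel statistic
    return (z > chi2.ppf(confidence, df=b)).reshape(h, w)

rng = np.random.default_rng(1)
diff = rng.multivariate_normal(np.zeros(4), np.eye(4), size=(32, 32))  # no-change only
changed = cst_change_map(diff, mean=np.zeros(4), cov=np.eye(4))
```

The SCCST refinement would additionally incorporate each pixel's spatial neighbourhood before thresholding, which is what suppresses the salt-and-pepper noise this plain version produces.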
Statistical significance test for transition matrices of atmospheric Markov chains
NASA Technical Reports Server (NTRS)
Vautard, Robert; Mo, Kingtse C.; Ghil, Michael
1990-01-01
Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
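A simplified Monte Carlo test in the spirit described above can be sketched as follows; the null model (independent draws with the observed marginal frequencies), function names, and the synthetic regime sequence are our own illustrative assumptions:

```python
import numpy as np

def transition_counts(seq, k):
    """Count observed transitions i -> j in a sequence of regime labels 0..k-1."""
    C = np.zeros((k, k), dtype=int)
    np.add.at(C, (seq[:-1], seq[1:]), 1)
    return C

def mc_transition_pvalues(seq, k, n_sim=2000, rng=None):
    """Monte Carlo significance test for a transition matrix: under the null,
    regime labels are drawn independently with the observed marginal
    frequencies; the p-value of entry (i, j) is the fraction of simulated
    count matrices whose (i, j) count reaches the observed one."""
    if rng is None:
        rng = np.random.default_rng(0)
    obs = transition_counts(seq, k)
    p_marg = np.bincount(seq, minlength=k) / len(seq)
    exceed = np.zeros((k, k))
    for _ in range(n_sim):
        sim = rng.choice(k, size=len(seq), p=p_marg)
        exceed += transition_counts(sim, k) >= obs
    return exceed / n_sim    # small values flag unusually frequent transitions

rng = np.random.default_rng(7)
seq = np.repeat(rng.choice(3, size=80), 10)   # persistent chain: long regime runs
pvals = mc_transition_pvalues(seq, k=3, rng=rng)
```

For a persistent chain like this, the diagonal (self-transition) entries come out highly significant, which is the kind of preferred-transition structure the test is meant to detect.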
Characterizations of matrix and operator-valued Φ-entropies, and operator Efron–Stein inequalities
Cheng, Hao-Chung; Hsieh, Min-Hsiu
2016-01-01
We derive new characterizations of the matrix Φ-entropy functionals introduced in Chen & Tropp (Chen, Tropp 2014 Electron. J. Prob. 19, 1–30. (doi:10.1214/ejp.v19-2964)). These characterizations help us to better understand the properties of matrix Φ-entropies, and are a powerful tool for establishing matrix concentration inequalities for random matrices. Then, we propose an operator-valued generalization of matrix Φ-entropy functionals, and prove the subadditivity under Löwner partial ordering. Our results demonstrate that the subadditivity of operator-valued Φ-entropies is equivalent to the convexity. As an application, we derive the operator Efron–Stein inequality. PMID:27118909
QCD Dirac operator at nonzero chemical potential: lattice data and matrix model.
Akemann, Gernot; Wettig, Tilo
2004-03-12
Recently, a non-Hermitian chiral random matrix model was proposed to describe the eigenvalues of the QCD Dirac operator at nonzero chemical potential. This matrix model can be constructed from QCD by mapping it to an equivalent matrix model which has the same symmetries as QCD with chemical potential. Its microscopic spectral correlations are conjectured to be identical to those of the QCD Dirac operator. We investigate this conjecture by comparing large ensembles of Dirac eigenvalues in quenched SU(3) lattice QCD at a nonzero chemical potential to the analytical predictions of the matrix model. Excellent agreement is found in the two regimes of weak and strong non-Hermiticity, for several different lattice volumes.
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
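The series-of-Markov-processes idea can be sketched as below; the correlation times and variances are hypothetical placeholders (the paper fits them to a specific oscillator's Allan variance), and only the analytic PSD sum is shown:

```python
import numpy as np

def markov_psd(f, sigma, tau):
    """One-sided power spectral density of a first-order (Gauss-Markov)
    process with steady-state variance sigma^2 and correlation time tau."""
    return 4 * sigma**2 * tau / (1 + (2 * np.pi * f * tau) ** 2)

def summed_markov_psd(f, sigmas, taus):
    """Approximate a clock-noise PSD as the sum of five first-order Markov
    processes, here with one correlation time per decade."""
    return sum(markov_psd(f, s, t) for s, t in zip(sigmas, taus))

f = np.logspace(-4, 0, 200)           # frequency grid, Hz
taus = 10.0 ** np.arange(0, 5)        # correlation times: 1 s ... 10^4 s
sigmas = np.ones(5)                   # equal variances, a rough flicker-like band
S = summed_markov_psd(f, sigmas, taus)
```

Each Markov term contributes a Lorentzian with its own corner frequency; spacing the corners across decades lets the sum track a slowly falling clock-noise spectrum over the covered band.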
Özkal, Can Burak; Frontistis, Zacharias; Antonopoulou, Maria; Konstantinou, Ioannis; Mantzavinos, Dionissios; Meriç, Süreyya
2017-10-01
Photocatalytic degradation of the sulfamethoxazole (SMX) antibiotic has been studied under recycling batch and homogeneous flow conditions in a thin-film-coated immobilized system, namely a parallel-plate (PPL) reactor. The experiments were statistically designed and evaluated with a factorial design (FD) approach intended to provide a mathematical model that takes into account the parameters influencing process performance. Initial antibiotic concentration, UV energy level, irradiated surface area, water matrix (ultrapure water and secondary treated wastewater) and time were defined as model parameters. A full 2^5 experimental design consisted of 32 randomized experiments. PPL reactor test experiments were carried out to set boundary levels for hydraulic, volumetric and defined process parameters. TTIP-based thin films with polyethylene glycol + TiO2 additives were fabricated according to the previously described methodology. Antibiotic degradation was monitored by High Performance Liquid Chromatography analysis, while the degradation products were identified by LC-TOF-MS analysis. Acute toxicity of untreated and treated SMX solutions was tested by the standard Daphnia magna method. Based on the obtained mathematical model, the response of the immobilized PC system is described by a polynomial equation. The statistically significant positive effects are initial SMX concentration, process time and their combined effect, while the combined effect of water matrix and irradiated surface area has an adverse effect on the rate of antibiotic degradation by photocatalytic oxidation. Process efficiency and the validity of the acquired mathematical model were also verified for the levofloxacin and cefaclor antibiotics. Immobilized PC degradation in the PPL reactor configuration was found capable of reducing effluent toxicity by simultaneous degradation of the SMX parent compound and TBPs. Copyright © 2017. Published by Elsevier B.V.
Lemos, George Azevedo; Rissi, Renato; de Souza Pires, Ivan Luiz; de Oliveira, Letícia Prado; de Aro, Andrea Aparecida; Pimentel, Edson Rosa; Palomari, Evanisi Teresa
2016-08-01
The objective of this study was to characterize the morphological and biochemical action of low-level laser therapy (LLLT) on induced arthritis in the temporomandibular joint (TMJ) of rats. Twenty-four male Wistar rats were randomly divided into groups of 12 animals each: (AG) group with arthritis induced in the left TMJ and (LG) group with arthritis induced in the left TMJ and treated with LLLT (830 nm, 30 mW, 3 J/cm(2)). Right TMJs in the AG group were used as a noninjected control group (CG). Arthritis was induced by intra-articular injection of 50 μl Complete Freund's Adjuvant (CFA), and LLLT began 1 week after arthritis induction. Histopathological analysis was performed using sections stained with hematoxylin-eosin, Toluidine Blue, and picrosirius. Biochemical analysis was determined by the total concentration of sulfated glycosaminoglycans (GAGs) and evaluation of matrix metalloproteinases (MMP-2 and MMP-9). Statistical analysis was performed using paired and unpaired t tests, with p < 0.05. Compared to AG, LG had minor histopathological changes in the TMJ, smaller thickness of the articular disc in the anterior (p < 0.0001), middle (p < 0.0001) and posterior regions (p < 0.0001), higher birefringence of collagen fibers in the anterior (p < 0.0001), middle (p < 0.0001) and posterior regions (p < 0.0001) of the articular disc, and statistically lower activity of MMP-2 latent (p < 0.0001), MMP-2 active (p = 0.02), MMP-9 latent (p < 0.0001), and MMP-9 active (p < 0.0001). These results suggest that LLLT can increase remodeling and enhance tissue repair in the TMJ with induced arthritis.
Random acoustic metamaterial with a subwavelength dipolar resonance.
Duranteau, Mickaël; Valier-Brasier, Tony; Conoir, Jean-Marc; Wunenburger, Régis
2016-06-01
The effective velocity and attenuation of longitudinal waves through random dispersions of rigid, tungsten-carbide beads in an elastic matrix made of epoxy resin in the range of beads volume fraction 2%-10% are determined experimentally. The multiple scattering model proposed by Luppé, Conoir, and Norris [J. Acoust. Soc. Am. 131(2), 1113-1120 (2012)], which fully takes into account the elastic nature of the matrix and the associated mode conversions, accurately describes the measurements. Theoretical calculations show that the rigid particles display a local, dipolar resonance which shares several features with Minnaert resonance of bubbly liquids and with the dipolar resonance of core-shell particles. Moreover, for the samples under study, the main cause of smoothing of the dipolar resonance of the scatterers and the associated variations of the effective mass density of the dispersions is elastic relaxation, i.e., the finite time required for the shear stresses associated to the translational motion of the scatterers to propagate through the matrix. It is shown that its influence is governed solely by the value of the particle to matrix mass density contrast.
NASA Astrophysics Data System (ADS)
Fang, Dong-Liang; Faessler, Amand; Šimkovic, Fedor
2018-04-01
In this paper, with restored isospin symmetry, we evaluate the neutrinoless double-β-decay nuclear matrix elements for 76Ge, 82Se, 130Te, 136Xe, and 150Nd for both the light and heavy neutrino mass mechanisms, using the deformed quasiparticle random-phase approximation approach with realistic forces. We give detailed decompositions of the nuclear matrix elements over different intermediate states and nucleon pairs, and discuss how these decompositions are affected by the model space truncations. Compared to the spherical calculations, our results show reductions of 30% to about 60% of the nuclear matrix elements for the calculated isotopes, mainly due to the presence of the BCS overlap factor between the initial and final ground states. The comparison between different nucleon-nucleon (NN) forces with corresponding short-range correlations shows that the choice of the NN force gives roughly 20% deviations for the light neutrino exchange mechanism and much larger deviations for the heavy neutrino exchange mechanism.
NASA Technical Reports Server (NTRS)
Bollman, W. E.; Chadwick, C.
1982-01-01
A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of the random Delta v standard deviation using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCMs consisting of both a deterministic and a random component. The plots provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
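The Monte Carlo approach can be illustrated in a few lines; the bias vector and sigma below are arbitrary illustrations, not the paper's parameter grid:

```python
import numpy as np

def dv_magnitude_stats(bias, sigma, n=200_000, rng=None):
    """Monte Carlo statistics of the TCM magnitude |Delta v| when the
    maneuver is a fixed deterministic (bias) vector plus an isotropic
    Gaussian execution error with standard deviation sigma per axis."""
    if rng is None:
        rng = np.random.default_rng(0)
    dv = np.asarray(bias) + sigma * rng.standard_normal((n, 3))
    mag = np.linalg.norm(dv, axis=1)
    return mag.mean(), np.percentile(mag, 99)

# Purely random TCM: |Delta v| is Maxwell distributed, mean = 2*sigma*sqrt(2/pi).
mean0, p99_0 = dv_magnitude_stats(bias=[0.0, 0.0, 0.0], sigma=1.0)

# A deterministic component shifts the whole magnitude distribution upward.
mean_b, p99_b = dv_magnitude_stats(bias=[5.0, 0.0, 0.0], sigma=1.0)
```

Sweeping the bias magnitude and sigma over a grid of values and plotting the resulting means and percentiles reproduces the kind of parametric plots the paper describes.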
Chevalier, Grégoire; Cherkaoui, Selma; Kruk, Hanna; Bensaïd, Xavier; Danan, Marc
2016-08-24
A xenogeneic collagen matrix has recently been suggested as an alternative to connective tissue graft (CTG) for the treatment of gingival recession. The matrix avoids a second surgical site and consequently could decrease surgical morbidity. This new matrix was used in various clinical situations and compared to CTG in a split-mouth-design case series. A total of 17 recessions were treated with a coronally advanced flap: 9 with CTG and 8 with the matrix. Mean recession reduction was 2.00 mm with the CTG and 2.00 mm with the matrix. No statistically significant differences between the techniques were observed in this case report.
Schneider, David; Schmidlin, Patrick R; Philipp, Alexander; Annen, Beat M; Ronay, Valerie; Hämmerle, Christoph H F; Attin, Thomas; Jung, Ronald E
2014-06-01
To volumetrically evaluate soft tissue changes of different ridge preservation techniques compared to spontaneous healing 6 months after tooth extraction. In each of 40 patients, one single-rooted tooth was extracted and four treatment modalities were randomly assigned to the following groups (n = 10 each): A) β-tricalcium phosphate particles with a polylactide coating (β-TCP), B) demineralized bovine bone mineral with 10% collagen covered with a collagen matrix (DBBM-C/CM), C) DBBM with 10% collagen covered with an autogenous soft tissue punch graft (DBBM-C/PG), D) spontaneous healing (control). Impressions were obtained before extraction and 6 months later; casts were digitized and volumetric changes at the buccal soft tissues were determined. A one-way ANOVA was performed, and pairwise Wilcoxon rank-sum tests with the Bonferroni-Holm method were applied for comparison of differences between two groups. After 6 months, horizontal contour changes accounted for -1.7 ± 0.7 mm (A), -1.2 ± 0.5 mm (B), -1.2 ± 0.7 mm (C) and -1.8 ± 0.8 mm (D). None of the group comparisons reached statistical significance. Six months after tooth extraction all groups revealed a horizontal volume change in the buccal soft tissue contour. Application of DBBM-C/CM or DBBM-C/PG reduced the amount of volume resorption compared to β-TCP or spontaneous healing, without reaching a statistically significant difference. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Random Matrix Theory in molecular dynamics analysis.
Palese, Luigi Leonardo
2015-01-01
It is well known that, in some situations, principal component analysis (PCA) carried out on molecular dynamics data results in the appearance of cosine-shaped low-index projections. Because this is reminiscent of the results obtained by performing PCA on multidimensional Brownian dynamics, it has been suggested that short-time protein dynamics is essentially nothing more than a noisy signal. Here we use Random Matrix Theory to analyze a series of short-time molecular dynamics experiments which are specifically designed to be simulations with high cosine content. We use as a model system the protein apoCox17, a mitochondrial copper chaperone. Spectral analysis of correlation matrices allows us to easily differentiate random correlations, deriving simply from the finite length of the process, from non-random signals reflecting the intrinsic properties of the system. Our results clearly show that protein dynamics is not truly Brownian, even in the presence of cosine-shaped low-index projections on the principal axes. Copyright © 2014 Elsevier B.V. All rights reserved.
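The cosine-shaped projections mentioned above can be reproduced in a few lines. The sketch below (illustrative, not from the paper; trajectory length and dimension are arbitrary choices) runs PCA on a pure random walk and computes the cosine content of the first principal projection, which is close to 1 for free diffusion:

```python
import numpy as np

rng = np.random.default_rng(0)

# A free "Brownian dynamics" trajectory: T frames of an N-dimensional
# random walk, mimicking the degrees of freedom of a short MD run.
T, N = 2000, 50
traj = np.cumsum(rng.standard_normal((T, N)), axis=0)
traj -= traj.mean(axis=0)

# PCA via SVD: the columns of U are the projections on principal axes.
U, s, Vt = np.linalg.svd(traj, full_matrices=False)
proj1 = U[:, 0]

# Cosine content of the first projection: near 1 for pure diffusion,
# so a cosine-shaped projection alone does not prove interesting dynamics.
t = np.arange(T)
cosine = np.cos(np.pi * (t + 0.5) / T)
c1 = (cosine @ proj1) ** 2 / ((cosine @ cosine) * (proj1 @ proj1))
```

This is exactly the ambiguity the abstract addresses: the RMT spectral analysis is needed precisely because high cosine content arises even from noise.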
Nevins, Marc L; Camelo, Marcelo; Schupbach, Peter; Nevins, Myron; Kim, Soo-Woo; Kim, David M
2011-01-01
The objective of this study was to assess the osseous healing of buccal plate extraction socket defects. There were four cohorts: group A (mineral collagen bone substitute [MCBS] scaffold alone), group B (MCBS with recombinant human platelet-derived growth factor BB [rhPDGF-BB; 0.3 mg/mL]), group C (MCBS with enamel matrix derivative [EMD]), and group D (combination of EMD with bone ceramic). The primary outcome of bone quality was evaluated using light microscopy, backscatter scanning electron microscopy, and histomorphometrics. Reentry surgery provided an opportunity for clinical observation of the healed ridge morphology. Sixteen patients with buccal wall extraction socket defects were randomized into four treatment groups of equal size. Grafting was provided at the time of extraction with advancement of the buccal flap for primary closure. A trephine core biopsy of the implant site preparation was performed after 5 months for implant placement. Histologic examination identified new bone healing around the biomaterial scaffolds. Statistically significant differences in new bone formation were not observed among the treatment groups. There was a histomorphometric trend toward more new bone for the rhPDGF-BB-treated group (group B). This group had the most favorable ridge morphology for optimal implant placement.
Tal, Haim; Moses, Ofer; Zohar, Ron; Meir, Haya; Nemcovsky, Carlos
2002-12-01
Acellular dermal matrix allograft (ADMA) has successfully been applied as a substitute for free connective tissue grafts (CTG) in various periodontal procedures, including root coverage. The purpose of this study was to clinically compare the efficiency of ADMA and CTG in the treatment of gingival recessions > or = 4 mm. Seven patients with bilateral recession lesions participated. Fourteen teeth presenting gingival recessions > or = 4 mm were randomly treated with ADMA or CTG covered by coronally advanced flaps. Recession, probing depth, and width of keratinized tissue were measured preoperatively and 12 months postoperatively. Changes in these clinical parameters were calculated within and compared between groups and analyzed statistically. Baseline recession, probing depth, and keratinized tissue width were similar for both groups. At 12 months, root coverage gain was 4.57 mm (89.1%) versus 4.29 mm (88.7%) (P = NS), and keratinized tissue gain was 0.86 mm (36%) versus 2.14 mm (107%) (P < 0.05) for ADMA and CTG, respectively. Probing depth remained unchanged (0.22 mm/0 mm), with no difference between the groups. Recession defects may be covered using ADMA or CTG, with no practical difference. However, CTG results in significantly greater gain of keratinized gingiva.
Parallel solution of the symmetric tridiagonal eigenproblem. Research report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1989-10-01
This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
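The bisection-plus-inverse-iteration combination studied in the thesis is available today through LAPACK. A short sketch (illustrative; the random test matrix and the residual/orthogonality metrics mirror the accuracy measures discussed above) using SciPy's tridiagonal eigensolver with the bisection-based driver:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(1)
n = 200
d = rng.standard_normal(n)        # diagonal of the tridiagonal matrix
e = rng.standard_normal(n - 1)    # off-diagonal

# Bisection for eigenvalues, inverse iteration for eigenvectors
# (LAPACK stebz/stein), the combination found fastest and most parallel.
w, V = eigh_tridiagonal(d, e, lapack_driver='stebz')

# Residual error and orthogonality, the accuracy measures of the thesis.
A = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
residual = np.max(np.abs(A @ V - V * w))
ortho = np.max(np.abs(V.T @ V - np.eye(n)))
```

For well-separated eigenvalues both metrics sit near machine precision; clustered eigenvalues are where inverse iteration needs the careful reorthogonalization the thesis analyzes.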
Proffen, Benedikt L.; Perrone, Gabriel S.; Fleming, Braden C.; Sieker, Jakob T.; Kramer, Joshua; Hawes, Michael L.; Badger, Gary J.; Murray, Martha M.
2015-01-01
Purpose: Extracellular matrix (ECM) scaffolds have been used to enhance anterior cruciate ligament (ACL) repair in large animal models. To translate this technology to clinical care, identifying a method which effectively sterilizes the material without significantly impairing in vivo function is desirable. Methods: Sixteen Yorkshire pigs underwent ACL transection and were randomly assigned to bridge-enhanced ACL repair (primary suture repair of the ACL with the addition of an autologous blood-soaked ECM scaffold) with either 1) an aseptically processed ECM scaffold, or 2) an electron-beam-irradiated ECM scaffold. Primary outcome measures included sterility of the scaffold and biomechanical properties of the scaffold itself and of the repaired ligament at eight weeks after surgery. Results: Scaffolds treated with 15 kGy electron beam irradiation had no bacterial or fungal growth, while aseptically processed scaffolds had bacterial growth in all tested samples. The mean biomechanical properties of the scaffold and healing ligament were lower in the electron beam group; however, the differences were not statistically significant. Conclusions: Electron beam irradiation was able to effectively sterilize the scaffolds. In addition, this technique had only a minimal impact on the in vivo function of the scaffolds when used for ligament healing in the porcine model. PMID:25676876
Temporal evolution of financial-market correlations.
Fenn, Daniel J; Porter, Mason A; Williams, Stacy; McDonald, Mark; Johnson, Neil F; Jones, Nick S
2011-08-01
We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.
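The random-matrix baseline used here is the Marchenko-Pastur law: for uncorrelated returns, all correlation eigenvalues fall inside a known bulk, and genuine market structure shows up as eigenvalues escaping it. A minimal sketch (synthetic data and factor strength are illustrative assumptions, not the paper's dataset):

```python
import numpy as np

rng = np.random.default_rng(7)

# N uncorrelated "assets" over T time steps: correlation eigenvalues
# should stay inside the Marchenko-Pastur bulk.
N, T = 100, 500
returns = rng.standard_normal((T, N))
C = np.corrcoef(returns, rowvar=False)
eigs = np.linalg.eigvalsh(C)

q = N / T
lam_max = (1 + np.sqrt(q)) ** 2   # upper edge of the MP spectrum

# A common factor (a "market mode") pushes one eigenvalue far above
# the bulk, the kind of structure detected in real asset correlations.
market = rng.standard_normal((T, 1))
C2 = np.corrcoef(returns + 2.0 * market, rowvar=False)
top = np.linalg.eigvalsh(C2)[-1]
```

The escaped eigenvalue's eigenvector is the principal component whose time evolution the paper tracks across the 2007-2008 crisis.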
ppcor: An R Package for a Fast Calculation to Semi-partial Correlation Coefficients.
Kim, Seongho
2015-11-01
Lack of a general matrix formula hampers implementation of the semi-partial correlation, also known as part correlation, at higher orders. This is because a higher-order semi-partial correlation computed via the recursive formula requires an enormous number of recursive calculations to obtain the correlation coefficients. To resolve this difficulty, we derive a general matrix formula of the semi-partial correlation for fast computation. The semi-partial correlations are then implemented in an R package, ppcor, along with the partial correlation. Owing to the general matrix formulas, users can readily calculate the coefficients of both partial and semi-partial correlations without computational burden. The package ppcor further provides users with the level of statistical significance along with the corresponding test statistic.
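To make the quantity concrete: the semi-partial correlation of x with y, controlling z on y only, can be computed directly from its definition (correlate x with the regression residual of y on the controls) and checked against the classical first-order recursive formula the abstract alludes to. This NumPy sketch is illustrative and is not ppcor's matrix formula:

```python
import numpy as np

rng = np.random.default_rng(3)

def spcor(x, y, Z):
    """Semi-partial (part) correlation of x with y, removing the
    linear effect of the controls Z from y only."""
    Z1 = np.column_stack([np.ones(len(y)), Z])
    beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    resid = y - Z1 @ beta
    return np.corrcoef(x, resid)[0, 1]

# Check against the first-order recursive formula
# sp = (r_xy - r_xz * r_yz) / sqrt(1 - r_yz^2).
n = 5000
z = rng.standard_normal(n)
x = z + rng.standard_normal(n)
y = 0.5 * z + rng.standard_normal(n)
r = np.corrcoef(np.column_stack([x, y, z]), rowvar=False)
expected = (r[0, 1] - r[0, 2] * r[1, 2]) / np.sqrt(1 - r[1, 2] ** 2)
got = spcor(x, y, z.reshape(-1, 1))
```

Chaining the recursion for many controls is what becomes prohibitive; ppcor's contribution is a single matrix expression that avoids it.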
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output.
The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
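In the spirit of the closing remark above, here is the kind of simple transport routine students are asked to write: an analog Monte Carlo estimate of transmission through a 1-D slab, with exponential flight-distance sampling and a scatter-or-absorb decision at each collision. The cross sections and slab thickness are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(123)

def slab_transmission(sigma_t, sigma_s, thickness, n=50_000):
    """Analog Monte Carlo for a 1-D slab: sample exponential flight
    distances, then scatter isotropically or absorb at each collision."""
    transmitted = 0
    for _ in range(n):
        x, mu = 0.0, 1.0          # particle enters normally at x = 0
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)
            if x >= thickness:
                transmitted += 1  # escaped through the far face
                break
            if x < 0.0:
                break             # leaked back out of the near face
            if rng.random() < sigma_s / sigma_t:
                mu = 2.0 * rng.random() - 1.0   # isotropic scattering
            else:
                break             # absorbed
    return transmitted / n

# Pure absorber: transmission must approach the analytic exp(-sigma_t * L).
t = slab_transmission(sigma_t=1.0, sigma_s=0.0, thickness=2.0)
```

The pure-absorber case has a closed-form answer, which is the standard first sanity check before adding scattering, tallies, and variance reduction.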
A pedagogical derivation of the matrix element method in particle physics data analysis
NASA Astrophysics Data System (ADS)
Sumowidagdo, Suharyo
2018-03-01
The matrix element method provides a direct connection between the underlying theory of particle physics processes and detector-level physical observables. I present a pedagogically oriented derivation of the matrix element method, drawing on elementary concepts in probability theory, statistics, and the process of experimental measurement. The level of treatment should be suitable for beginning research students in phenomenology and experimental high-energy physics.
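The chain the derivation formalizes (theory amplitude, to normalized event PDF, to per-event likelihood) can be shown in a toy setting. The sketch below is an illustrative stand-in, not the paper's derivation: events on [-1, 1] with a "matrix element" squared proportional to (1 + theta*x), and theta estimated by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(12)

# Toy process: normalized event PDF p(x|theta) = (1 + theta*x) / 2
# on [-1, 1], playing the role of |M|^2 / normalization.
theta_true = 0.6
x = 2 * rng.random(20_000) - 1
u = rng.random(20_000)
# Accept-reject sampling from p(x|theta_true) with a uniform proposal.
data = x[u < (1 + theta_true * x) / (1 + abs(theta_true))]

def nll(theta):
    """Negative log-likelihood summed over events."""
    return -np.sum(np.log((1 + theta * data) / 2))

fit = minimize_scalar(nll, bounds=(-0.99, 0.99), method='bounded')
```

Real analyses replace the toy PDF with the differential cross section convolved with transfer functions for detector resolution, but the probabilistic skeleton is the same.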
Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection
NASA Astrophysics Data System (ADS)
Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd
2015-02-01
Microarray technology involves placing an orderly arrangement of thousands of gene sequences in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. The widespread use of microarray technology is largely due to its ability to perform simultaneous analysis of thousands of genes in a massively parallel manner in a single experiment. Hence, it provides valuable knowledge on gene interaction and function. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples due to various constraints. Therefore, the sample covariance matrix in Hotelling's T2 statistic is not positive definite and becomes singular, so it cannot be inverted. In this research, the Hotelling's T2 statistic is combined with a shrinkage approach as an alternative estimation of the covariance matrix to detect significant gene sets. The use of a shrinkage covariance matrix overcomes the singularity problem by converting an unbiased estimator into an improved, biased estimator of the covariance matrix. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs. The results are expected to outperform existing techniques in many tested conditions.
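The singularity problem and the shrinkage fix can be shown in a few lines. The sketch below uses a fixed shrinkage intensity toward a diagonal target and a plain mean rather than the paper's robust trimmed mean and estimated intensity; those substitutions, and the data dimensions, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)

def shrunk_hotelling_t2(X, mu0, lam=0.5):
    """Hotelling's T2 with the sample covariance shrunk toward a
    diagonal target, so the statistic stays defined when p > n
    (singular sample covariance). lam is a fixed shrinkage intensity."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    target = np.diag(np.diag(S))
    S_shrunk = (1 - lam) * S + lam * target   # positive definite for lam > 0
    diff = xbar - mu0
    return n * diff @ np.linalg.solve(S_shrunk, diff)

# p > n: the ordinary covariance of 10 samples of 50 "genes" is singular,
# but the shrunk estimator is invertible and T2 is well defined.
X = rng.standard_normal((10, 50))
t2 = shrunk_hotelling_t2(X, np.zeros(50))
```

The paper's contribution sits inside this template: choosing the shrinkage target and intensity, and replacing the mean with a trimmed mean for outlier resistance.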
Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.
Martínez, C A; Khare, K; Rahman, S; Elzo, M A
2017-10-01
Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems, and it is an area that has recently experienced a great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and in accuracies of predicted breeding values were found. Our models account for correlation of marker effects and permit accommodation of general covariance structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information into the prediction process through its use in constructing the graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.
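The defining property of a covariance graph model (marginal independence encoded as zeros of the covariance, off the edges of G) is easy to illustrate. This small sketch is an illustrative construction, not the paper's Bayesian estimators; the graph and effect sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(10)

# In a Gaussian covariance graph model, marker effects i and j are
# marginally independent whenever edge (i, j) is absent from G:
# the covariance matrix inherits the zero pattern of the graph.
G = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]])            # undirected graph adjacency
Sigma = np.eye(4) + 0.4 * G             # covariance with zeros off the graph

# Sampled "marker effects" reproduce the zero pattern empirically.
effects = rng.multivariate_normal(np.zeros(4), Sigma, size=50_000)
S = np.cov(effects, rowvar=False)
```

Estimation goes the other way: given data, the Bayesian methods in the paper infer a covariance constrained to a graph built from biological information.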
Bayesian statistical ionospheric tomography improved by incorporating ionosonde measurements
NASA Astrophysics Data System (ADS)
Norberg, Johannes; Virtanen, Ilkka I.; Roininen, Lassi; Vierinen, Juha; Orispää, Mikko; Kauristie, Kirsti; Lehtinen, Markku S.
2016-04-01
We validate two-dimensional ionospheric tomography reconstructions against EISCAT incoherent scatter radar measurements. Our tomography method is based on Bayesian statistical inversion with prior distribution given by its mean and covariance. We employ ionosonde measurements for the choice of the prior mean and covariance parameters and use the Gaussian Markov random fields as a sparse matrix approximation for the numerical computations. This results in a computationally efficient tomographic inversion algorithm with clear probabilistic interpretation. We demonstrate how this method works with simultaneous beacon satellite and ionosonde measurements obtained in northern Scandinavia. The performance is compared with results obtained with a zero-mean prior and with the prior mean taken from the International Reference Ionosphere 2007 model. In validating the results, we use EISCAT ultra-high-frequency incoherent scatter radar measurements as the ground truth for the ionization profile shape. We find that in comparison to the alternative prior information sources, ionosonde measurements improve the reconstruction by adding accurate information about the absolute value and the altitude distribution of electron density. With an ionosonde at continuous disposal, the presented method enhances stand-alone near-real-time ionospheric tomography for the given conditions significantly.
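The core computation, Bayesian linear inversion with a Gaussian prior whose mean and covariance carry the ionosonde information, can be sketched in a toy 1-D problem. Everything here (forward operator, prior parameters, noise level) is an illustrative assumption, not the paper's geometry:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "tomography": m noisy line integrals (random averaging kernels)
# of an electron-density profile on an n-point altitude grid.
m, n = 15, 60
A = rng.random((m, n)) / n
truth = np.exp(-0.5 * ((np.arange(n) - 30) / 8.0) ** 2)
noise_std = 0.01
y = A @ truth + noise_std * rng.standard_normal(m)

# Gaussian prior: an informed mean (standing in for the ionosonde
# profile) and a smooth exponential covariance.
mu_prior = 0.8 * truth
idx = np.arange(n)
C_prior = 0.1 * np.exp(-np.abs(np.subtract.outer(idx, idx)) / 5.0)

# Posterior mean of the Gaussian linear model:
# x = mu + C A^T (A C A^T + s^2 I)^(-1) (y - A mu)
K = C_prior @ A.T @ np.linalg.inv(A @ C_prior @ A.T + noise_std**2 * np.eye(m))
x_post = mu_prior + K @ (y - A @ mu_prior)
```

The paper's Gaussian Markov random field prior plays the role of C_prior here, but as a sparse precision matrix, which is what makes the full 2-D inversion computationally efficient.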
Statistical segmentation of multidimensional brain datasets
NASA Astrophysics Data System (ADS)
Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro
2001-07-01
This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques, such as partial volume effects (PVE), processing speed, and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation-Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of Gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of using the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
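Stage 2 above (EM estimation of a Gaussian-mixture PDF followed by maximum-posterior classification) can be sketched on synthetic 1-D "voxel intensities" with two tissue classes. The full method uses multidimensional data with full covariance matrices; this 1-D version, with made-up class means, is only an illustration of the EM step:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic voxel intensities: two tissue classes as a Gaussian mixture.
x = np.concatenate([rng.normal(40, 5, 3000), rng.normal(80, 8, 2000)])

# Expectation-Maximization for a 2-component 1-D Gaussian mixture.
w = np.array([0.5, 0.5])
mu = np.array([30.0, 90.0])
var = np.array([100.0, 100.0])
for _ in range(100):
    # E-step: posterior class responsibilities for every voxel.
    pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    resp = w * pdf
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances.
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

labels = resp.argmax(axis=1)   # maximum-posterior classification
```

Stage 3 of the paper then regularizes these per-voxel labels spatially with a Markov Random Field, which this sketch omits.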
Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B; Tamascelli, Dario; Montangero, Simone
2018-01-01
We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
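The randomized factorization in question is the Halko-Martinsson-Tropp scheme: project onto a random subspace, orthonormalize, and SVD the small projected matrix. A self-contained NumPy sketch on a synthetic low-rank matrix (illustrative; the tensor-network embedding of the paper is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

def randomized_svd(A, k, oversample=10, iters=2):
    """Randomized low-rank factorization (Halko et al. style): random
    range sketch, power iterations, QR, then a small deterministic SVD."""
    m, n = A.shape
    Y = A @ rng.standard_normal((n, k + oversample))
    for _ in range(iters):            # power iterations sharpen the range
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

# An exactly rank-k test matrix: the randomized factorization
# recovers it to roundoff, at a cost dominated by matrix products.
m, n, k = 300, 200, 10
B = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
U, s, Vt = randomized_svd(B, k)
err = np.linalg.norm(B - (U * s) @ Vt) / np.linalg.norm(B)
```

The speedups reported in the abstract come from substituting this kind of sketch-based truncation for the deterministic SVD inside each TEBD/DMRG bond update.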
Coherent Backscattering by Polydisperse Discrete Random Media: Exact T-Matrix Results
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.
2011-01-01
The numerically exact superposition T-matrix method is used to compute, for the first time to our knowledge, electromagnetic scattering by finite spherical volumes composed of polydisperse mixtures of spherical particles with different size parameters or different refractive indices. The backscattering patterns calculated in the far-field zone of the polydisperse multiparticle volumes reveal unequivocally the classical manifestations of the effect of weak localization of electromagnetic waves in discrete random media, thereby corroborating the universal interference nature of coherent backscattering. The polarization opposition effect is shown to be the least robust manifestation of weak localization fading away with increasing particle size parameter.
Studies on Relaxation Behavior of Corona Poled Aromatic Dipolar Molecules in a Polymer Matrix
1990-08-03
…concentration up to 30 weight percent. Orientation: as expected, optically responsive molecules are randomly oriented in the polymer matrix, although a small amount… The retention of SH intensity of a small molecule such as MNA was found to be very poor in the PMMA matrix, while the larger rodlike…
Comprehensive T-matrix Reference Database: A 2009-2011 Update
NASA Technical Reports Server (NTRS)
Zakharova, Nadezhda T.; Videen, G.; Khlebtsov, Nikolai G.
2012-01-01
The T-matrix method is one of the most versatile and efficient theoretical techniques widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of peer-reviewed T-matrix publications compiled by us previously and includes the publications that appeared since 2009. It also lists several earlier publications not included in the original database.
SUNPLIN: simulation with uncertainty for phylogenetic investigations.
Martins, Wellington S; Carmo, Welton C; Longo, Humberto J; Rosa, Thierson C; Rangel, Thiago F
2013-11-15
Phylogenetic comparative analyses usually rely on a single consensus phylogenetic tree in order to study evolutionary processes. However, most phylogenetic trees are incomplete with regard to species sampling, which may critically compromise analyses. Some approaches have been proposed to integrate non-molecular phylogenetic information into incomplete molecular phylogenies. An expanded tree approach consists of adding missing species to random locations within their clade. The information contained in the topology of the resulting expanded trees can be captured by the pairwise phylogenetic distance between species and stored in a matrix for further statistical analysis. Thus, the random expansion and processing of multiple phylogenetic trees can be used to estimate the phylogenetic uncertainty through a simulation procedure. Because of the computational burden required, unless this procedure is efficiently implemented, the analyses are of limited applicability. In this paper, we present efficient algorithms and implementations for randomly expanding and processing phylogenetic trees so that simulations involved in comparative phylogenetic analysis with uncertainty can be conducted in a reasonable time. We propose algorithms for both randomly expanding trees and calculating distance matrices. We made available the source code, which was written in the C++ language. The code may be used as a standalone program or as a shared object in the R system. The software can also be used as a web service through the link: http://purl.oclc.org/NET/sunplin/. We compare our implementations to similar solutions and show that significant performance gains can be obtained. Our results open up the possibility of accounting for phylogenetic uncertainty in evolutionary and ecological analyses of large datasets.
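The expanded-tree idea (attach a missing species to a random edge inside its clade, then read off pairwise phylogenetic distances) can be sketched compactly. This is an illustrative toy with parent pointers and made-up branch lengths, not SUNPLIN's C++ implementation; the node names and the midpoint-insertion rule are assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

# A tiny tree as parent pointers with branch lengths:
# root r; tip A; internal n1 with tips B, C.
parent = {"A": "r", "B": "n1", "C": "n1", "n1": "r", "r": None}
blen = {"A": 3.0, "B": 1.0, "C": 1.0, "n1": 2.0, "r": 0.0}

def attach_random(tip, clade_edges):
    """Insert `tip` at the midpoint of a randomly chosen edge of its clade."""
    child = clade_edges[rng.integers(len(clade_edges))]
    new = "x_" + tip
    parent[new], blen[new] = parent[child], blen[child] / 2
    parent[child], blen[child] = new, blen[child] / 2
    parent[tip], blen[tip] = new, 1.0

def path_to_root(u):
    d, dist = {}, 0.0
    while u is not None:
        d[u] = dist
        dist += blen[u]
        u = parent[u]
    return d

def distance(u, v):
    du, dv = path_to_root(u), path_to_root(v)
    return min(du[a] + dv[a] for a in du if a in dv)

# Species D is known only to belong to clade n1: expand, then build the
# pairwise distance matrix used for the downstream statistical analysis.
attach_random("D", ["B", "C"])
dm = np.array([[distance(u, v) for v in "ABCD"] for u in "ABCD"])
```

Repeating the expansion many times yields an ensemble of distance matrices, which is exactly the simulation-based estimate of phylogenetic uncertainty the paper accelerates.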
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, Alex H.; Betcke, Timo; School of Mathematics, University of Manchester, Manchester, M13 9PL
2007-12-15
We report the first large-scale statistical study of very high-lying eigenmodes (quantum states) of the mushroom billiard proposed by L. A. Bunimovich [Chaos 11, 802 (2001)]. The phase space of this mixed system is unusual in that it has a single regular region and a single chaotic region, and no KAM hierarchy. We verify Percival's conjecture to high accuracy (1.7%). We propose a model for dynamical tunneling and show that it predicts well the chaotic components of predominantly regular modes. Our model explains our observed density of such superpositions dying as E^(-1/3) (E is the eigenvalue). We compare eigenvalue spacing distributions against Random Matrix Theory expectations, using 16 000 odd modes (an order of magnitude more than any existing study). We outline new variants of mesh-free boundary collocation methods which enable us to achieve high accuracy and high mode numbers (~10^5) orders of magnitude faster than with competing methods.
Human Inferences about Sequences: A Minimal Transition Probability Model
2016-01-01
The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations includes explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge. PMID:28030543
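A minimal sketch of this kind of model: transition counts between two stimuli are exponentially discounted ("leaked") so the estimate can track a time-varying transition matrix, and surprise is read off as -log2 of the predicted probability. The leak parameter and function names below are illustrative, not the authors' code.

```python
import math

def leaky_transition_model(seq, leak=0.9, prior=1.0):
    """Track exponentially-discounted transition counts between two stimuli
    (coded 0/1) and return the surprise -log2 p(next | previous) at each step."""
    counts = [[prior, prior], [prior, prior]]  # counts[prev][next]
    surprises = []
    for prev, nxt in zip(seq, seq[1:]):
        total = counts[prev][0] + counts[prev][1]
        p = counts[prev][nxt] / total
        surprises.append(-math.log2(p))
        for i in (0, 1):            # forgetting: discount all counts
            for j in (0, 1):
                counts[i][j] *= leak
        counts[prev][nxt] += 1.0    # learn the observed transition
    return surprises
```

After a long run of alternations, a repetition produces a large surprise spike, which is the qualitative signature the abstract describes.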
Noise Response Data Reveal Novel Controllability Gramian for Nonlinear Network Dynamics
Kashima, Kenji
2016-01-01
Control of nonlinear large-scale dynamical networks, e.g., collective behavior of agents interacting via a scale-free connection topology, is a central problem in many scientific and engineering fields. For the linear version of this problem, the so-called controllability Gramian has played an important role in quantifying how effectively the dynamical states can be reached by a suitable driving input. In this paper, we first extend the notion of the controllability Gramian to nonlinear dynamics in terms of the Gibbs distribution. Next, we show that, when the networks are open to environmental noise, the newly defined Gramian is equal to the covariance matrix associated with randomly excited, but uncontrolled, dynamical state trajectories. This fact theoretically justifies a simple Monte Carlo simulation that can extract effectively controllable subdynamics in nonlinear complex networks. In addition, the result provides a novel insight into the relationship between controllability and statistical mechanics. PMID:27264780
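For a scalar linear system the claimed equality is easy to probe numerically: the stationary covariance of the noise-driven, uncontrolled state should match the Gramian value sigma^2/(2a). A hedged sketch using an Euler-Maruyama discretization; the parameter values are illustrative, not from the paper.

```python
import math, random

def simulate_ou(a=1.0, sigma=1.0, dt=0.01, steps=200_000, seed=42):
    """Euler-Maruyama simulation of dx = -a*x dt + sigma dW.
    Returns the empirical variance of the state, which for this linear
    system should approach the controllability-Gramian value sigma**2/(2*a)."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    sqdt = math.sqrt(dt)
    for k in range(steps):
        x += -a * x * dt + sigma * sqdt * rng.gauss(0.0, 1.0)
        if k > steps // 10:          # discard burn-in before stationarity
            samples.append(x)
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)
```

For vector systems the same Monte Carlo estimate yields the full state covariance matrix, which the paper identifies with a (suitably generalized) controllability Gramian.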
A Monte Carlo simulation based inverse propagation method for stochastic model updating
NASA Astrophysics Data System (ADS)
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM in combination with Monte Carlo simulation (MCS) reduces the computational cost and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of parameters are estimated simultaneously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization, thus achieving better correlation between simulation and test. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validate the feasibility and effectiveness of the proposed method.
Experimental quantum compressed sensing for a seven-qubit system
Riofrío, C. A.; Gross, D.; Flammia, S. T.; Monz, T.; Nigg, D.; Blatt, R.; Eisert, J.
2017-01-01
Well-controlled quantum devices with their increasing system size face a new roadblock hindering further development of quantum technologies. The effort of quantum tomography—the reconstruction of states and processes of a quantum device—scales unfavourably: state-of-the-art systems can no longer be characterized. Quantum compressed sensing mitigates this problem by reconstructing states from incomplete data. Here we present an experimental implementation of compressed tomography of a seven-qubit system—a topological colour code prepared in a trapped ion architecture. We are in the highly incomplete—127 Pauli basis measurement settings—and highly noisy—100 repetitions each—regime. Originally, compressed sensing was advocated for states with few non-zero eigenvalues. We argue that low-rank estimates are appropriate in general since statistical noise enables reliable reconstruction of only the leading eigenvectors. The remaining eigenvectors behave consistently with a random-matrix model that carries no information about the true state. PMID:28513587
Structuring Stokes correlation functions using vector-vortex beam
NASA Astrophysics Data System (ADS)
Kumar, Vijay; Anwar, Ali; Singh, R. P.
2018-01-01
Higher-order statistical correlations of the optical vector speckle field, formed due to scattering of a vector-vortex beam, are explored. Here, we report the experimental construction of the Stokes parameter covariance matrix, consisting of all possible spatial Stokes parameter correlation functions. We also propose and experimentally realize a new class of Stokes correlation functions, the Stokes field autocorrelation functions. It is observed that the Stokes correlation functions of the vector-vortex beam are reflected in the respective Stokes correlation functions of the corresponding vector speckle field. The major advantage of the proposed Stokes correlation functions is that they can be easily tuned by manipulating the polarization of the vector-vortex beam used to generate the vector speckle field, and that phase information can be obtained directly from intensity measurements. Moreover, this approach leads to a complete experimental Stokes characterization of a broad range of random fields.
Universal statistics of vortex tangles in three-dimensional random waves
NASA Astrophysics Data System (ADS)
Taylor, Alexander J.
2018-02-01
The tangled nodal lines (wave vortices) in random, three-dimensional wavefields are studied as an exemplar of a fractal loop soup. Their statistics are a three-dimensional counterpart to the characteristic random behaviour of nodal domains in quantum chaos, but in three dimensions the filaments can wind around one another to give distinctly different large scale behaviours. By tracing numerically the structure of the vortices, their conformations are shown to follow recent analytical predictions for random vortex tangles with periodic boundaries, where the local disorder of the model ‘averages out’ to produce large scale power law scaling relations whose universality classes do not depend on the local physics. These results explain previous numerical measurements in terms of an explicit effect of the periodic boundaries, where the statistics of the vortices are strongly affected by the large scale connectedness of the system even at arbitrarily high energies. The statistics are investigated primarily for static (monochromatic) wavefields, but the analytical results are further shown to directly describe the reconnection statistics of vortices evolving in certain dynamic systems, or occurring during random perturbations of the static configuration.
SPARSKIT: A basic tool kit for sparse matrix computations
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Presented here are the main features of a tool package for manipulating and working with sparse matrices. One of the goals of the package is to provide basic tools to facilitate the exchange of software and data between researchers in sparse matrix computations. The starting point is the Harwell/Boeing collection of matrices for which the authors provide a number of tools. Among other things, the package provides programs for converting data structures, printing simple statistics on a matrix, plotting a matrix profile, and performing linear algebra operations with sparse matrices.
A Note on Parameters of Random Substitutions by γ-Diagonal Matrices
NASA Astrophysics Data System (ADS)
Kang, Ju-Sung
Random substitution is a very useful and practical method for privacy-preserving schemes. In this paper we obtain the exact relationship between the estimation errors and the three parameters used in random substitutions, namely the privacy assurance metric γ, the total number n of data records, and the size N of the transition matrix. We also present simulations illustrating the theoretical result.
Structure of collagen-glycosaminoglycan matrix and the influence to its integrity and stability.
Bi, Yuying; Patra, Prabir; Faezipour, Miad
2014-01-01
Glycosaminoglycan (GAG) is a chain-like disaccharide that is linked to a polypeptide core to connect two collagen fibrils/fibers, providing the intermolecular force in the collagen-GAG matrix (C-G matrix). Thus, the distribution of GAG in the C-G matrix contributes to the integrity and mechanical properties of the matrix and related tissue. This paper analyzes the transverse isotropic distribution of GAG in the C-G matrix. The angle of GAGs relative to the collagen fibrils is used as a parameter to quantify the isotropic character of GAGs in both 3D and 2D renderings. Statistical results indicate that over one third of GAGs were directed perpendicular to the collagen fibrils, with a symmetrical distribution, in both the 3D matrix and a 2D plane crossing through the collagen fibrils. The three factors tested in this paper (collagen radius, collagen distribution, and GAG density) were not statistically significant for the strength of the C-G matrix in the 3D rendering. In the 2D rendering, however, a significant factor for GAGs directed into the orthogonal plane of the C-G matrix was the radius of the collagen in the matrix. Between the two cross-sections selected from the C-G matrix model, the plane crossing through the collagen fibrils was symmetrically distributed, but the total percentage of perpendicularly directed GAGs decreased with decreasing collagen radius. The GAG angle distribution in a selected 2D plane passing through the space between collagen fibrils showed some symmetry features, but most models showed multiple peaks in the GAG angle distribution. With fewer GAGs directed perpendicular to the collagen fibrils, strength in the collagen cross-section weakened. Collagen distribution was also a factor influencing the GAG angle distribution in the 2D rendering. True hexagonal collagen packing is reported in this paper to have less strength at the collagen cross-section than a quasi-hexagonal collagen arrangement. The focus of this work is on the GAG matrix within collagen and its relevance to anisotropy.
Local dependence in random graph models: characterization, properties and statistical inference
Schweinberger, Michael; Handcock, Mark S.
2015-01-01
Summary Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with ‘ground truth’. PMID:26560142
Subjective randomness as statistical inference.
Griffiths, Thomas L; Daniels, Dylan; Austerweil, Joseph L; Tenenbaum, Joshua B
2018-06-01
Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering. Copyright © 2018 Elsevier Inc. All rights reserved.
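The inference view sketched above can be illustrated with a toy log-odds computation: compare the probability of a binary sequence under a fair random source against a simple "regular" source that tends to repeat the previous symbol. This is only an illustrative stand-in for the restricted regularity models the paper develops; the repeat probability is an assumed parameter.

```python
import math

def randomness_score(bits, p_repeat=0.8):
    """Log-odds (in bits) that the sequence came from a fair random source
    rather than a simple 'regular' source that repeats the previous symbol
    with probability p_repeat. Higher = subjectively more random."""
    log_random = -float(len(bits))          # log2 of (1/2)**n
    log_regular = -1.0                      # first symbol: probability 1/2
    for prev, cur in zip(bits, bits[1:]):
        log_regular += math.log2(p_repeat if cur == prev else 1 - p_repeat)
    return log_random - log_regular
```

Under this score a run of eight identical outcomes provides strong evidence for the regular source (negative score), matching the intuition that such a sequence "does not seem very random".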
A statistical approach to selecting and confirming validation targets in -omics experiments
2012-01-01
Background Genomic technologies are, by their very nature, designed for hypothesis generation. In some cases, the hypotheses that are generated require that genome scientists confirm findings about specific genes or proteins. But one major advantage of high-throughput technology is that global genetic, genomic, transcriptomic, and proteomic behaviors can be observed. Manual confirmation of every statistically significant genomic result is prohibitively expensive. This has led researchers in genomics to adopt the strategy of confirming only a handful of the most statistically significant results, a small subset chosen for biological interest, or a small random subset. But there is no standard approach for selecting and quantitatively evaluating validation targets. Results Here we present a new statistical method and approach for statistically validating lists of significant results based on confirming only a small random sample. We apply our statistical method to show that the usual practice of confirming only the most statistically significant results does not statistically validate result lists. We analyze an extensively validated RNA-sequencing experiment to show that confirming a random subset can statistically validate entire lists of significant results. Finally, we analyze multiple publicly available microarray experiments to show that statistically validating random samples can both (i) provide evidence to confirm long gene lists and (ii) save thousands of dollars and hundreds of hours of labor over manual validation of each significant result. Conclusions For high-throughput -omics studies, statistical validation is a cost-effective and statistically valid approach to confirming lists of significant results. PMID:22738145
Optimized Projection Matrix for Compressive Sensing
NASA Astrophysics Data System (ADS)
Xu, Jianping; Pi, Yiming; Cao, Zongjie
2010-12-01
Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. To date, work on CS has typically assumed the projection matrix to be a random matrix. In this paper, a method is proposed to optimize the projection matrix by minimizing the mutual coherence. The method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. Since the problem cannot be solved exactly owing to its complexity, an alternating-minimization-type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the number of samples necessary for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which benefits both basis pursuit and orthogonal matching pursuit.
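The quantity being minimized, mutual coherence, is the largest absolute normalized inner product between distinct columns of the effective dictionary. A self-contained sketch of the computation (not the paper's ETF-based optimizer); columns are assumed nonzero.

```python
import math

def mutual_coherence(cols):
    """Maximum absolute normalized inner product between distinct columns,
    each column given as a list of floats (assumed nonzero)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    norms = [math.sqrt(dot(c, c)) for c in cols]
    mu = 0.0
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            mu = max(mu, abs(dot(cols[i], cols[j])) / (norms[i] * norms[j]))
    return mu
```

For an m × N dictionary the Welch bound sqrt((N - m) / (m * (N - 1))) is a lower limit on this quantity, and an ETF attains it, which is why ETF design guides the optimization.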
On the Wigner law in dilute random matrices
NASA Astrophysics Data System (ADS)
Khorunzhy, A.; Rodgers, G. J.
1998-12-01
We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
The Matrix Analogies Test: A Validity Study with the K-ABC.
ERIC Educational Resources Information Center
Smith, Douglas K.
The Matrix Analogies Test-Expanded Form (MAT-EF) and Kaufman Assessment Battery for Children (K-ABC) were administered in counterbalanced order to two randomly selected samples of students in grades 2 through 5. The MAT-EF was recently developed to measure non-verbal reasoning. The samples included 26 non-handicapped second graders in a rural…
Seabed mapping and characterization of sediment variability using the usSEABED data base
Goff, J.A.; Jenkins, C.J.; Jeffress, Williams S.
2008-01-01
We present a methodology for statistical analysis of randomly located marine sediment point data, and apply it to the US continental shelf portions of usSEABED mean grain size records. The usSEABED database, like many modern, large environmental datasets, is heterogeneous and interdisciplinary. We statistically test the database as a source of mean grain size data, and from it provide a first examination of regional seafloor sediment variability across the entire US continental shelf. Data derived from laboratory analyses ("extracted") and from word-based descriptions ("parsed") are treated separately, and they are compared statistically and deterministically. Data records are selected for spatial analysis by their location within sample regions: polygonal areas defined in ArcGIS chosen by geography, water depth, and data sufficiency. We derive isotropic, binned semivariograms from the data, and invert these for estimates of noise variance, field variance, and decorrelation distance. The highly erratic nature of the semivariograms is a result both of the random locations of the data and of the high level of data uncertainty (noise). This decorrelates the data covariance matrix for the inversion, and largely prevents robust estimation of the fractal dimension. Our comparison of the extracted and parsed mean grain size data demonstrates important differences between the two. In particular, extracted measurements generally produce finer mean grain sizes, lower noise variance, and lower field variance than parsed values. Such relationships can be used to derive a regionally dependent conversion factor between the two. Our analysis of sample regions on the US continental shelf revealed considerable geographic variability in the estimated statistical parameters of field variance and decorrelation distance. Some regional relationships are evident, and overall there is a tendency for field variance to be higher where the average mean grain size is finer grained. 
Surprisingly, parsed and extracted noise magnitudes correlate with each other, which may indicate that some portion of the data variability that we identify as "noise" is caused by real grain size variability at very short scales. Our analyses demonstrate that by applying a bias-correction proxy, usSEABED data can be used to generate reliable interpolated maps of regional mean grain size and sediment character.
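The binned semivariogram estimation described above can be sketched as follows, assuming 2D point locations and scalar mean grain size values; the binning scheme and names are illustrative, not the authors' code.

```python
import math

def binned_semivariogram(points, values, bin_width, nbins):
    """Empirical semivariogram gamma(h) = 0.5 * mean[(z_i - z_j)**2] over
    all point pairs whose separation distance falls in each bin.
    Bins with no pairs are returned as None."""
    sums = [0.0] * nbins
    counts = [0] * nbins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            h = math.hypot(dx, dy)
            b = int(h // bin_width)
            if b < nbins:
                sums[b] += 0.5 * (values[i] - values[j]) ** 2
                counts[b] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]
```

Inverting such a binned semivariogram for noise variance, field variance, and decorrelation distance is a separate model-fitting step not shown here.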
Effect of Oral Re-esterified Omega-3 Nutritional Supplementation on Dry Eyes.
Epitropoulos, Alice T; Donnenfeld, Eric D; Shah, Zubin A; Holland, Edward J; Gross, Michael; Faulkner, William J; Matossian, Cynthia; Lane, Stephen S; Toyos, Melissa; Bucci, Frank A; Perry, Henry D
2016-09-01
To assess the effect of oral re-esterified omega-3 fatty acids on tear osmolarity, matrix metalloproteinase-9 (MMP-9), tear break-up time (TBUT), Ocular Surface Disease Index (OSDI), fluorescein corneal staining, Schirmer score, meibomian gland dysfunction (MGD) stage and omega-3 index in subjects with dry eyes and confirmed MGD. This was a multicenter, prospective, interventional, placebo-controlled, double-masked study. Subjects were randomized to receive 4 softgels containing a total of 1680 mg of eicosapentaenoic acid/560 mg of docosahexaenoic acid or a control of 3136 mg of linoleic acid, daily for 12 weeks. Subjects were measured at baseline, week 6, and week 12 for tear osmolarity, TBUT, OSDI, fluorescein corneal staining, and Schirmer test with anesthesia. MMP-9 testing and omega-3 index were done at baseline and at 12 weeks. One hundred five subjects completed the study. They were randomized to the omega-3 (n = 54) and control (n = 51) groups. A statistically significant reduction in tear osmolarity was observed in the omega-3 group versus the control group at week 6 (-16.8 ± 2.6 vs. -9.0 ± 2.7 mOsm/L, P = 0.042) and week 12 (-19.4 ± 2.7 vs. -8.3 ± 2.8 mOsm/L, P = 0.004). At 12 weeks, a statistically significant increase in omega-3 index levels (P < 0.001) and TBUT (3.5 ± 0.5 s vs. 1.2 ± 0.5 s, P = 0.002) was also observed. The omega-3 group experienced a significant reduction in MMP-9 positivity versus the control group (67.9% vs. 35.0%, P = 0.024), and OSDI scores decreased significantly in the omega-3 group (-17.0 ± 2.6) versus the control group (-5.0 ± 2.7, P = 0.002). Oral consumption of re-esterified omega-3 fatty acids is associated with statistically significant improvement in tear osmolarity, omega-3 index levels, TBUT, MMP-9, and OSDI symptom scores.
Aslanides, Ioannis M; Selimis, Vasilis D; Bessis, Nikolaos V; Georgoudis, Panagiotis N
2015-01-01
We report our experience with the use of the matrix regenerating agent (RGTA) Cacicol(®) after reverse transepithelial all-surface laser ablation (ASLA)-SCHWIND to assess the safety, efficacy, pain, and epithelial healing. Forty eyes of 20 myopic patients were prospectively recruited to a randomized fellow eye study. Patients underwent transepithelial ASLA in both eyes, with one of the eyes randomly assigned to the use of the RGTA Cacicol. Postoperative pain and vision were subjectively assessed with the use of a questionnaire on the operative day, at 24 hours, 48 hours and 72 hours. Epithelial defect area size was measured at 24 hours, 48 hours, and 72 hours. Uncorrected distance visual acuity (UDVA) and corrected distance visual acuity (CDVA) were assessed at 1 month. Mean UDVA at 1 month was LogMAR 0.028. The epithelial defect area was 10.91 mm(2) and 13.28 mm(2) at 24 hours and 1.39 mm(2) and 1.24 mm(2) at 48 hours for treated and nontreated eyes, respectively. Overall, 50% and 65% of treated and nontreated eyes healed by 48 hours. There was no statistically significant difference in the subjective vision between the groups, although vision of patients in the RGTA group was reported to be better. Pain scores were better at 24 hours and 48 hours in the RGTA group but with no statistically significant difference. The use of RGTA Cacicol shows faster epithelial recovery after transepithelial ASLA for myopia. Subjectively reported scores of pain and subjective vision were better in the RGTA group, although the difference was not statistically significant. There seems to be a consensual acceleration of epithelial healing even in eyes that did not receive treatment. There were no adverse events and no incidents of inflammation, delayed healing, or haze.
Simulation Study of Evacuation Control Center Operations Analysis
2011-06-01
Randomized central limit theorems: A unified theory.
Eliazar, Iddo; Klafter, Joseph
2010-08-01
The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic: all ensemble components are scaled by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic: the ensemble components are scaled by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs), in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes, and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
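The classic extreme-value side of these theorems (the deterministic-scaling case that the paper then randomizes) is easy to verify by simulation: the maximum of n heavy-tailed variables, deterministically scaled by n^(1/alpha), approaches the Fréchet law. A minimal Monte Carlo probe with illustrative parameters:

```python
import random

def frechet_limit_probe(alpha=1.0, n=500, trials=3000, x=1.0, seed=1):
    """Estimate P(max of n iid Pareto(alpha) variables <= n**(1/alpha) * x),
    which should approach the Frechet CDF exp(-x**(-alpha)) as n grows."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Pareto(alpha) samples on [1, inf) via inverse-CDF: U**(-1/alpha)
        m = max((1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n))
        if m <= n ** (1.0 / alpha) * x:
            hits += 1
    return hits / trials
```

With alpha = 1 and x = 1 the estimate should settle near exp(-1) ≈ 0.368, the Fréchet CDF value at 1.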
CMV matrices in random matrix theory and integrable systems: a survey
NASA Astrophysics Data System (ADS)
Nenciu, Irina
2006-07-01
We present a survey of recent results concerning a remarkable class of unitary matrices, the CMV matrices. We are particularly interested in the role they play in the theory of random matrices and integrable systems. Throughout the paper we also emphasize the analogies and connections to Jacobi matrices.
Lee, Jeffrey S; Cleaver, Gerald B
2017-10-01
In this note, the Cosmic Microwave Background (CMB) Radiation is shown to be capable of functioning as a Random Bit Generator, and constitutes an effectively infinite supply of truly random one-time pad values of arbitrary length. It is further argued that the CMB power spectrum potentially conforms to the FIPS 140-2 standard. Additionally, its applicability to the generation of an (n × n) random key matrix for a Vernam cipher is established.
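The Vernam-cipher application is standard one-time-pad XOR. A minimal sketch with an n × n key matrix of random bytes, using Python's secrets module purely as a stand-in for CMB-derived bits:

```python
import secrets

def random_key_matrix(n):
    """n x n matrix of random bytes; in the paper's proposal these bits
    would come from CMB measurements rather than secrets.token_bytes."""
    return [list(secrets.token_bytes(n)) for _ in range(n)]

def vernam(message, key_matrix):
    """XOR the message bytes against the flattened key (one-time pad).
    The same call decrypts, since XOR is an involution."""
    flat = [b for row in key_matrix for b in row]
    if len(message) > len(flat):
        raise ValueError("one-time pad must be at least as long as the message")
    return bytes(m ^ k for m, k in zip(message, flat))
```

As with any one-time pad, the key must never be reused; each message consumes a fresh key matrix.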
Applying the J-optimal channelized quadratic observer to SPECT myocardial perfusion defect detection
NASA Astrophysics Data System (ADS)
Kupinski, Meredith K.; Clarkson, Eric; Ghaly, Michael; Frey, Eric C.
2016-03-01
To evaluate performance on a perfusion defect detection task from 540 image pairs of myocardial perfusion SPECT image data we apply the J-optimal channelized quadratic observer (J-CQO). We compare AUC values of the linear Hotelling observer and J-CQO when the defect location is fixed and when it occurs in one of two locations. As expected, when the location is fixed a single channel maximizes AUC; location variability requires multiple channels to maximize the AUC. The AUC is estimated from both the projection data and reconstructed images. J-CQO is quadratic since it uses the first- and second-order statistics of the image data from both classes. The linear data reduction by the channels is described by an L x M channel matrix and in prior work we introduced an iterative gradient-based method for calculating the channel matrix. The dimensionality reduction from M measurements to L channels yields better estimates of these sample statistics from smaller sample sizes, and since the channelized covariance matrix is L x L instead of M x M, the matrix inverse is easier to compute. The novelty of our approach is the use of the Jeffreys divergence (J) as the figure of merit (FOM) for optimizing the channel matrix. We previously showed that the J-optimal channels are also the optimum channels for the AUC and the Bhattacharyya distance when the channel outputs are Gaussian distributed with equal means. This work evaluates the use of J as a surrogate FOM (SFOM) for AUC when these statistical conditions are not satisfied.
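The channelized linear observer described above can be sketched numerically. The following is a generic channelized Hotelling observer, not the authors' J-CQO; the image size, the box-shaped channel matrix, the defect profile, and all class statistics are made-up assumptions, chosen only to show the L x M data reduction and the cheap L x L covariance inverse.

```python
import numpy as np

rng = np.random.default_rng(1)
M, L, N = 64, 4, 500                        # pixels, channels, images per class

# Four non-overlapping "box" channels: orthonormal rows of the L x M matrix T.
T = np.zeros((L, M))
for c in range(L):
    T[c, 16 * c:16 * (c + 1)] = 0.25        # 16 pixels each, unit L2 norm

signal = np.zeros(M)
signal[28:36] = 1.0                         # hypothetical defect profile

g0 = rng.normal(size=(N, M))                # class 0: background only
g1 = rng.normal(size=(N, M)) + signal       # class 1: background + defect

v0, v1 = g0 @ T.T, g1 @ T.T                 # channelized data, N x L
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))     # pooled L x L covariance: cheap inverse
w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))   # Hotelling template

t0, t1 = v0 @ w, v1 @ w                     # observer test statistics
auc = (t1[:, None] > t0[None, :]).mean()    # empirical AUC (Mann-Whitney identity)
print(f"empirical AUC = {auc:.3f}")
```

With these channels the channelized signal-to-noise ratio is d'^2 = 2, so the AUC should land near 0.84.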
Parameters of Household Composition as Demographic Measures
ERIC Educational Resources Information Center
Akkerman, Abraham
2005-01-01
Cross-sectional data, such as Census statistics, enable the re-enactment of household lifecourse through the construction of the household composition matrix, a tabulation of persons in households by their age and by the age of their corresponding household-heads. Household lifecourse is represented in the household composition matrix somewhat…
Introducing Statistical Inference to Biology Students through Bootstrapping and Randomization
ERIC Educational Resources Information Center
Lock, Robin H.; Lock, Patti Frazer
2008-01-01
Bootstrap methods and randomization tests are increasingly being used as alternatives to standard statistical procedures in biology. They also serve as an effective introduction to the key ideas of statistical inference in introductory courses for biology students. We discuss the use of such simulation based procedures in an integrated curriculum…
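A minimal, self-contained example of the randomization (permutation) test mentioned above, with made-up measurements; it enumerates every relabeling of the pooled data exactly rather than sampling.

```python
from itertools import combinations

group_a = [10, 11, 12, 13]   # e.g. control measurements (made up)
group_b = [20, 21, 22, 23]   # e.g. treatment measurements (made up)

pooled = group_a + group_b
n = len(group_a)
observed = abs(sum(group_b) / n - sum(group_a) / n)

# Enumerate every way of splitting the pooled data into two groups of size n,
# and count how often the absolute mean difference is at least as extreme.
count = total = 0
for idx in combinations(range(len(pooled)), n):
    a = [pooled[i] for i in idx]
    b = [pooled[i] for i in range(len(pooled)) if i not in idx]
    total += 1
    if abs(sum(b) / n - sum(a) / n) >= observed:
        count += 1

p_value = count / total
print(f"exact p-value = {p_value:.4f}")   # 2/70: only the two extreme splits qualify
```

Because the groups are well separated, only the original split and its mirror reach the observed difference, giving p = 2/70.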
NASA Astrophysics Data System (ADS)
Langley, Robin S.
2018-03-01
This work is concerned with the statistical properties of the frequency response function of the energy of a random system. Earlier studies have considered the statistical distribution of the function at a single frequency, or alternatively the statistics of a band-average of the function. In contrast the present analysis considers the statistical fluctuations over a frequency band, and results are obtained for the mean rate at which the function crosses a specified level (or equivalently, the average number of times the level is crossed within the band). Results are also obtained for the probability of crossing a specified level at least once, the mean rate of occurrence of peaks, and the mean trough-to-peak height. The analysis is based on the assumption that the natural frequencies and mode shapes of the system have statistical properties that are governed by the Gaussian Orthogonal Ensemble (GOE), and the validity of this assumption is demonstrated by comparison with numerical simulations for a random plate. The work has application to the assessment of the performance of dynamic systems that are sensitive to random imperfections.
Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, David O.
2007-01-01
A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transient, and deterministic waveforms.
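The claim that intermediate coherence yields an intermediate trace can be checked numerically for a two-input, two-output system at a single frequency. The transfer matrix H and the input autospectra are arbitrary made-up values; the sketch sweeps the relative phase of fully coherent inputs and compares against incoherent inputs.

```python
import numpy as np

# Made-up 2x2 frequency response matrix and input autospectra.
H = np.array([[1.0 + 0.5j, 0.3 - 0.2j],
              [0.2 + 0.1j, 0.8 + 0.4j]])
s1, s2 = 2.0, 1.0

def out_trace(sxx):
    """Trace of the output autospectral density matrix H Sxx H^H."""
    return np.real(np.trace(H @ sxx @ H.conj().T))

# Fully coherent inputs: cross-spectrum of magnitude sqrt(s1*s2), phase swept.
phis = np.linspace(0.0, 2.0 * np.pi, 3601)
traces = []
for phi in phis:
    cross = np.sqrt(s1 * s2) * np.exp(1j * phi)
    sxx = np.array([[s1, cross], [np.conj(cross), s2]])
    traces.append(out_trace(sxx))
traces = np.array(traces)

incoherent = out_trace(np.diag([s1, s2]))   # zero cross-spectrum
print(traces.min(), incoherent, traces.max())
```

As the abstract states, the incoherent case falls between the phase-optimized extremes; in fact it sits exactly midway, since the phase only affects one cosine-like cross term.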
Money creation process in a random redistribution model
NASA Astrophysics Data System (ADS)
Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan
2014-01-01
In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
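A minimal random-exchange simulation with debt, in the spirit of the model described above; the population size, initial endowment, and debt limit are all assumptions, and the sketch only illustrates the bookkeeping (money in circulation vs. total debt), not the authors' transfer-matrix or diffusion analysis.

```python
import random

random.seed(7)
N, STEPS, DEBT_LIMIT = 100, 50_000, 5
balance = [10] * N                    # every agent starts with 10 money units

for _ in range(STEPS):
    payer, receiver = random.sample(range(N), 2)
    if balance[payer] > -DEBT_LIMIT:  # a payer may borrow down to the debt limit
        balance[payer] -= 1
        balance[receiver] += 1

money = sum(b for b in balance if b > 0)   # positive money in circulation
debt = -sum(b for b in balance if b < 0)   # total debt created
print(f"money = {money}, debt = {debt}, net = {sum(balance)}")
```

Every unit of debt taken on creates a matching unit of circulating money, so money minus debt always equals the conserved initial total.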
An Efficient Voting Algorithm for Finding Additive Biclusters with Random Background
Xiao, Jing; Wang, Lusheng; Liu, Xiaowen
2008-01-01
The biclustering problem has been extensively studied in many areas, including e-commerce, data mining, machine learning, pattern recognition, statistics, and, more recently, computational biology. Given an n × m matrix A (n ≥ m), the main goal of biclustering is to identify a subset of rows (called objects) and a subset of columns (called properties) such that some objective function that specifies the quality of the found bicluster (formed by the subsets of rows and of columns of A) is optimized. The problem has been proved or conjectured to be NP-hard for various objective functions. In this article, we study a probabilistic model for the implanted additive bicluster problem, where each element in the n × m background matrix is a random integer from [0, L − 1] for some integer L, and a k × k implanted additive bicluster is obtained from an error-free additive bicluster by randomly changing each element to a number in [0, L − 1] with probability θ. We propose an O(n²m) time algorithm based on voting to solve the problem. We show that when k ≥ Ω(√(n log n)), the voting algorithm can correctly find the implanted bicluster with probability at least 1 − 9/n². We also implement our algorithm as a C++ program named VOTE. 
The implementation incorporates several ideas for estimating the size of an implanted bicluster, adjusting the threshold in voting, dealing with small biclusters, and dealing with overlapping implanted biclusters. Our experimental results on both simulated and real datasets show that VOTE can find biclusters with a high accuracy and speed. PMID:19040364
Many-Body Quantum Chaos: Analytic Connection to Random Matrix Theory
NASA Astrophysics Data System (ADS)
Kos, Pavel; Ljubotina, Marko; Prosen, Tomaž
2018-04-01
A key goal of quantum chaos is to establish a relationship between widely observed universal spectral fluctuations of clean quantum systems and random matrix theory (RMT). Most prominent features of such RMT behavior with respect to a random spectrum, both encompassed in the spectral pair correlation function, are statistical suppression of small level spacings (correlation hole) and enhanced stiffness of the spectrum at large spectral ranges. For single-particle systems with fully chaotic classical counterparts, the problem has been partly solved by Berry [Proc. R. Soc. A 400, 229 (1985), 10.1098/rspa.1985.0078] within the so-called diagonal approximation of semiclassical periodic-orbit sums, while the derivation of the full RMT spectral form factor K(t) (Fourier transform of the spectral pair correlation function) from semiclassics has been completed by Müller et al. [Phys. Rev. Lett. 93, 014103 (2004), 10.1103/PhysRevLett.93.014103]. In recent years, the questions of long-time dynamics at high energies, for which the full many-body energy spectrum becomes relevant, are coming to the forefront even for simple many-body quantum systems, such as locally interacting spin chains. Such systems display two universal types of behaviour which are termed the "many-body localized phase" and "ergodic phase." In the ergodic phase, the spectral fluctuations are excellently described by RMT, even for very simple interactions and in the absence of any external source of disorder. Here we provide a clear theoretical explanation for these observations. We compute K(t) in the leading two orders in t and show its agreement with RMT for nonintegrable, time-reversal invariant many-body systems without classical counterparts, a generic example of which are Ising spin-1/2 models in a periodically kicking transverse field. 
In particular, we relate K(t) to partition functions of a class of twisted classical Ising models on a ring of size t; hence, the leading-order RMT behavior K(t) ≃ 2t is a consequence of translation and reflection symmetry of the Ising partition function.
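The spectral form factor K(t) = E|Tr U^t|² can be estimated numerically for Haar-random unitaries. The sketch below uses the CUE (broken time-reversal symmetry), for which RMT predicts K(t) = min(t, N); the time-reversal-invariant case discussed in the paper instead has the leading behavior K(t) ≈ 2t. The matrix size and sample count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    """Haar-distributed unitary: QR of a complex Ginibre matrix, phase-corrected."""
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))     # scale columns by the phases of diag(r)

N, t, samples = 20, 5, 4000
acc = 0.0
for _ in range(samples):
    phases = np.angle(np.linalg.eigvals(haar_unitary(N, rng)))
    acc += abs(np.exp(1j * t * phases).sum()) ** 2   # |Tr U^t|^2

print(f"K({t}) ~ {acc / samples:.2f}  (CUE prediction: {min(t, N)})")
```

The Monte Carlo average should be close to 5, the CUE value of min(t, N) for t = 5 and N = 20.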
Random density matrices versus random evolution of open system
NASA Astrophysics Data System (ADS)
Pineda, Carlos; Seligman, Thomas H.
2015-10-01
We present and compare two families of ensembles of random density matrices. The first, static ensemble, is obtained by foliating an unbiased ensemble of density matrices. As the criterion we use fixed purity, the simplest example of a useful convex function. The second, dynamic ensemble, is inspired by random matrix models for decoherence, where one evolves a separable pure state with a random Hamiltonian until a given value of purity in the central system is achieved. Several families of Hamiltonians, adequate for different physical situations, are studied. We focus on a two-qubit central system, and obtain exact expressions for the static case. The ensemble displays a peak around Werner-like states, modulated by nodes on the degeneracies of the density matrices. For moderate and strong interactions, good agreement between the static and the dynamic ensembles is found. Even in a model where one qubit does not interact with the environment, excellent agreement is found, but only if there is maximal entanglement with the interacting one. The discussion begins by recalling similar considerations from scattering theory. At the end, we comment on the scope of the results for other convex functions of the density matrix, and exemplify the situation with the von Neumann entropy.
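Sampling an unbiased ensemble of random density matrices can be sketched with the standard Hilbert-Schmidt construction ρ = GG†/Tr(GG†), G a complex Ginibre matrix; this is a common construction used here for illustration, and fixing the purity Tr ρ², as the paper does, would amount to foliating this ensemble.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4                                        # e.g. a two-qubit central system

def random_density_matrix(d, rng):
    """Hilbert-Schmidt-distributed density matrix from a complex Ginibre matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T                       # positive semidefinite
    return m / np.trace(m).real              # unit trace

rhos = [random_density_matrix(d, rng) for _ in range(1000)]
purities = [np.trace(r @ r).real for r in rhos]
print(f"mean purity = {np.mean(purities):.3f}")
```

Every sample has unit trace and purity between 1/d and 1; for this ensemble the mean purity is 2d/(d² + 1), about 0.47 for d = 4.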
Free Vibration of Uncertain Unsymmetrically Laminated Beams
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Goyal, Vijay K.
2001-01-01
Monte Carlo Simulation and Stochastic FEA are used to predict randomness in the free vibration response of thin unsymmetrically laminated beams. For the present study, it is assumed that randomness in the response is caused only by uncertainties in the ply orientations. The ply orientations may become random or uncertain during the manufacturing process. A new 16-dof beam element, based on the first-order shear deformation beam theory, is used to study the stochastic nature of the natural frequencies. Using variational principles, the element stiffness matrix and mass matrix are obtained through analytical integration. Using a random sequence, a large data set is generated containing possible random ply-orientations. These data are assumed to be symmetric. The stochastic-based finite element model for free vibrations predicts the relation between the randomness in fundamental natural frequencies and the randomness in ply-orientation. The sensitivity derivatives are calculated numerically through an exact formulation. The squared fundamental natural frequencies are expressed in terms of deterministic and probabilistic quantities, making it possible to determine how sensitive they are to variations in ply angles. The predicted mean-valued fundamental natural frequency squared and the variance of the present model are in good agreement with Monte Carlo Simulation. Results also show that variations of plus or minus 5 degrees in ply angles can affect the free vibration response of unsymmetrically and symmetrically laminated beams.
Nature of Driving Force for Protein Folding: A Result From Analyzing the Statistical Potential
NASA Astrophysics Data System (ADS)
Li, Hao; Tang, Chao; Wingreen, Ned S.
1997-07-01
In a statistical approach to protein structure analysis, Miyazawa and Jernigan derived a 20×20 matrix of inter-residue contact energies between different types of amino acids. Using the method of eigenvalue decomposition, we find that the Miyazawa-Jernigan matrix can be accurately reconstructed from its first two principal component vectors as Mij = C0 + C1(qi + qj) + C2 qi qj, with constant C's, and 20 q values associated with the 20 amino acids. This regularity is due to hydrophobic interactions and a force of demixing, the latter obeying Hildebrand's solubility theory of simple liquids.
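The stated decomposition means the matrix lies in the span of the all-ones vector and q, so it has rank at most 2 and is captured exactly by its two leading eigencomponents. A toy numpy check (synthetic q values and constants, not the actual Miyazawa-Jernigan data):

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.normal(size=20)                  # made-up q values for 20 "residues"
C0, C1, C2 = -2.0, 0.5, 1.3              # made-up constants

# M_ij = C0 + C1*(q_i + q_j) + C2*q_i*q_j, symmetric, rank <= 2.
M = C0 + C1 * (q[:, None] + q[None, :]) + C2 * np.outer(q, q)

vals, vecs = np.linalg.eigh(M)
top2 = np.argsort(-np.abs(vals))[:2]     # two dominant eigencomponents
M2 = (vecs[:, top2] * vals[top2]) @ vecs[:, top2].T

print(np.max(np.abs(M - M2)))            # essentially zero: two components suffice
```

This mirrors the paper's observation that two principal component vectors reconstruct the contact-energy matrix.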
Inferring monopartite projections of bipartite networks: an entropy-based approach
NASA Astrophysics Data System (ADS)
Saracco, Fabio; Straka, Mika J.; Di Clemente, Riccardo; Gabrielli, Andrea; Caldarelli, Guido; Squartini, Tiziano
2017-05-01
Bipartite networks are currently regarded as providing a major insight into the organization of many real-world systems, unveiling the mechanisms driving the interactions occurring between distinct groups of nodes. One of the most important issues encountered when modeling bipartite networks is devising a way to obtain a (monopartite) projection on the layer of interest which preserves as much as possible the information encoded into the original bipartite structure. In the present paper we propose an algorithm to obtain statistically validated projections of bipartite networks, according to which any two nodes sharing a statistically significant number of neighbors are linked. Since assessing the statistical significance of node similarity requires a proper statistical benchmark, here we consider a set of four null models, defined within the exponential random graph framework. Our algorithm outputs a matrix of link-specific p-values, from which a validated projection is straightforwardly obtainable upon running a multiple hypothesis testing procedure. Finally, we test our method on an economic network (i.e. the countries-products World Trade Web representation) and a social network (i.e. MovieLens, collecting users' ratings of a list of movies). In both cases non-trivial communities are detected: while projecting the World Trade Web on the countries layer reveals modules of similarly industrialized nations, projecting it on the products layer allows communities characterized by an increasing level of complexity to be detected; in the second case, projecting MovieLens on the films layer allows clusters of movies whose affinity cannot be fully accounted for by genre similarity to be identified.
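The validated-projection idea can be sketched on a tiny biadjacency matrix: link two row-nodes when their number of shared column-neighbours is surprisingly large under a null model. For simplicity the null used below is hypergeometric (fixed degrees, random overlap), a stand-in for the entropy-based exponential random graph null models used in the paper; the matrix itself is made up.

```python
from math import comb

# Made-up biadjacency matrix: 4 row-nodes, 8 column-nodes.
B = [[1, 1, 1, 1, 1, 0, 0, 0],
     [1, 1, 1, 1, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 1, 1, 1],
     [1, 0, 0, 0, 0, 1, 1, 0]]
n_rows, n_cols = len(B), len(B[0])

def overlap_pvalue(shared, n, ki, kj):
    """P(overlap >= shared) when kj neighbours are drawn uniformly out of n,
    of which ki belong to the other node (hypergeometric upper tail)."""
    return sum(comb(ki, s) * comb(n - ki, kj - s)
               for s in range(shared, min(ki, kj) + 1)) / comb(n, kj)

pvals = {}
for i in range(n_rows):
    for j in range(i + 1, n_rows):
        shared = sum(a & b for a, b in zip(B[i], B[j]))
        pvals[(i, j)] = overlap_pvalue(shared, n_cols, sum(B[i]), sum(B[j]))

print(pvals)   # the most similar pair, (0, 1), gets the smallest p-value
```

A real application would then threshold this p-value matrix with a multiple hypothesis testing procedure (e.g. FDR control) to obtain the validated projection.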
SparRec: An effective matrix completion framework of missing data imputation for GWAS
NASA Astrophysics Data System (ADS)
Jiang, Bo; Ma, Shiqian; Causey, Jason; Qiao, Linbo; Hardin, Matthew Price; Bitts, Ian; Johnson, Daniel; Zhang, Shuzhong; Huang, Xiuzhen
2016-10-01
Genome-wide association studies present computational challenges for missing data imputation, while advances in genotyping technologies are generating datasets of large sample sizes, with sample sets genotyped on multiple SNP chips. We present a new framework, SparRec (Sparse Recovery), for imputation, with the following properties: (1) The optimization models of SparRec, based on low rank and a low number of co-clusters of matrices, are different from those of current statistical methods. While our low-rank matrix completion (LRMC) model is similar to Mendel-Impute, our matrix co-clustering factorization (MCCF) model is completely new. (2) SparRec, like other matrix completion methods, is flexible enough to be applied to missing data imputation for large meta-analyses with different cohorts genotyped on different sets of SNPs, even when there is no reference panel. This kind of meta-analysis is very challenging for current statistics-based methods. (3) SparRec has consistent performance and achieves high recovery accuracy even when the missing data rate is as high as 90%. Compared with Mendel-Impute, our low-rank based method achieves similar accuracy and efficiency, while the co-clustering based method has advantages in running time. The testing results show that SparRec has significant advantages and competitive performance over other state-of-the-art statistical methods, including Beagle and fastPhase.
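The low-rank matrix completion idea behind models like LRMC can be sketched with a generic alternating-projection scheme (truncated SVD, then refitting the observed entries). This is an illustration of LRMC in general, not the SparRec algorithm, whose optimization models differ; the matrix size, rank, and sampling rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 20, 15, 2
truth = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # rank-r ground truth
mask = rng.random((m, n)) < 0.6                             # ~60% entries observed

X = np.where(mask, truth, 0.0)                 # missing entries start at zero
for _ in range(300):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]            # project onto the rank-r set
    X[mask] = truth[mask]                      # restore the known entries

err = np.linalg.norm((X - truth)[~mask]) / np.linalg.norm(truth[~mask])
print(f"relative error on missing entries = {err:.2e}")
```

For a random incoherent low-rank matrix with this much sampling, the iteration typically recovers the missing entries to high accuracy.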
A Random Algorithm for Low-Rank Decomposition of Large-Scale Matrices With Missing Entries.
Liu, Yiguang; Lei, Yinjie; Li, Chunguang; Xu, Wenzheng; Pu, Yifei
2015-11-01
A random submatrix method (RSM) is proposed to calculate the low-rank decomposition Y ≈ UV^T, with U an m×r and V an n×r matrix (r < m, n), of the matrix Y ∈ R^{m×n} (assuming m > n generally) with known entry percentage 0 < ρ ≤ 1. RSM is very fast, as only O(mr^2 ρ^r) or O(n^3 ρ^{3r}) floating-point operations (flops) are required, which compares favorably with the O(mnr + r^2(m+n)) flops required by state-of-the-art algorithms. Meanwhile, RSM has the advantage of a small memory requirement, as only max(n^2, mr + nr) real values need to be saved. Under the assumption that known entries are uniformly distributed in Y, submatrices formed by known entries are randomly selected from Y with statistical size k × nρ^k or mρ^l × l, where k or l usually takes the value r + 1. We propose and prove a theorem: under random noise, the probability that the subspace associated with a smaller singular value will turn into the space associated with any one of the r largest singular values is smaller. Based on the theorem, the nρ^k − k null vectors or the l − r right singular vectors associated with the minor singular values are calculated for each submatrix. These vectors ought to be the null vectors of the submatrix formed by the chosen nρ^k or l columns of the ground truth of V^T. If enough submatrices are randomly chosen, V and U can be estimated accordingly. The experimental results on random synthetic matrices with sizes such as 131072 × 1024 and on real data sets such as dinosaur indicate that RSM is 4.30 to 197.95 times faster than the state-of-the-art algorithms, while attaining precision that achieves or approximates the best.
Quantum-inspired algorithm for estimating the permanent of positive semidefinite matrices
NASA Astrophysics Data System (ADS)
Chakhmakhchyan, L.; Cerf, N. J.; Garcia-Patron, R.
2017-08-01
We construct a quantum-inspired classical algorithm for computing the permanent of Hermitian positive semidefinite matrices by exploiting a connection between these mathematical structures and the boson sampling model. Specifically, the permanent of a Hermitian positive semidefinite matrix can be expressed in terms of the expected value of a random variable, which stands for a specific photon-counting probability when measuring a linear-optically evolved random multimode coherent state. Our algorithm then approximates the matrix permanent from the corresponding sample mean and is shown to run in polynomial time for various sets of Hermitian positive semidefinite matrices, achieving a precision that improves over known techniques. This work illustrates how quantum optics may benefit algorithm development.
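The "permanent as an expectation" idea can be illustrated classically: for a PSD matrix A = LL†, Wick's theorem gives perm(A) = E[∏_i |y_i|²] with y = Lx and x a standard complex Gaussian vector. This is a classical Gaussian-moment identity in the same spirit as the photon-counting expectation used in the paper, not the authors' algorithm; the test matrix and sample count are made up.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(5)

def permanent(a):
    """Exact permanent by brute force (O(n! n)), for checking small matrices."""
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

A = np.array([[2.0, 0.5, 0.2],
              [0.5, 1.0, 0.1],
              [0.2, 0.1, 1.5]])        # made-up Hermitian PSD matrix
L = np.linalg.cholesky(A)

n, N = A.shape[0], 200_000
x = (rng.normal(size=(N, n)) + 1j * rng.normal(size=(N, n))) / np.sqrt(2)
y = x @ L.T                            # rows are samples of y ~ CN(0, A)
est = np.mean(np.prod(np.abs(y) ** 2, axis=1))

print(f"Monte Carlo estimate = {est:.3f},  exact permanent = {permanent(A):.3f}")
```

The estimator is unbiased; its variance grows quickly with matrix size, which is why the paper's analysis of when polynomial-time precision is achievable is the interesting part.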
Li, Jia; Wu, Pinghui; Chang, Liping
2015-08-24
Within the accuracy of the first-order Born approximation, sufficient conditions are derived for the invariance of the spectrum of an electromagnetic wave generated by the scattering of an electromagnetic plane wave from an anisotropic random medium. We show that the following restrictions on the properties of the incident field and the anisotropic medium must be simultaneously satisfied: 1) the elements of the dielectric susceptibility matrix of the medium must obey the scaling law; 2) the spectral components of the incident field must be proportional to each other; 3) the second moments of the elements of the dielectric susceptibility matrix of the medium must be inversely proportional to the frequency.
Effect of Computer-Presented Organizational/Memory Aids on Problem Solving Behavior.
ERIC Educational Resources Information Center
Steinberg, Esther R.; And Others
This research studied the effects of computer-presented organizational/memory aids on problem solving behavior. The aids were either matrix or verbal charts shown on the display screen next to the problem. The 104 college student subjects were randomly assigned to one of the four conditions: type of chart (matrix or verbal chart) and use of charts…
NASA Astrophysics Data System (ADS)
Latré, S.; Desplentere, F.; De Pooter, S.; Seveno, D.
2017-10-01
Nanoscale materials showing superior thermal properties have raised the interest of the building industry. By adding these materials to conventional construction materials, it is possible to decrease the total thermal conductivity by almost one order of magnitude. This conductivity is mainly influenced by the dispersion quality within the matrix material. At the industrial scale, the main challenge is to control this dispersion to reduce or even eliminate thermal bridges, making it possible to reach an industrially relevant process that balances the high material cost against the superior thermal insulation properties. Therefore, a methodology is required to measure and describe these nanoscale distributions within the inorganic matrix material. These distributions are either random or normally distributed through the thickness of the matrix material. We show that the influence of these distributions is meaningful and modifies the thermal conductivity of the building material. Hence, this strategy will generate a thermal model allowing prediction of the thermal behavior of the nanoscale particles and their distributions. This thermal model will be validated by the hot wire technique. For the moment, a good correlation is found between the numerical results and experimental data for a randomly distributed form of nanoparticles in all directions.
Bilici, Suat; Yiğit, Özgür; Dönmez, Zehra; Huq, Gülben Erdem; Aktaş, Şamil
2015-04-01
The aim of the study is to investigate the histopathologic and cartilage mass changes in hyperbaric oxygen (HBO)-treated auricular cartilage grafts, either crushed or fascia wrapped, in a rabbit model. This is a prospective, controlled experimental study. Sixteen rabbits were randomly allocated into control (n = 8) and treatment groups (n = 8). Each group was further subdivided into crushed cartilage (n = 4) and fascia-wrapped crushed cartilage (n = 4). The eight rabbits in the treatment group had HBO once daily for 10 days, a total of 10 sessions. The mass of cartilage, cartilage edge layout, structural layout, staining disorders of the chondroid matrix, necrosis, calcification, bone metaplasia, chronic inflammation in the surrounding tissues, fibrosis, and increased vascularity were evaluated in the hematoxylin and eosin (H&E)-stained sections. Fibrosis in the surrounding tissue and cartilage matrix was evaluated with Masson's trichrome stain. Toluidine blue staining was used to evaluate loss of metachromasia in the matrix. The prevalence of glial fibrillary acidic protein (GFAP) staining in chondrocytes was also evaluated. The remaining amount of cartilage mass after implantation did not show a significant difference between the control and study groups (p = 0.322). The difference between the control and study groups in terms of positive staining with GFAP was statistically significant (p = 0.01, p < 0.05). Necrosis and loss of matrix metachromasia were significantly lower in the study group compared with the control group (p = 0.001, p = 0.006, p < 0.05). HBO therapy did not have a significant effect on the mass of the rabbit auricular cartilage grafts. HBO therapy significantly reduced loss of metachromasia, necrosis, and GFAP staining in the auricular cartilage grafts of the animal model.
Qi, Chang; Changlin, Huang
2007-07-01
To examine the association between levels of cartilage oligomeric matrix protein (COMP), matrix metalloproteinase-1 (MMP-1), matrix metalloproteinase-3 (MMP-3), and tissue inhibitor of matrix metalloproteinases-1 (TIMP-1) in serum and synovial fluid and MR imaging of cartilage degeneration in the knee joint, and to understand the effects of movement training of different intensities on the cartilage of the knee joint. Twenty adult canines were randomly divided into three groups (8 in the light training group, 8 in the intensive training group, and 4 in the control group), and the canines of the two training groups were trained daily at different intensities. The training lasted 10 weeks in all. Magnetic resonance imaging (MRI) examinations were performed regularly (at 2, 4, 6, 8, and 10 weeks) to investigate the changes of articular cartilage in the canine knee, while concentrations of COMP, MMP-1, MMP-3, and TIMP-1 in serum and synovial fluid were measured by ELISA assays. Compared with the control group, imaging changes of cartilage degeneration were found in both training groups by MRI examination during the training period. However, there was no significant difference between the two training groups. Elevations of the levels of COMP, MMP-1, MMP-3, TIMP-1, and MMP-3/TIMP-1 were seen in serum and synovial fluid after training, and their levels had an obvious association with the knee MRI grades of cartilage lesions. Furthermore, there were statistically significant associations between biomarker levels in serum and in synovial fluid. Long-term, high-intensity movement training induces cartilage degeneration in the knee joint. Within the intensity range applied in this study, knee cartilage degeneration caused by light or intensive training shows no difference in MR imaging, but a comparatively obvious difference in biomarker levels. 
To detect articular cartilage degeneration at an early stage and monitor the pathological process, the combined application of several biomarkers has very good practical value and can be used as a helpful supplement to MRI.
Zafiropoulos, Gregor-Georg; John, Gordon
2017-05-01
The aim of this study was to determine the treatment outcome of the use of a porcine monolayer collagen matrix (mCM) to augment peri-implant soft tissue in conjunction with immediate implant placement, as an alternative to the patient's own connective tissue. A total of 27 implants were placed immediately in 27 patients (14 males and 13 females, with a mean age of 52.2 years) with simultaneous augmentation of the soft tissue by the use of an mCM. The patients were randomly divided into two groups: group I, in which an envelope flap was created and the mCM was left coronally uncovered, and group II, in which a coronally repositioned flap was created and the mCM was covered by the mucosa. Soft-tissue thickness (STTh) was measured at the time of surgery (T0) and 6 months postoperatively (T1) using a customized stent. Cone beam computed tomography (CBCT) scans were taken from 12 representative cases at T1. A stringent plaque control regimen was enforced in all patients during the 6-month observation period. The mean STTh change was similar in both groups (0.7 ± 0.2 and 0.7 ± 0.1 mm in groups I and II, respectively). The comparison of STTh between T0 and T1 showed a statistically significant increase of soft tissue in both groups I and II, as well as in the total examined population (p < 0.001). The STTh change as well as the matrix thickness loss were comparable in both groups (p > 0.05). The evaluation of the CBCT scans did not show any signs of resorption of the buccal bone plate. Within the limitations of this study, it can be concluded that the collagen matrix used in conjunction with immediate implant placement leads to an increased thickness of peri-implant soft tissue independent of the flap creation technique, and could be a good alternative to a connective tissue graft for soft tissue augmentation around dental implants.
Comprehensive T-Matrix Reference Database: A 2007-2009 Update
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Zakharova, Nadia T.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas
2010-01-01
The T-matrix method is among the most versatile, efficient, and widely used theoretical techniques for the numerically exact computation of electromagnetic scattering by homogeneous and composite particles, clusters of particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of T-matrix publications compiled by us previously and includes the publications that appeared since 2007. It also lists several earlier publications not included in the original database.
Least-squares analysis of the Mueller matrix.
Reimer, Michael; Yevick, David
2006-08-15
In a single-mode fiber excited by light with a fixed polarization state, the output polarizations obtained at two different optical frequencies are related by a Mueller matrix. We examine least-squares procedures for estimating this matrix from repeated measurements of the output Stokes vector for a random set of input polarization states. We then apply these methods to the determination of polarization mode dispersion and polarization-dependent loss in an optical fiber. We find that a relatively simple formalism leads to results that are comparable with those of far more involved techniques.
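The least-squares estimation step can be sketched numerically. Mueller matrices are 4 × 4; the sketch below uses a 3 × 3 rotation of the Stokes vector on the Poincaré sphere as a simplified stand-in, and the "true" matrix, the number of random input states, and the noise level are all made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

theta = 0.3                                  # hypothetical rotation angle
R_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(theta), -np.sin(theta)],
                   [0.0, np.sin(theta),  np.cos(theta)]])

K = 50
s_in = rng.normal(size=(3, K))
s_in /= np.linalg.norm(s_in, axis=0)         # random unit Stokes vectors
s_out = R_true @ s_in + 0.01 * rng.normal(size=(3, K))   # noisy measurements

# Least squares: R_hat = argmin_R ||R S_in - S_out||_F^2 = S_out S_in^+.
R_hat = s_out @ np.linalg.pinv(s_in)
print(np.round(R_hat, 3))
```

Repeated measurements over random input states average the noise down, which is the point of the least-squares formalism described in the abstract.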
A random-sum Wilcoxon statistic and its application to analysis of ROC and LROC data.
Tang, Liansheng Larry; Balakrishnan, N
2011-01-01
The Wilcoxon-Mann-Whitney statistic is commonly used for a distribution-free comparison of two groups. One requirement for its use is that the sample sizes of the two groups are fixed. This is violated in some applications, such as medical imaging studies and diagnostic marker studies; in the former, the violation occurs because the number of correctly localized abnormal images is random, while in the latter it is due to some subjects not having observable measurements. For this reason, we propose here a random-sum Wilcoxon statistic for comparing two groups in the presence of ties, and derive its variance as well as its asymptotic distribution for large sample sizes. The proposed statistic includes the regular Wilcoxon rank-sum statistic as a special case. Finally, we apply the proposed statistic to summarizing location response operating characteristic data from a liver computed tomography study, and also to summarizing the diagnostic accuracy of biomarker data.
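The fixed-sample building block that the random-sum statistic generalizes is the Mann-Whitney form of the Wilcoxon statistic, with half-credit for ties; a minimal sketch with made-up data:

```python
def wilcoxon_u(x, y):
    """U = #{(x_i, y_j): x_i < y_j} + 0.5 * #{ties}, by pairwise comparison."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi < yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

x, y = [1, 2, 3], [2, 3, 4]
print(wilcoxon_u(x, y))   # 7.0
```

The two-sided identity U(x, y) + U(y, x) = len(x) * len(y) holds with this tie handling, which is what makes the midrank formulation consistent.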
Biclustering of gene expression data using reactive greedy randomized adaptive search procedure.
Dharan, Smitha; Nair, Achuthsankar S
2009-01-30
Biclustering algorithms belong to a distinct class of clustering algorithms that perform simultaneous clustering of both rows and columns of the gene expression matrix; they can be a very useful analysis tool when some genes have multiple functions and experimental conditions are diverse. Cheng and Church introduced a measure called the mean squared residue score to evaluate the quality of a bicluster, and it has become one of the most popular measures used to search for biclusters. In this paper, we review the basic concepts of the metaheuristic Greedy Randomized Adaptive Search Procedure (GRASP), namely its construction and local search phases, and propose a new method, a variant of GRASP called Reactive Greedy Randomized Adaptive Search Procedure (Reactive GRASP), to detect significant biclusters from large microarray datasets. The method has two major steps. First, high quality bicluster seeds are generated by means of k-means clustering. In the second step, these seeds are grown using Reactive GRASP, in which the basic parameter that defines the restrictiveness of the candidate list is self-adjusted, depending on the quality of the solutions found previously. We performed statistical and biological validations of the biclusters obtained and evaluated the method against basic GRASP as well as against the classic approach of Cheng and Church. The experimental results indicate that the Reactive GRASP approach outperforms both the basic GRASP algorithm and the Cheng and Church approach. The Reactive GRASP approach for the detection of significant biclusters is robust and does not require calibration efforts.
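The Cheng and Church mean squared residue score mentioned above can be sketched as follows (a generic implementation of the published formula; variable names are ours):

```python
def mean_squared_residue(a, rows, cols):
    """Cheng-Church score: mean of (a_ij - a_iJ - a_Ij + a_IJ)^2 over the
    bicluster, where a_iJ/a_Ij/a_IJ are row, column, and overall means.
    A score of 0 means a perfectly additive (coherent) bicluster."""
    nI, nJ = len(rows), len(cols)
    aIJ = sum(a[i][j] for i in rows for j in cols) / (nI * nJ)
    aiJ = {i: sum(a[i][j] for j in cols) / nJ for i in rows}
    aIj = {j: sum(a[i][j] for i in rows) / nI for j in cols}
    return sum((a[i][j] - aiJ[i] - aIj[j] + aIJ) ** 2
               for i in rows for j in cols) / (nI * nJ)

perfect = mean_squared_residue([[1, 2], [3, 4]], [0, 1], [0, 1])  # additive
noisy = mean_squared_residue([[1, 2], [2, 1]], [0, 1], [0, 1])    # anti-pattern
```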
ERIC Educational Resources Information Center
Cheung, Mike W.-L.; Cheung, Shu Fai
2016-01-01
Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…
Significance Testing in Confirmatory Factor Analytic Models.
ERIC Educational Resources Information Center
Khattab, Ali-Maher; Hocevar, Dennis
Traditionally, confirmatory factor analytic models are tested against a null model of total independence. Using randomly generated factors in a matrix of 46 aptitude tests, this approach is shown to be unlikely to reject even random factors. An alternative null model, based on a single general factor, is suggested. In addition, an index of model…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian; Maier, Joscha; Sawall, Stefan
2016-07-15
Purpose: To introduce and evaluate an increment matrix approach (IMA) describing the signal statistics of energy-selective photon counting detectors including spatial–spectral correlations between energy bins of neighboring detector pixels. The importance of the occurring correlations for image-based material decomposition is studied. Methods: An IMA describing the counter increase patterns in a photon counting detector is proposed. This IMA has the potential to decrease the number of required random numbers compared to Monte Carlo simulations by pursuing an approach based on convolutions. To validate and demonstrate the IMA, an approximate semirealistic detector model is provided, simulating a photon counting detector in a simplified manner, e.g., by neglecting count rate-dependent effects. In this way, the spatial–spectral correlations on the detector level are obtained and fed into the IMA. The importance of these correlations in reconstructed energy bin images and the corresponding detector performance in image-based material decomposition is evaluated using a statistically optimal decomposition algorithm. Results: The results of the IMA together with the semirealistic detector model were compared to other models and measurements using the spectral response and the energy bin sensitivity, finding a good agreement. Correlations between the different reconstructed energy bin images could be observed, and turned out to be of weak nature. These correlations were found to be not relevant in image-based material decomposition. An even simpler simulation procedure based on the energy bin sensitivity was tested instead and yielded similar results for the image-based material decomposition task, as long as the fact that one incident photon can increase multiple counters across neighboring detector pixels is taken into account.
Conclusions: The IMA is computationally efficient as it required about 10^2 random numbers per ray incident on a detector pixel instead of the estimated 10^8 random numbers per ray that Monte Carlo approaches would need. The spatial–spectral correlations as described by the IMA are not important for the studied image-based material decomposition task. Respecting the absolute photon counts, and thus the multiple counter increases by a single x-ray photon, the same material decomposition performance could be obtained with a simpler detector description using the energy bin sensitivity.
Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen
2013-01-01
In compressed sensing, one takes n < N samples of an N-dimensional vector x_0 using an n × N matrix A, obtaining undersampled measurements y = Ax_0. For random matrices with independent standard Gaussian entries, it is known that, when x_0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram, with δ = n/N and ρ = k/n, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set X, for four different sets X, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction, such as random equivalent sampling (RES), to improve efficiency. However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, resulting in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using the Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of this proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.
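A minimal sketch of a Whittaker-Shannon interpolation block of the kind described above, assuming a dense reconstruction grid with spacing T (the grid points and acquisition instants here are illustrative, not the prototype's parameters):

```python
import math

def sinc(x):
    """Normalized sinc kernel of the Whittaker-Shannon formula."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def block_measurement_matrix(sample_times, grid_times, T):
    """Row m maps the dense signal grid onto physical sample m:
    Phi[m][n] = sinc((t_m - g_n) / T)."""
    return [[sinc((t - g) / T) for g in grid_times] for t in sample_times]

# Illustrative: a 4-point grid with spacing T = 1 and two acquisition instants.
# A sample taken exactly on a grid point yields a (numerically) unit row.
Phi = block_measurement_matrix([1.0, 0.5], [0.0, 1.0, 2.0, 3.0], 1.0)
```

Stacking one such block per acquisition run gives the combined equivalent measurement matrix used for reconstruction.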
Fatigue loading history reconstruction based on the rain-flow technique
NASA Technical Reports Server (NTRS)
Khosrovaneh, A. K.; Dowling, N. E.
1989-01-01
Methods are considered for reducing a non-random fatigue loading history to a concise description and then for reconstructing a time history similar to the original. In particular, three methods of reconstruction based on a rain-flow cycle counting matrix are presented. A rain-flow matrix consists of the numbers of cycles at various peak and valley combinations. Two methods are based on a two-dimensional rain-flow matrix, and the third on a three-dimensional rain-flow matrix. Histories reconstructed by any of these methods produce a rain-flow matrix identical to that of the original history; as a result, the reconstructed time history is expected to produce a fatigue life similar to that of the original. The procedures described allow lengthy loading histories to be stored in compact form.
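A two-dimensional rain-flow matrix as described above is simply a 2D histogram of counted cycles over valley/peak amplitude bins; a minimal sketch (bin edges and cycles are hypothetical):

```python
def rainflow_matrix(cycles, edges):
    """2D rain-flow matrix: entry [i][j] is the number of counted cycles whose
    valley falls in amplitude bin i and whose peak falls in bin j."""
    n = len(edges) - 1

    def bin_of(v):
        for b in range(n):
            if edges[b] <= v < edges[b + 1]:
                return b
        return n - 1  # value at the top edge goes into the last bin

    M = [[0] * n for _ in range(n)]
    for valley, peak in cycles:
        M[bin_of(valley)][bin_of(peak)] += 1
    return M

# Hypothetical counted cycles (valley, peak) and bin edges:
M = rainflow_matrix([(-1.0, 2.0), (0.5, 1.5)], [-2.0, 0.0, 1.0, 2.0])
```

Reconstruction then amounts to emitting a peak-valley sequence whose rain-flow count reproduces exactly this matrix.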
An Analysis of Variance Framework for Matrix Sampling.
ERIC Educational Resources Information Center
Sirotnik, Kenneth
Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…
Random matrix approach to group correlations in development country financial market
NASA Astrophysics Data System (ADS)
Qohar, Ulin Nuha Abdul; Lim, Kyuseong; Kim, Soo Yong; Liong, The Houw; Purqon, Acep
2015-12-01
Financial markets are a borderless economic activity; everyone in the world has the right to participate in stock transactions. The movement of stocks is of interest in various sciences: researchers ranging from economists to mathematicians try to explain and predict stock movement. Econophysics is a discipline that studies economic behavior using methods from physics, treating stocks, which tend to be unpredictable, as probabilistic particles. Random Matrix Theory, one method used to analyze such probabilistic systems, is applied here to the correlation matrix of stocks in developing-country markets in order to characterize their collective movement, using the stock markets of developed countries as a benchmark for comparison. The results show that the market-wide effect is absent in the Philippine market and weak in the Indonesian market; in contrast, a developed country (US) shows a strong market-wide effect.
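The market-wide test used in random-matrix analyses of this kind compares the largest eigenvalue of the empirical correlation matrix with the Marchenko-Pastur upper bound for uncorrelated series; a self-contained sketch on synthetic returns (not the paper's market data):

```python
import math, random

def mp_upper_bound(T, N):
    """Marchenko-Pastur upper edge (1 + sqrt(N/T))^2 for eigenvalues of the
    correlation matrix of N uncorrelated series of length T."""
    return (1 + math.sqrt(N / T)) ** 2

def correlation_matrix(series):
    """Empirical correlation matrix of a list of return series."""
    T = len(series[0])
    z = []
    for s in series:
        m = sum(s) / T
        sd = math.sqrt(sum((x - m) ** 2 for x in s) / T)
        z.append([(x - m) / sd for x in s])
    N = len(series)
    return [[sum(z[i][t] * z[j][t] for t in range(T)) / T for j in range(N)]
            for i in range(N)]

def largest_eigenvalue(C, iters=300):
    """Power iteration; sufficient here because C is symmetric PSD."""
    n = len(C)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

# Hypothetical returns: a common "market mode" plus idiosyncratic noise.
random.seed(2)
T, N = 500, 5
market = [random.gauss(0, 1) for _ in range(T)]
series = [[0.6 * market[t] + 0.8 * random.gauss(0, 1) for t in range(T)]
          for _ in range(N)]
C = correlation_matrix(series)
lam_max = largest_eigenvalue(C)
# A market-wide effect shows up as lam_max exceeding the random bound.
```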
Direct Demonstration of the Concept of Unrestricted Effective-Medium Approximation
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Zhanna M.; Zakharova, Nadezhda T.
2014-01-01
The modified unrestricted effective-medium refractive index is defined as one that yields accurate values of a representative set of far-field scattering characteristics (including the scattering matrix) for an object made of randomly heterogeneous materials. We validate the concept of the modified unrestricted effective-medium refractive index by comparing numerically exact superposition T-matrix results for a spherical host randomly filled with a large number of identical small inclusions and Lorenz-Mie results for a homogeneous spherical counterpart. A remarkable quantitative agreement between the superposition T-matrix and Lorenz-Mie scattering matrices over the entire range of scattering angles demonstrates unequivocally that the modified unrestricted effective-medium refractive index is a sound (albeit still phenomenological) concept provided that the size parameter of the inclusions is sufficiently small and their number is sufficiently large. Furthermore, it appears that in cases when the concept of the modified unrestricted effective-medium refractive index works, its actual value is close to that predicted by the Maxwell-Garnett mixing rule.
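The Maxwell-Garnett mixing rule cited above has a simple closed form for small spherical inclusions; a minimal sketch (the permittivity values are illustrative, not the paper's):

```python
def maxwell_garnett(eps_host, eps_incl, f):
    """Maxwell-Garnett effective permittivity for a volume fraction f of
    small spherical inclusions in a host medium (works for complex eps too)."""
    d = eps_incl - eps_host
    return eps_host * (eps_incl + 2 * eps_host + 2 * f * d) \
                    / (eps_incl + 2 * eps_host - f * d)

# Illustrative: water-like host, silica-like inclusions at 30% volume fraction.
eps_eff = maxwell_garnett(1.77, 2.25, 0.3)
```

The effective refractive index follows as the (complex) square root of the effective permittivity.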
Coherent Patterns in Nuclei and in Financial Markets
NASA Astrophysics Data System (ADS)
DroŻdŻ, S.; Kwapień, J.; Speth, J.
2010-07-01
In the area of traditional physics the atomic nucleus belongs to the most complex systems. It involves essentially all elements that characterize complexity, including the most distinctive one, whose essence is a permanent coexistence of coherent patterns and of randomness. From a more interdisciplinary perspective, it is the financial markets that represent an extreme complexity. Here, based on the matrix formalism, we set out some parallels between several characteristics of complexity in the above two systems. We refer, in particular, to the concept of random matrix theory, which historically originated from nuclear physics considerations, and demonstrate its utility in quantifying characteristics of the coexistence of chaos and collectivity for the financial markets as well. In the latter case we show examples that illustrate mapping of the matrix formulation onto concepts originating from graph theory. Finally, attention is drawn to some novel aspects of financial coherence which leave room for speculation as to whether analogous effects can be detected in atomic nuclei or in other strongly interacting Fermi systems.
The invariant statistical rule of aerosol scattering pulse signal modulated by random noise
NASA Astrophysics Data System (ADS)
Yan, Zhen-gang; Bian, Bao-Min; Yang, Juan; Peng, Gang; Li, Zhen-hua
2010-11-01
A model of random background noise acting on particle signals is established to study the impact of the background noise of the photoelectric sensor in a laser airborne particle counter on the statistical character of the aerosol scattering pulse signals. The results show that the noise broadens the statistical distribution of the particle measurements. Further numerical research shows that the output signal amplitude still has a lognormal distribution when airborne particles with a lognormal distribution are modulated by random noise that also has a lognormal distribution; that is, it obeys a law of statistical invariance. Based on this model, the background noise of the photoelectric sensor and the counting distributions of the random aerosol scattering pulse signals are obtained and analyzed using a high-speed data acquisition card (PCI-9812). The experimental results and simulation results are found to be in good agreement.
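The invariance claim (a lognormal signal modulated by lognormal noise stays lognormal) follows because taking logarithms turns the product into a sum of normals; a quick numerical check with made-up log-parameters:

```python
import math, random, statistics

# Made-up log-parameters for the scattering-pulse amplitude and the noise.
random.seed(0)
n = 20000
signal = [random.lognormvariate(1.0, 0.4) for _ in range(n)]
noise = [random.lognormvariate(0.2, 0.3) for _ in range(n)]
modulated = [s * e for s, e in zip(signal, noise)]

# ln(signal * noise) = ln(signal) + ln(noise): a sum of normals is normal,
# so the modulated amplitude is again lognormal, with added log-parameters.
logs = [math.log(v) for v in modulated]
mean_log = statistics.fmean(logs)     # should be near 1.0 + 0.2
var_log = statistics.variance(logs)   # should be near 0.4**2 + 0.3**2
```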
Detecting most influencing courses on students grades using block PCA
NASA Astrophysics Data System (ADS)
Othman, Osama H.; Gebril, Rami Salah
2014-12-01
One of the modern solutions adopted to deal with the problem of a large number of variables in statistical analyses is Block Principal Component Analysis (Block PCA). This modified technique can be used to reduce the vertical dimension (variables) of the data matrix Xn×p by selecting a smaller number of variables (say m) containing most of the statistical information. These selected variables can then be employed in further investigations and analyses. Block PCA is an adapted multistage technique of the original PCA. It involves the application of Cluster Analysis (CA) and variable selection through sub-principal component scores (PCs). The application of Block PCA in this paper is a modified version of the original work of Liu et al (2002). The main objective is to apply PCA on each group of variables (established using cluster analysis) instead of involving the whole large set of variables, which was shown to be unreliable. In this work, Block PCA is used to reduce the size of a large data matrix ((n = 41) × (p = 251)) consisting of the Grade Point Averages (GPA) of students in 251 courses (variables) in the faculty of science at Benghazi University. In other words, we construct a smaller analytical data matrix of student GPAs with fewer variables containing most of the variation (statistical information) in the original database. By applying Block PCA, 12 courses were found to `absorb' most of the variation or influence in the original data matrix, and hence are worth keeping for future exploratory and analytical studies. In addition, the course Independent Study (Math.) was found to be the most influential course on students' GPA among the 12 selected courses.
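A minimal sketch of the block-PCA selection step, assuming predefined variable blocks and using power iteration for the first principal component (the data and blocks are synthetic, not the GPA matrix):

```python
import random

def correlation_matrix(cols):
    """Correlation matrix of variables given as a list of columns."""
    T = len(cols[0])
    z = []
    for c in cols:
        m = sum(c) / T
        sd = (sum((x - m) ** 2 for x in c) / T) ** 0.5
        z.append([(x - m) / sd for x in c])
    p = len(cols)
    return [[sum(z[i][t] * z[j][t] for t in range(T)) / T for j in range(p)]
            for i in range(p)]

def first_pc_loadings(C, iters=200):
    """Leading eigenvector (first PC loadings) via power iteration."""
    p = len(C)
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def block_pca_select(data, blocks):
    """Run PCA separately on each block of column indices and keep, per block,
    the variable with the largest absolute loading on the first PC."""
    selected = []
    for block in blocks:
        cols = [[row[j] for row in data] for j in block]
        v = first_pc_loadings(correlation_matrix(cols))
        selected.append(block[max(range(len(block)), key=lambda k: abs(v[k]))])
    return selected

# Synthetic data: columns 0 and 1 share a common factor, column 2 is noise.
random.seed(3)
base = [random.gauss(0, 1) for _ in range(200)]
data = [[b + random.gauss(0, 0.3), b + random.gauss(0, 0.3), random.gauss(0, 1)]
        for b in base]
selected = block_pca_select(data, [[0, 1], [2]])
```

One representative variable per block survives, reducing the column dimension while retaining most within-block variation.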
Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao
2017-04-01
Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
Quadeer, Ahmed A.; Louie, Raymond H. Y.; Shekhar, Karthik; Chakraborty, Arup K.; Hsing, I-Ming
2014-01-01
Chronic hepatitis C virus (HCV) infection is one of the leading causes of liver failure and liver cancer, affecting around 3% of the world's population. The extreme sequence variability of the virus resulting from error-prone replication has thwarted the discovery of a universal prophylactic vaccine. It is known that vigorous and multispecific cellular immune responses, involving both helper CD4+ and cytotoxic CD8+ T cells, are associated with the spontaneous clearance of acute HCV infection. Escape mutations in viral epitopes can, however, abrogate protective T-cell responses, leading to viral persistence and associated pathologies. Despite the propensity of the virus to mutate, there might still exist substitutions that incur a fitness cost. In this paper, we identify groups of coevolving residues within HCV nonstructural protein 3 (NS3) by analyzing diverse sequences of this protein using ideas from random matrix theory and associated methods. Our analyses indicate that one of these groups comprises a large percentage of residues for which HCV appears to resist multiple simultaneous substitutions. Targeting multiple residues in this group through vaccine-induced immune responses should either lead to viral recognition or elicit escape substitutions that compromise viral fitness. Our predictions are supported by published clinical data, which suggested that immune genotypes associated with spontaneous clearance of HCV preferentially recognized and targeted this vulnerable group of residues. Moreover, mapping the sites of this group onto the available protein structure provided insight into its functional significance. An epitope-based immunogen is proposed as an alternative to the NS3 epitopes in the peptide-based vaccine IC41. IMPORTANCE Despite much experimental work on HCV, a thorough statistical study of the HCV sequences for the purpose of immunogen design was missing in the literature.
Such a study is vital to identify epistatic couplings among residues that can provide useful insights for designing a potent vaccine. In this work, ideas from random matrix theory were applied to characterize the statistics of substitutions within the diverse publicly available sequences of the genotype 1a HCV NS3 protein, leading to a group of sites for which HCV appears to resist simultaneous substitutions, possibly due to a deleterious effect on viral fitness. Our analysis leads to completely novel immunogen designs for HCV. In addition, the NS3 epitopes used in the recently proposed peptide-based vaccine IC41 were analyzed in the context of our framework. Our analysis predicts that alternative NS3 epitopes may be worth exploring, as they might be more efficacious. PMID:24760894
Tomasik, Andrzej; Jacheć, Wojciech; Wojciechowska, Celina; Kawecki, Damian; Białkowska, Beata; Romuk, Ewa; Gabrysiak, Artur; Birkner, Ewa; Kalarus, Zbigniew; Nowalany-Kozielska, Ewa
2015-05-01
Dual chamber pacing is known to have a detrimental effect on cardiac performance, and the heart failure that eventually occurs is associated with increased mortality. Experimental studies of pacing in dogs have shown contractile dyssynchrony leading to diffuse alterations in the extracellular matrix. In parallel, studies on experimental ischemia/reperfusion injury have shown the efficacy of valsartan in inhibiting the activity of matrix metalloproteinase-9, increasing the activity of tissue inhibitor of matrix metalloproteinase-3, and preserving global contractility and left ventricle ejection fraction. We present the rationale and design of a randomized, blinded trial aimed at assessing whether 12-month administration of valsartan will prevent left ventricle remodeling in patients with preserved left ventricle ejection fraction (LVEF ≥ 40%) and a first implantation of a dual chamber pacemaker. A total of 100 eligible patients will be randomized into three parallel arms: placebo, valsartan 80 mg/daily, and valsartan 160 mg/daily, added to previously used drugs. The primary endpoint will be the assessment of valsartan efficacy in preventing left ventricle remodeling during a 12-month follow-up. We assess patients' functional capacity, blood plasma activity of matrix metalloproteinases and their tissue inhibitors, NT-proBNP, tumor necrosis factor alpha, and troponin T. Left ventricle function and remodeling are assessed echocardiographically: M-mode, B-mode, and tissue Doppler imaging. If valsartan proves effective, it will be an attractive measure to improve long-term prognosis in an aging population with an increasing number of pacemaker recipients. ClinicalTrials.gov (NCT01805804).
Sangiorgio, João Paulo Menck; Neves, Felipe Lucas da Silva; Rocha Dos Santos, Manuela; França-Grohmann, Isabela Lima; Casarin, Renato Corrêa Viana; Casati, Márcio Zaffalon; Santamaria, Mauro Pedrine; Sallum, Enilson Antonio
2017-12-01
Considering xenogeneic collagen matrix (CM) and enamel matrix derivative (EMD) characteristics, it is suggested that their combination could promote superior clinical outcomes in root coverage procedures. Thus, the aim of this parallel, double-masked, dual-center, randomized clinical trial is to evaluate clinical outcomes after treatment of localized gingival recession (GR) by a coronally advanced flap (CAF) combined with CM and/or EMD. Sixty-eight patients presenting one Miller Class I or II GRs were randomly assigned to receive either CAF (n = 17); CAF + CM (n = 17); CAF + EMD (n = 17), or CAF + CM + EMD (n = 17). Recession height, probing depth, clinical attachment level, and keratinized tissue width and thickness were measured at baseline and 90 days and 6 months after surgery. The obtained root coverage was 68.04% ± 24.11% for CAF; 87.20% ± 15.01% for CAF + CM; 88.77% ± 20.66% for CAF + EMD; and 91.59% ± 11.08% for CAF + CM + EMD after 6 months. Groups that received biomaterials showed greater values (P <0.05). Complete root coverage (CRC) for CAF + EMD was 70.59%, significantly superior to CAF alone (23.53%); CAF + CM (52.94%), and CAF + CM + EMD (51.47%) (P <0.05). Keratinized tissue thickness gain was significant only in CM-treated groups (P <0.05). The three approaches are superior to CAF alone for root coverage. EMD provides highest levels of CRC; however, the addition of CM increases gingival thickness. The combination approach does not seem justified.
Covariance Matrix Estimation for Massive MIMO
NASA Astrophysics Data System (ADS)
Upadhya, Karthik; Vorobyov, Sergiy A.
2018-04-01
We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
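The cross-correlation idea can be sketched as follows: with two independently noisy channel estimates, the sample cross-correlation averages the estimation noise away, unlike the autocorrelation of a single estimate (antenna count, repetitions, and noise levels here are made up):

```python
import random

random.seed(0)

def cgauss(sd):
    """Circularly symmetric complex Gaussian sample."""
    return complex(random.gauss(0, sd), random.gauss(0, sd))

M_ant, K = 4, 2000  # antennas and pilot repetitions (hypothetical sizes)
h = [[cgauss(1.0) for _ in range(M_ant)] for _ in range(K)]        # channels
est1 = [[x + cgauss(0.5) for x in hv] for hv in h]  # estimate from pilot 1
est2 = [[x + cgauss(0.5) for x in hv] for hv in h]  # estimate from pilot 2

def corr(a, b):
    """Sample correlation matrix (1/K) * sum_k a_k b_k^H."""
    return [[sum(a[k][i] * b[k][j].conjugate() for k in range(K)) / K
             for j in range(M_ant)] for i in range(M_ant)]

R_cross = corr(est1, est2)  # noise terms are independent and average out
R_auto = corr(est1, est1)   # biased upward by the estimation-noise variance
```

The cross-correlation diagonal approaches the true channel variance, while the single-estimate autocorrelation overshoots it by the noise variance.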
Strategies for vectorizing the sparse matrix vector product on the CRAY XMP, CRAY 2, and CYBER 205
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Partridge, Harry
1987-01-01
Large, randomly sparse matrix vector products are important in a number of applications in computational chemistry, such as matrix diagonalization and the solution of simultaneous equations. Vectorization of this process is considered for the CRAY XMP, CRAY 2, and CYBER 205, using a matrix of dimension 20,000 with from 1 percent to 6 percent nonzeros. Efficient scatter/gather capabilities add coding flexibility and yield significant improvements in performance. For the CYBER 205, it is shown that minor changes in the I/O can reduce the CPU time by a factor of 50. Similar changes in the CRAY codes make a far smaller improvement.
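A sparse matrix vector product in compressed sparse row form, whose inner indexed load is exactly the gather that scatter/gather hardware accelerates; a scalar reference sketch (generic CSR, not the paper's exact storage layout):

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A x with A stored in compressed sparse row (CSR) form; the
    x[col_idx[k]] access is the gather operation the abstract refers to."""
    return [sum(values[k] * x[col_idx[k]]
                for k in range(row_ptr[r], row_ptr[r + 1]))
            for r in range(len(row_ptr) - 1)]

# A = [[1, 0, 2],
#      [0, 3, 0]] in CSR form, applied to the all-ones vector:
y = csr_matvec([1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3], [1.0, 1.0, 1.0])
```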
Development and Validation of a Job Exposure Matrix for Physical Risk Factors in Low Back Pain
Solovieva, Svetlana; Pehkonen, Irmeli; Kausto, Johanna; Miranda, Helena; Shiri, Rahman; Kauppinen, Timo; Heliövaara, Markku; Burdorf, Alex; Husgafvel-Pursiainen, Kirsti; Viikari-Juntura, Eira
2012-01-01
Objectives: The aim was to construct and validate a gender-specific job exposure matrix (JEM) for physical exposures to be used in epidemiological studies of low back pain (LBP). Materials and Methods: We utilized two large Finnish population surveys, one to construct the JEM and another to test matrix validity. The exposure axis of the matrix included exposures relevant to LBP (heavy physical work, heavy lifting, awkward trunk posture and whole body vibration) and exposures that increase the biomechanical load on the low back (arm elevation) or those that in combination with other known risk factors could be related to LBP (kneeling or squatting). Job titles with similar work tasks and exposures were grouped. Exposure information was based on face-to-face interviews. Validity of the matrix was explored by comparing the JEM (group-based) binary measures with individual-based measures. The predictive validity of the matrix against LBP was evaluated by comparing the associations of the group-based (JEM) exposures with those of individual-based exposures. Results: The matrix includes 348 job titles, representing 81% of all Finnish job titles in the early 2000s. The specificity of the constructed matrix was good, especially in women. The validity measured with the kappa-statistic ranged from good to poor, being fair for most exposures. In men, all group-based (JEM) exposures were statistically significantly associated with one-month prevalence of LBP. In women, four out of six group-based exposures showed an association with LBP. Conclusions: The gender-specific JEM for physical exposures showed relatively high specificity without compromising sensitivity. The matrix can therefore be considered as a valid instrument for exposure assessment in large-scale epidemiological studies, when more precise but more labour-intensive methods are not feasible.
Although the matrix was based on Finnish data we foresee that it could be applicable, with some modifications, in other countries with a similar level of technology. PMID:23152793
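The kappa statistic used above to quantify agreement between group-based and individual-based measures can be sketched generically (Cohen's kappa for two raters; the label pairs here are hypothetical):

```python
def cohens_kappa(pairs):
    """Cohen's kappa for two raters from (label_a, label_b) pairs:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(pairs)
    cats = {c for pair in pairs for c in pair}
    po = sum(1 for a, b in pairs if a == b) / n
    pe = sum((sum(1 for a, _ in pairs if a == c) / n) *
             (sum(1 for _, b in pairs if b == c) / n) for c in cats)
    return (po - pe) / (1 - pe)

# Three agreements out of four, with unbalanced marginals:
kappa = cohens_kappa([(1, 1), (1, 1), (0, 0), (1, 0)])
```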
NASA Astrophysics Data System (ADS)
Trifoniuk, L. I.; Ushenko, Yu. A.; Sidor, M. I.; Minzer, O. P.; Gritsyuk, M. V.; Novakovskaya, O. Y.
2014-08-01
This work presents the results of an investigation of the diagnostic efficiency of a new azimuthally stable Mueller-matrix method for analyzing the coordinate distributions of laser autofluorescence in histological sections of biological tissues. A new model of the generalized optical anisotropy of the protein networks of biological tissues is proposed in order to describe the processes of laser autofluorescence. The influence of the complex mechanisms of both phase anisotropy (linear birefringence and optical activity) and linear (circular) dichroism is taken into account. The interconnections between the azimuthally stable Mueller-matrix elements characterizing laser autofluorescence and the different mechanisms of optical anisotropy are determined. A statistical analysis of the coordinate distributions of such Mueller-matrix rotation invariants is proposed. On this basis, quantitative criteria (statistical moments of the 1st to 4th order) are estimated for differentiating histological sections of uterine wall tumors: group 1 (dysplasia) and group 2 (adenocarcinoma).
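The statistical moments of the 1st to 4th order used above as differentiation criteria are the mean, variance, skewness, and kurtosis of a distribution; a minimal sketch:

```python
import math

def statistical_moments(values):
    """First four statistical moments of a sample: mean, variance,
    skewness, and kurtosis (a normal distribution has kurtosis 3)."""
    n = len(values)
    mean = sum(values) / n
    central = [v - mean for v in values]
    var = sum(c * c for c in central) / n
    sd = math.sqrt(var)
    skew = sum(c ** 3 for c in central) / n / sd ** 3
    kurt = sum(c ** 4 for c in central) / n / sd ** 4
    return mean, var, skew, kurt

mean, var, skew, kurt = statistical_moments([1.0, 2.0, 3.0])
```

Applied pixel-wise to a Mueller-matrix invariant map, these four numbers summarize the coordinate distribution in exactly the way the differentiation criteria above require.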