Sample records for random matrix analysis

  1. Chaos and random matrices in supersymmetric SYK

    NASA Astrophysics Data System (ADS)

    Hunter-Jones, Nicholas; Liu, Junyu

    2018-05-01

    We use random matrix theory to explore late-time chaos in supersymmetric quantum mechanical systems. Motivated by the recent study of supersymmetric SYK models and their random matrix classification, we consider the Wishart-Laguerre unitary ensemble and compute the spectral form factors and frame potentials to quantify chaos and randomness. Compared to the Gaussian ensembles, we observe the absence of a dip regime in the form factor and a slower approach to Haar-random dynamics. We find agreement between our random matrix analysis and predictions from the supersymmetric SYK model, and discuss the implications for supersymmetric chaotic systems.
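
    For readers who want to reproduce the qualitative behaviour described above, the spectral form factor can be estimated numerically for any ensemble. The following is a minimal sketch (not the authors' code) that samples Wishart-Laguerre (LUE) matrices H = XX† with complex Gaussian X and averages |Tr exp(-iHt)|² over realizations; the matrix size, sample count, normalization, and time grid are illustrative choices.

      import numpy as np

      def spectral_form_factor(times, N=64, samples=200, seed=0):
          """Average |Tr exp(-iHt)|^2 over Wishart-Laguerre (LUE) matrices H = X X^dagger."""
          rng = np.random.default_rng(seed)
          sff = np.zeros(len(times))
          for _ in range(samples):
              X = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
              eigvals = np.linalg.eigvalsh(X @ X.conj().T)      # real, non-negative Wishart spectrum
              phases = np.exp(-1j * np.outer(times, eigvals))   # e^{-i t lambda_k} for every t and k
              sff += np.abs(phases.sum(axis=1)) ** 2            # |Tr exp(-iHt)|^2 for this realization
          return sff / samples

      times = np.logspace(-1, 3, 200)
      g = spectral_form_factor(times)  # plotting g vs. times exposes the dip/ramp/plateau structure (or its absence)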

  2. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Suliman, Mohamed; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-12-01

    In this supplementary appendix we provide proofs and additional extensive simulations that complement the analysis of the main paper (constrained perturbation regularization approach for signal estimation using random matrix theory).

  3. Social patterns revealed through random matrix theory

    NASA Astrophysics Data System (ADS)

    Sarkar, Camellia; Jalan, Sarika

    2014-11-01

    Despite the tremendous advancements in the field of network theory, very few studies have taken into consideration the weights of interactions, which emerge naturally in all real-world systems. Using random matrix analysis of a weighted social network, we demonstrate the profound impact of interaction weights on emerging structural properties. The analysis reveals that randomness existing in a particular time frame affects the decisions of individuals, granting them more freedom of choice in situations of financial security. While the structural organization of the networks remains the same across all datasets, random matrix theory provides insight into the interaction patterns of individuals of the society in situations of crisis. It has also been contemplated that individual accountability in terms of weighted interactions remains a key to success unless segregation of tasks comes into play.

  4. Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa

    2018-01-01

    A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. RSRPCA combines the advantages of a randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties and that the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA uses random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, RSRPCA adopts columnwise RPCA (CWRPCA) to eliminate the negative effects of sampled anomaly pixels, purifying the randomized column subspace by removing sampled anomaly columns. CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., the background component), a noisy matrix (i.e., the noise component), and a sparse anomaly matrix (i.e., the anomaly component) with only a small proportion of nonzero columns. The inexact augmented Lagrange multiplier algorithm is used to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all pixels are projected onto the complementary subspace of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally located. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods in both detection performance and computational time.
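
    A minimal sketch of the subspace-projection stage of this pipeline, assuming the HSI cube has already been flattened to a bands-by-pixels matrix: columns are sampled at random to span the background, and each pixel is scored by its residual in the complementary subspace. The CWRPCA purification and the inexact augmented Lagrange multiplier solver are deliberately omitted, and the sample size is an arbitrary illustrative value.

      import numpy as np

      def subspace_anomaly_scores(X, n_cols=500, seed=0):
          """X: bands x pixels HSI matrix. Score each pixel by its distance to a
          randomly sampled column (background) subspace; large scores suggest anomalies."""
          rng = np.random.default_rng(seed)
          idx = rng.choice(X.shape[1], size=min(n_cols, X.shape[1]), replace=False)
          Q, _ = np.linalg.qr(X[:, idx])            # orthonormal basis of the sampled column subspace
          residual = X - Q @ (Q.T @ X)              # projection onto the complementary subspace
          return np.linalg.norm(residual, axis=0)   # per-pixel anomaly score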

  5. Gravitational lensing by eigenvalue distributions of random matrix models

    NASA Astrophysics Data System (ADS)

    Martínez Alonso, Luis; Medina, Elena

    2018-05-01

    We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.

  6. A generalization of random matrix theory and its application to statistical physics.

    PubMed

    Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H

    2017-02-01

    To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, called autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first determine, analytically and numerically, how auto-correlations affect the eigenvalue distribution of the correlation matrix. We then introduce ARRMT with a detailed procedure for how to implement the method. Finally, we illustrate the method using two examples taken from inflation rates and air pressure data for 95 US cities.

  7. A novel image encryption algorithm based on synchronized random bit generated in cascade-coupled chaotic semiconductor ring lasers

    NASA Astrophysics Data System (ADS)

    Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun

    2018-03-01

    In this paper, a novel image encryption algorithm based on synchronization of physical random bit generated in a cascade-coupled semiconductor ring lasers (CCSRL) system is proposed, and the security analysis is performed. In both transmitter and receiver parts, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on control matrix extracted from physical random bit, and pixel diffusion based on random bit stream extracted from physical random bit. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bit with verified randomness is generated based on chaos in the CCSRL system, and is used to simultaneously generate the control matrix and random bit stream. Finally, the control matrix and random bit stream are used for the encryption algorithm in order to change the position and the values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and thus is an excellent candidate for secure image communication application.

  8. Finite-time stability of neutral-type neural networks with random time-varying delays

    NASA Astrophysics Data System (ADS)

    Ali, M. Syed; Saravanan, S.; Zhu, Quanxin

    2017-11-01

    This paper is devoted to the finite-time stability analysis of neutral-type neural networks with random time-varying delays. The randomly time-varying delays are characterised by a Bernoulli stochastic variable. We construct a suitable Lyapunov-Krasovskii functional and establish a set of sufficient conditions, in the form of linear matrix inequalities, that guarantee the finite-time stability of the system concerned; the result can be extended to the analysis and design of neutral-type neural networks with random time-varying delays. By employing Jensen's inequality, the free-weighting matrix method and Wirtinger's double integral inequality, the proposed conditions are derived, and two numerical examples are presented to demonstrate the effectiveness of the developed techniques.

  9. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

    Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
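
    A minimal sketch of the simulation step in Horn's parallel analysis, assuming a correlation matrix (the article also treats the covariance case), normally distributed surrogate data, and a 95th-percentile retention rule; the Tracy-Widom procedure favoured in the article replaces this simulation with an asymptotic distribution for the largest eigenvalue.

      import numpy as np

      def parallel_analysis(data, n_sim=500, quantile=95, seed=0):
          """Retain leading components whose observed correlation-matrix eigenvalues exceed
          the chosen quantile of eigenvalues from random normal data of the same n x p size."""
          rng = np.random.default_rng(seed)
          n, p = data.shape
          obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
          sim = np.empty((n_sim, p))
          for i in range(n_sim):
              noise = rng.standard_normal((n, p))
              sim[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
          thresholds = np.percentile(sim, quantile, axis=0)   # per-rank surrogate thresholds
          keep = 0
          for k in range(p):                                  # count sequentially from the first component
              if obs[k] > thresholds[k]:
                  keep += 1
              else:
                  break
          return keep, obs, thresholds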

  10. Statistical analysis of effective singular values in matrix rank determination

    NASA Technical Reports Server (NTRS)

    Konstantinides, Konstantinos; Yao, Kung

    1988-01-01

    A major problem in using SVD (singular-value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theory of perturbations of singular values and on statistical significance tests. Threshold bounds for perturbation due to finite-precision and i.i.d. random models are evaluated. In the random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
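
    A simplified sketch of the thresholding idea, assuming an m x n data matrix contaminated by i.i.d. noise of known standard deviation sigma: the largest singular value of a pure-noise matrix concentrates near sigma(sqrt(m) + sqrt(n)), so singular values above that level are counted as significant. The paper's confidence-region construction is more refined; the safety factor here is a hypothetical stand-in for the chosen significance level.

      import numpy as np

      def effective_rank(A, sigma, safety=1.0):
          """Count singular values of A that exceed a noise-floor threshold.
          For an m x n matrix of i.i.d. noise with std sigma, the largest singular
          value concentrates near sigma*(sqrt(m)+sqrt(n)); `safety` inflates that bound."""
          m, n = A.shape
          s = np.linalg.svd(A, compute_uv=False)
          threshold = safety * sigma * (np.sqrt(m) + np.sqrt(n))
          return int(np.sum(s > threshold)), s, threshold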

  11. Securing image information using double random phase encoding and parallel compressive sensing with updated sampling processes

    NASA Astrophysics Data System (ADS)

    Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing

    2017-11-01

    Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution that updates the CS measurement matrix by altering the secret sparse basis with the help of counter mode operation. In particular, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with the conventional CS-based cryptosystem that regenerates all the random entries of the measurement matrix, our scheme is more efficient while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has good security performance and is robust against noise and occlusion.

  12. A note on variance estimation in random effects meta-regression.

    PubMed

    Sidik, Kurex; Jonkman, Jeffrey N

    2005-01-01

    For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.

  13. Random matrix theory and portfolio optimization in Moroccan stock exchange

    NASA Astrophysics Data System (ADS)

    El Alaoui, Marwane

    2015-09-01

    In this work, we use random matrix theory to analyze the eigenvalues of the correlation matrix and to check for the presence of pertinent information, using the Marčenko-Pastur distribution. To this end, we study the cross-correlations among stocks of the Casablanca Stock Exchange. Moreover, we clean the correlation matrix of noisy elements to see whether the gap between predicted risk and realized risk is reduced. We also analyze the distributions of eigenvector components and their degree of deviation by computing the inverse participation ratio. This analysis is a way to understand the correlation structure among the stocks of a Casablanca Stock Exchange portfolio.
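
    The filtering and inverse-participation-ratio steps described above are standard RMT practice; this sketch (not the author's code) assumes a T x N matrix of standardized daily returns and uses the Marčenko-Pastur upper edge for uncorrelated data of the same shape to separate noisy from informative eigenvalues.

      import numpy as np

      def mp_clean(returns):
          """returns: T x N matrix of standardized stock returns.
          Flag eigenvalues below the Marcenko-Pastur edge as noise, compute the inverse
          participation ratio (IPR) of each eigenvector, and rebuild a cleaned correlation matrix."""
          T, N = returns.shape
          C = np.corrcoef(returns, rowvar=False)
          lam, vec = np.linalg.eigh(C)                       # ascending eigenvalues, columns = eigenvectors
          q = T / N
          lam_max = (1 + 1 / np.sqrt(q)) ** 2                # MP upper edge for pure noise
          ipr = np.sum(vec ** 4, axis=0)                     # IPR per eigenvector
          noisy = lam < lam_max
          lam_clean = lam.copy()
          lam_clean[noisy] = lam[noisy].mean()               # flatten the noise band, preserving the trace
          C_clean = (vec * lam_clean) @ vec.T
          np.fill_diagonal(C_clean, 1.0)
          return C_clean, lam, ipr, lam_max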

  14. Market Correlation Structure Changes Around the Great Crash: A Random Matrix Theory Analysis of the Chinese Stock Market

    NASA Astrophysics Data System (ADS)

    Han, Rui-Qi; Xie, Wen-Jie; Xiong, Xiong; Zhang, Wei; Zhou, Wei-Xing

    The correlation structure of a stock market contains important financial content, which may change remarkably due to the occurrence of a financial crisis. We perform a comparative analysis of the Chinese stock market around the occurrence of the 2008 crisis based on random matrix analysis of high-frequency returns of 1228 Chinese stocks. Both the raw correlation matrix and the partial correlation matrix with respect to the market index, in two one-year periods, are investigated. We find that the Chinese stocks have stronger average correlation and partial correlation in 2008 than in 2007, and that the average partial correlation is significantly weaker than the average correlation in each period. Accordingly, the largest eigenvalue of the correlation matrix is remarkably greater than that of the partial correlation matrix in each period. Moreover, each largest eigenvalue and its eigenvector reflect an evident market effect, while the other deviating eigenvalues do not. We find no evidence that deviating eigenvalues contain industrial sector information. Surprisingly, the eigenvectors of the second largest eigenvalue in 2007 and of the third largest eigenvalue in 2008 are able to distinguish the stocks from the two exchanges. We also find that the component magnitudes of some of the largest eigenvectors are proportional to the stocks’ capitalizations.

  15. Random Matrix Theory in molecular dynamics analysis.

    PubMed

    Palese, Luigi Leonardo

    2015-01-01

    It is well known that, in some situations, principal component analysis (PCA) carried out on molecular dynamics data results in the appearance of cosine-shaped low-index projections. Because this is reminiscent of the results obtained by performing PCA on multidimensional Brownian dynamics, it has been suggested that short-time protein dynamics is essentially nothing more than a noisy signal. Here we use Random Matrix Theory to analyze a series of short-time molecular dynamics experiments which are specifically designed to be simulations with high cosine content. We use as a model system the protein apoCox17, a mitochondrial copper chaperone. Spectral analysis of the correlation matrices allows us to easily differentiate random correlations, deriving simply from the finite length of the process, from non-random signals reflecting the intrinsic properties of the system. Our results clearly show that protein dynamics is not really Brownian, even in the presence of cosine-shaped low-index projections on the principal axes.

  16. Quasiparticle random phase approximation uncertainties and their correlations in the analysis of 0{nu}{beta}{beta} decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faessler, Amand; Rodin, V.; Fogli, G. L.

    2009-03-01

    The variances and covariances associated to the nuclear matrix elements of neutrinoless double beta decay (0{nu}{beta}{beta}) are estimated within the quasiparticle random phase approximation. It is shown that correlated nuclear matrix elements uncertainties play an important role in the comparison of 0{nu}{beta}{beta} decay rates for different nuclei, and that they are degenerate with the uncertainty in the reconstructed Majorana neutrino mass.

  17. Data-driven probability concentration and sampling on manifold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu

    2016-09-15

    A new methodology is proposed for generating realizations of a random vector with values in a finite-dimensional Euclidean space that are statistically consistent with a dataset of observations of this vector. The probability distribution of this random vector, while a priori not known, is presumed to be concentrated on an unknown subset of the Euclidean space. A random matrix is introduced whose columns are independent copies of the random vector and for which the number of columns is the number of data points in the dataset. The approach is based on the use of (i) the multidimensional kernel-density estimation method for estimating the probability distribution of the random matrix, (ii) a MCMC method for generating realizations for the random matrix, (iii) the diffusion-maps approach for discovering and characterizing the geometry and the structure of the dataset, and (iv) a reduced-order representation of the random matrix, which is constructed using the diffusion-maps vectors associated with the first eigenvalues of the transition matrix relative to the given dataset. The convergence aspects of the proposed methodology are analyzed and a numerical validation is explored through three applications of increasing complexity. The proposed method is found to be robust to noise levels and data complexity as well as to the intrinsic dimension of data and the size of experimental datasets. Both the methodology and the underlying mathematical framework presented in this paper contribute new capabilities and perspectives at the interface of uncertainty quantification, statistical data analysis, stochastic modeling and associated statistical inverse problems.

  18. Bi-dimensional null model analysis of presence-absence binary matrices.

    PubMed

    Strona, Giovanni; Ulrich, Werner; Gotelli, Nicholas J

    2018-01-01

    Comparing the structure of presence/absence (i.e., binary) matrices with those of randomized counterparts is a common practice in ecology. However, differences in the randomization procedures (null models) can affect the results of the comparisons, leading matrix structural patterns to appear either "random" or not. Subjectivity in the choice of one particular null model over another makes it often advisable to compare the results obtained using several different approaches. Yet, available algorithms to randomize binary matrices differ substantially with respect to the constraints they impose on the discrepancy between observed and randomized row and column marginal totals, which complicates the interpretation of contrasting patterns. This calls for new strategies both to explore intermediate scenarios of restrictiveness in-between extreme constraint assumptions, and to properly synthesize the resulting information. Here we introduce a new modeling framework based on a flexible matrix randomization algorithm (named the "Tuning Peg" algorithm) that addresses both issues. The algorithm consists of a modified swap procedure in which the discrepancy between the row and column marginal totals of the target matrix and those of its randomized counterpart can be "tuned" in a continuous way by two parameters (controlling, respectively, row and column discrepancy). We show how combining the Tuning Peg with a wise random walk procedure makes it possible to explore the complete null space embraced by existing algorithms. This exploration allows researchers to visualize matrix structural patterns in an innovative bi-dimensional landscape of significance/effect size. We demonstrate the rationale and potential of our approach with a set of simulated and real matrices, showing how the simultaneous investigation of a comprehensive and continuous portion of the null space can be extremely informative, and possibly key to resolving longstanding debates in the analysis of ecological matrices.
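
    For context, the fully constrained ("fixed-fixed") null model that the Tuning Peg algorithm relaxes can be written as a plain checkerboard-swap procedure. The sketch below is that baseline, strictly preserving row and column totals; it is not the Tuning Peg itself, whose row and column discrepancy parameters are only described in the abstract, and the number of attempted swaps is an illustrative choice.

      import numpy as np

      def swap_randomize(M, n_swaps=10000, seed=0):
          """Randomize a binary presence/absence matrix while preserving row and
          column totals, using 2x2 checkerboard swaps (fixed-fixed null model)."""
          rng = np.random.default_rng(seed)
          M = M.copy()
          rows, cols = M.shape
          for _ in range(n_swaps):
              r1, r2 = rng.choice(rows, size=2, replace=False)
              c1, c2 = rng.choice(cols, size=2, replace=False)
              sub = M[np.ix_([r1, r2], [c1, c2])]
              # swap only if the 2x2 submatrix is a checkerboard, which preserves the marginals
              if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
                  M[np.ix_([r1, r2], [c1, c2])] = 1 - sub
          return M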

  19. Universal shocks in the Wishart random-matrix ensemble.

    PubMed

    Blaizot, Jean-Paul; Nowak, Maciej A; Warchoł, Piotr

    2013-05-01

    We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large N limit, this equation generalizes the simple inviscid Burgers equation that has been obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial, in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (so-called hard-edge singularities) familiar in random-matrix theory.

  20. Spectra of empirical autocorrelation matrices: A random-matrix-theory-inspired perspective

    NASA Astrophysics Data System (ADS)

    Jamali, Tayeb; Jafari, G. R.

    2015-07-01

    We construct an autocorrelation matrix of a time series and analyze it using the random-matrix-theory (RMT) approach. The autocorrelation matrix is capable of extracting information which is not easily accessible by direct analysis of the autocorrelation function. In order to draw precise conclusions from the information extracted from the autocorrelation matrix, the results must first be evaluated; in other words, they need to be compared with some criterion that provides a basis for the most suitable and applicable conclusions. In the present study, this criterion is chosen to be the well-known fractional Gaussian noise (fGn). We illustrate the applicability of our method in the context of stock markets: despite the non-Gaussianity of stock-market returns, a remarkable agreement with the fGn is achieved.

  1. Semistochastic approach to many electron systems

    NASA Astrophysics Data System (ADS)

    Grossjean, M. K.; Grossjean, M. F.; Schulten, K.; Tavan, P.

    1992-08-01

    A Pariser-Parr-Pople (PPP) Hamiltonian of the 8π-electron system of the molecule octatetraene, represented in a configuration-interaction (CI) basis, is analyzed with respect to the statistical properties of its matrix elements. Based on this analysis we develop an effective Hamiltonian which represents virtual excitations by a Gaussian orthogonal ensemble (GOE). We also examine numerical approaches which replace the original Hamiltonian by a semistochastically generated CI matrix, in which the matrix elements of high-energy excitations are chosen randomly according to distributions reflecting the statistics of the original CI matrix.

  2. Conditional random matrix ensembles and the stability of dynamical systems

    NASA Astrophysics Data System (ADS)

    Kirk, Paul; Rolando, Delphine M. Y.; MacLean, Adam L.; Stumpf, Michael P. H.

    2015-08-01

    Random matrix theory (RMT) has found applications throughout physics and applied mathematics, in subject areas as diverse as communications networks, population dynamics, neuroscience, and models of the banking system. Many of these analyses exploit elegant analytical results, particularly the circular law and its extensions. In order to apply these results, assumptions must be made about the distribution of matrix elements. Here we demonstrate that the choice of matrix distribution is crucial. In particular, adopting an unrealistic matrix distribution for the sake of analytical tractability is liable to lead to misleading conclusions. We focus on the application of RMT to the long-standing, and at times fractious, ‘diversity-stability debate’, which is concerned with establishing whether large complex systems are likely to be stable. Early work (and subsequent elaborations) brought RMT to bear on the debate by modelling the entries of a system’s Jacobian matrix as independent and identically distributed (i.i.d.) random variables. These analyses were successful in yielding general results that were not tied to any specific system, but relied upon a restrictive i.i.d. assumption. Other studies took an opposing approach, seeking to elucidate general principles of stability through the analysis of specific systems. Here we develop a statistical framework that reconciles these two contrasting approaches. We use a range of illustrative dynamical systems examples to demonstrate that: (i) stability probability cannot be summarily deduced from any single property of the system (e.g. its diversity); and (ii) our assessment of stability depends on adequately capturing the details of the systems analysed. Failing to condition on the structure of dynamical systems will skew our analysis and can, even for very small systems, result in an unnecessarily pessimistic diagnosis of their stability.
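
    A minimal sketch of the kind of numerical experiment this argument invokes: estimate the probability that a randomly drawn Jacobian is stable (all eigenvalues with negative real part), once for i.i.d. May-style interactions and, by passing a different sampler, for any structured alternative. The dimension, interaction strength, and self-regulation value are illustrative assumptions, not the paper's settings.

      import numpy as np

      def stability_probability(sample_jacobian, n_trials=1000, seed=0):
          """Fraction of sampled Jacobians whose eigenvalues all have negative real part."""
          rng = np.random.default_rng(seed)
          stable = 0
          for _ in range(n_trials):
              J = sample_jacobian(rng)
              if np.max(np.linalg.eigvals(J).real) < 0:
                  stable += 1
          return stable / n_trials

      def iid_jacobian(rng, n=20, sigma=0.3, d=1.0):
          """May-style community matrix: i.i.d. N(0, sigma^2) interactions, -d on the diagonal."""
          J = sigma * rng.standard_normal((n, n))
          np.fill_diagonal(J, -d)
          return J

      p_iid = stability_probability(iid_jacobian)  # compare against a structured sampler of your own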

  3. A time-series approach to dynamical systems from classical and quantum worlds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fossion, Ruben

    2014-01-08

    This contribution discusses some recent applications of time-series analysis in Random Matrix Theory (RMT), and applications of RMT in the statistical analysis of eigenspectra of correlation matrices of multivariate time series.

  4. The fast algorithm of spark in compressive sensing

    NASA Astrophysics Data System (ADS)

    Xie, Meihua; Yan, Fengxia

    2017-01-01

    Compressed Sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the reconstruction condition of a signal is an important theoretical problem, and the spark is a good index for studying it. However, computing the spark is NP-hard. In this paper, we study the problem of computing the spark. For some special matrices, for example, Gaussian random matrices and 0-1 random matrices, we obtain several conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For a general matrix, two methods are given to compute the spark: direct search and dual-tree search. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we tested the computation time of these two methods. Numerical results show that the dual-tree search method is more efficient than direct search, especially for matrices with roughly as many rows as columns.
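
    A brute-force sketch of the direct-search method mentioned above: the spark of A is the size of the smallest linearly dependent set of columns, so subsets are checked in order of increasing size. The cost is exponential in general, which is why the dual-tree search and the Gaussian-matrix result (spark = rows + 1 with probability 1) matter; the rank tolerance is an illustrative choice.

      import numpy as np
      from itertools import combinations

      def spark(A, tol=1e-10):
          """Smallest number of linearly dependent columns of A (np.inf if the columns
          are linearly independent). Brute force: exponential in the number of columns."""
          m, n = A.shape
          for k in range(1, n + 1):
              if k > m:                      # any m+1 columns in R^m are dependent
                  return m + 1
              for cols in combinations(range(n), k):
                  if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                      return k
          return np.inf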

  5. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    ERIC Educational Resources Information Center

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  6. Correlation and volatility in an Indian stock market: A random matrix approach

    NASA Astrophysics Data System (ADS)

    Kulkarni, Varsha; Deo, Nivedita

    2007-11-01

    We examine the volatility of an Indian stock market in terms of correlation of stocks and quantify the volatility using the random matrix approach. First we discuss trends observed in the pattern of stock prices in the Bombay Stock Exchange for the three-year period 2000-2002. Random matrix analysis is then applied to study the relationship between the coupling of stocks and volatility. The study uses daily returns of 70 stocks for successive time windows of length 85 days for the year 2001. We compare the properties of matrix C of correlations between price fluctuations in time regimes characterized by different volatilities. Our analyses reveal that (i) the largest (deviating) eigenvalue of C correlates highly with the volatility of the index, (ii) there is a shift in the distribution of the components of the eigenvector corresponding to the largest eigenvalue across regimes of different volatilities, (iii) the inverse participation ratio for this eigenvector anti-correlates significantly with the market fluctuations, and finally, (iv) this eigenvector of C can be used to set up a Correlation Index, CI, whose temporal evolution is significantly correlated with the volatility of the overall market index.

  7. Correlation analysis of the Korean stock market: Revisited to consider the influence of foreign exchange rate

    NASA Astrophysics Data System (ADS)

    Jo, Sang Kyun; Kim, Min Jae; Lim, Kyuseong; Kim, Soo Yong

    2018-02-01

    We investigated the effect of the foreign exchange rate in a correlation analysis of the Korean stock market using both random matrix theory and a minimum spanning tree. We collected data sets of two types of stock price, the original stock prices in Korean Won and the prices converted into US dollars at contemporary foreign exchange rates. Comparing the random matrix analyses based on the two different prices, a few particular sectors exhibited substantial differences while other sectors changed little. These particular sectors were closely related to economic circumstances and the influence of foreign financial markets during that period. The method introduced in this paper offers a way to pinpoint the effect of the exchange rate on an emerging stock market.

  8. A matrix-based method of moments for fitting the multivariate random effects model for meta-analysis and meta-regression

    PubMed Central

    Jackson, Dan; White, Ian R; Riley, Richard D

    2013-01-01

    Multivariate meta-analysis is becoming more commonly used. Methods for fitting the multivariate random effects model include maximum likelihood, restricted maximum likelihood, Bayesian estimation and multivariate generalisations of the standard univariate method of moments. Here, we provide a new multivariate method of moments for estimating the between-study covariance matrix with the properties that (1) it allows for either complete or incomplete outcomes and (2) it allows for covariates through meta-regression. Further, for complete data, it is invariant to linear transformations. Our method reduces to the usual univariate method of moments, proposed by DerSimonian and Laird, in a single dimension. We illustrate our method and compare it with some of the alternatives using a simulation study and a real example. PMID:23401213
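
    For orientation, the univariate DerSimonian and Laird method of moments to which the multivariate estimator reduces in one dimension can be written in a few lines; this is the standard textbook form, not the authors' multivariate generalization.

      import numpy as np

      def dersimonian_laird(effects, variances):
          """Univariate DerSimonian-Laird estimate of the between-study variance tau^2
          and the resulting random-effects pooled estimate."""
          y = np.asarray(effects, dtype=float)
          v = np.asarray(variances, dtype=float)
          w = 1.0 / v                                   # fixed-effect weights
          mu_fe = np.sum(w * y) / np.sum(w)
          Q = np.sum(w * (y - mu_fe) ** 2)              # Cochran's Q statistic
          k = len(y)
          denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
          tau2 = max(0.0, (Q - (k - 1)) / denom)        # truncated at zero
          w_re = 1.0 / (v + tau2)                       # random-effects weights
          mu_re = np.sum(w_re * y) / np.sum(w_re)
          return tau2, mu_re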

  9. Google matrix analysis of directed networks

    NASA Astrophysics Data System (ADS)

    Ermann, Leonardo; Frahm, Klaus M.; Shepelyansky, Dima L.

    2015-10-01

    In the past decade modern societies have developed enormous communication and social networks. Their classification and information retrieval processing has become a formidable task for the society. Because of the rapid growth of the World Wide Web, and social and communication networks, new mathematical methods have been invented to characterize the properties of these networks in a more detailed and precise way. Various search engines extensively use such methods. It is highly important to develop new tools to classify and rank a massive amount of network information in a way that is adapted to internal network structures and characteristics. This review describes the Google matrix analysis of directed complex networks demonstrating its efficiency using various examples including the World Wide Web, Wikipedia, software architectures, world trade, social and citation networks, brain neural networks, DNA sequences, and Ulam networks. The analytical and numerical matrix methods used in this analysis originate from the fields of Markov chains, quantum chaos, and random matrix theory.
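
    A minimal sketch of the Google matrix construction named above for a small directed network: dangling columns are replaced by uniform vectors, the stochastic matrix is damped toward the uniform matrix with the customary alpha = 0.85, and the PageRank vector is obtained by power iteration. The column-stochastic convention and the damping value are the common textbook choices, not necessarily those of the review.

      import numpy as np

      def google_matrix(adj, alpha=0.85):
          """adj[i, j] = 1 if node j links to node i. Returns the column-stochastic Google matrix."""
          A = np.asarray(adj, dtype=float)
          n = A.shape[0]
          col_sums = A.sum(axis=0)
          S = np.where(col_sums > 0, A / np.where(col_sums > 0, col_sums, 1), 1.0 / n)  # dangling columns -> uniform
          return alpha * S + (1 - alpha) / n

      def pagerank(G, n_iter=100):
          """Leading eigenvector of the Google matrix G by power iteration (PageRank)."""
          p = np.full(G.shape[0], 1.0 / G.shape[0])
          for _ in range(n_iter):
              p = G @ p
              p /= p.sum()
          return p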

  10. Random matrix theory and fund of funds portfolio optimisation

    NASA Astrophysics Data System (ADS)

    Conlon, T.; Ruskin, H. J.; Crane, M.

    2007-08-01

    The proprietary nature of Hedge Fund investing means that it is common practice for managers to release minimal information about their returns. The construction of a fund of hedge funds portfolio requires a correlation matrix which often has to be estimated using a relatively small sample of monthly returns data which induces noise. In this paper, random matrix theory (RMT) is applied to a cross-correlation matrix C, constructed using hedge fund returns data. The analysis reveals a number of eigenvalues that deviate from the spectrum suggested by RMT. The components of the deviating eigenvectors are found to correspond to distinct groups of strategies that are applied by hedge fund managers. The inverse participation ratio is used to quantify the number of components that participate in each eigenvector. Finally, the correlation matrix is cleaned by separating the noisy part from the non-noisy part of C. This technique is found to greatly reduce the difference between the predicted and realised risk of a portfolio, leading to an improved risk profile for a fund of hedge funds.

  11. RSAT: regulatory sequence analysis tools.

    PubMed

    Thomas-Chollier, Morgane; Sand, Olivier; Turatsinze, Jean-Valéry; Janky, Rekin's; Defrance, Matthieu; Vervisch, Eric; Brohée, Sylvain; van Helden, Jacques

    2008-07-01

    The regulatory sequence analysis tools (RSAT, http://rsat.ulb.ac.be/rsat/) is a software suite that integrates a wide collection of modular tools for the detection of cis-regulatory elements in genome sequences. The suite includes programs for sequence retrieval, pattern discovery, phylogenetic footprint detection, pattern matching, genome scanning and feature map drawing. Random controls can be performed with random gene selections or by generating random sequences according to a variety of background models (Bernoulli, Markov). Beyond the original word-based pattern-discovery tools (oligo-analysis and dyad-analysis), we recently added a battery of tools for matrix-based detection of cis-acting elements, with some original features (adaptive background models, Markov-chain estimation of P-values) that do not exist in other matrix-based scanning tools. The web server offers an intuitive interface, where each program can be accessed either separately or connected to the other tools. In addition, the tools are now available as web services, enabling their integration in programmatic workflows. Genomes are regularly updated from various genome repositories (NCBI and EnsEMBL) and 682 organisms are currently supported. Since 1998, the tools have been used by several hundred researchers from all over the world. Several predictions made with RSAT were validated experimentally and published.

  12. Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model

    NASA Astrophysics Data System (ADS)

    Margarint, Vlad

    2018-06-01

    We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_{xy}, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x-y| is less than the band width W, and zero otherwise. We improve the previous results on the convergence of quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.
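
    A minimal sketch of the d = 1 version of this ensemble: a Hermitian matrix whose entries are independent and uniformly distributed when |x - y| < W and zero otherwise. The interval [-1, 1] and the lack of further normalization are illustrative choices rather than the paper's conventions.

      import numpy as np

      def random_band_matrix(n, W, seed=0):
          """Hermitian n x n random band matrix (d = 1): H[x, y] is an independent,
          uniformly distributed complex entry when 0 < |x - y| < W, the diagonal is
          real and uniform, and everything outside the band is zero."""
          rng = np.random.default_rng(seed)
          H = np.zeros((n, n), dtype=complex)
          for x in range(n):
              H[x, x] = rng.uniform(-1, 1)                      # real diagonal entry
              for y in range(x + 1, min(n, x + W)):             # upper band: 0 < y - x < W
                  z = rng.uniform(-1, 1) + 1j * rng.uniform(-1, 1)
                  H[x, y] = z
                  H[y, x] = np.conj(z)                          # Hermitian mirror
          return H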

  13. Removal of Stationary Sinusoidal Noise from Random Vibration Signals.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian; Cap, Jerome S.

    In random vibration environments, sinusoidal line noise may appear in the vibration signal and can affect analysis of the resulting data. We studied two methods which remove stationary sine tones from random noise: a matrix inversion algorithm and a chirp-z transform algorithm. In addition, we developed new methods to determine the frequency of the tonal noise. The results show that both of the removal methods can eliminate sine tones in prefabricated random vibration data when the sine-to-random ratio is at least 0.25. For smaller ratios down to 0.02 only the matrix inversion technique can remove the tones, but the metrics to evaluate its effectiveness also degrade. We also found that using fast Fourier transforms best identified the tonal noise, and determined that band-pass-filtering the signals prior to the process improved sine removal. When applied to actual vibration test data, the methods were not as effective at removing harmonic tones, which we believe to be a result of mixed-phase sinusoidal noise.

  14. Statistical properties of the stock and credit market: RMT and network topology

    NASA Astrophysics Data System (ADS)

    Lim, Kyuseong; Kim, Min Jae; Kim, Sehyun; Kim, Soo Yong

    We analyzed the dependence structure of the credit and stock markets using random matrix theory and network topology. The dynamics of both markets have been spotlighted throughout the subprime crisis. In this study, we compared these two markets in view of the market-wide effect from random matrix theory and eigenvalue analysis. We found that the largest eigenvalue of the credit market as a whole preceded that of the stock market at the beginning of the financial crisis, and that the largest eigenvalues of the two markets tended to be synchronized after the crisis. The correlation between the companies of both markets became considerably stronger after the crisis as well.

  15. An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions

    ERIC Educational Resources Information Center

    Radhakrishnan, R.; Choudhury, Askar

    2009-01-01

    Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…

  16. On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models

    NASA Astrophysics Data System (ADS)

    Khorunzhiy, O.

    2008-08-01

    Regarding the adjacency matrices of n-vertex graphs and related graph Laplacian we introduce two families of discrete matrix models constructed both with the help of the Erdős-Rényi ensemble of random graphs. Corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered as discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.

  17. Least-squares analysis of the Mueller matrix.

    PubMed

    Reimer, Michael; Yevick, David

    2006-08-15

    In a single-mode fiber excited by light with a fixed polarization state, the output polarizations obtained at two different optical frequencies are related by a Mueller matrix. We examine least-squares procedures for estimating this matrix from repeated measurements of the output Stokes vector for a random set of input polarization states. We then apply these methods to the determination of polarization mode dispersion and polarization-dependent loss in an optical fiber. We find that a relatively simple formalism leads to results that are comparable with those of far more involved techniques.

  18. Discovering cell types in flow cytometry data with random matrix theory

    NASA Astrophysics Data System (ADS)

    Shen, Yang; Nussenblatt, Robert; Losert, Wolfgang

    Flow cytometry is a widely used experimental technique in immunology research. During the experiments, peripheral blood mononuclear cells (PBMC) from a single patient, labeled with multiple fluorescent stains that bind to different proteins, are illuminated by a laser. The intensity of each stain on a single cell is recorded and reflects the amount of protein expressed by that cell. The data analysis focuses on identifying specific cell types related to a disease. Different cell types can be identified by the type and amount of protein they express. To date, this has most often been done manually by labelling a protein as expressed or not while ignoring the amount of expression. Using a cross correlation matrix of stain intensities, which contains both information on the proteins expressed and their amount, has been largely ignored by researchers as it suffers from measurement noise. Here we present an algorithm to identify cell types in flow cytometry data which uses random matrix theory (RMT) to reduce noise in a cross correlation matrix. We demonstrate our method using a published flow cytometry data set. Compared with previous analysis techniques, we were able to rediscover relevant cell types in an automatic way.

  19. Temporal evolution of financial-market correlations.

    PubMed

    Fenn, Daniel J; Porter, Mason A; Williams, Stacy; McDonald, Mark; Johnson, Neil F; Jones, Nick S

    2011-08-01

    We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007-2008 credit and liquidity crisis.

  1. Statistics of time delay and scattering correlation functions in chaotic systems. I. Random matrix theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novaes, Marcel

    2015-06-15

    We consider the statistics of time delay in a chaotic cavity having M open channels, in the absence of time-reversal invariance. In the random matrix theory approach, we compute the average value of polynomial functions of the time delay matrix Q = -iħ S†(dS/dE), where S is the scattering matrix. Our results do not assume M to be large. In a companion paper, we develop a semiclassical approximation to S-matrix correlation functions, from which the statistics of Q can also be derived. Together, these papers contribute to establishing the conjectured equivalence between the random matrix and the semiclassical approaches.

  2. Carbon nanotubes within polymer matrix can synergistically enhance mechanical energy dissipation

    NASA Astrophysics Data System (ADS)

    Ashraf, Taimoor; Ranaiefar, Meelad; Khatri, Sumit; Kavosi, Jamshid; Gardea, Frank; Glaz, Bryan; Naraghi, Mohammad

    2018-03-01

    Safe operation and health of structures rely on their ability to effectively dissipate undesired vibrations, which could otherwise significantly reduce the lifetime of a structure due to fatigue loads or large deformations. To address this issue, nanoscale fillers, such as carbon nanotubes (CNTs), have been utilized to dissipate mechanical energy in polymer-based nanocomposites through filler-matrix interfacial friction, benefitting from their large interface area with the matrix. In this manuscript, for the first time, we experimentally investigate the effect of CNT alignment with respect to each other and their orientation with respect to the loading direction on vibrational damping in nanocomposites. The matrix was polystyrene (PS). A new technique was developed to fabricate PS-CNT nanocomposites which allows for controlling the angle of CNTs with respect to the far-field loading direction (misalignment angle). Samples were subjected to dynamic mechanical analysis, and the damping of the samples was measured as the ratio of the loss to storage moduli versus CNT misalignment angle. Our results defied the notion that randomly oriented CNT nanocomposites can be approximated as a combination of matrix-CNT representative volume elements with randomly aligned CNTs. Instead, our results point to major contributions to vibrational damping from the stress concentration induced by each CNT in the matrix in proximity to other CNTs. The stress fields around CNTs in PS-CNT nanocomposites were studied via finite element analysis. Our findings provide significant new insights not only on vibrational damping in nanocomposites, but also on their failure modes and toughness, in relation to interface phenomena.

  3. Random matrix ensembles for many-body quantum systems

    NASA Astrophysics Data System (ADS)

    Vyas, Manan; Seligman, Thomas H.

    2018-04-01

    Classical random matrix ensembles were originally introduced in physics to approximate quantum many-particle nuclear interactions. However, there exists a plethora of quantum systems whose dynamics is explained in terms of few-particle (predominantly two-particle) interactions. The random matrix models incorporating the few-particle nature of interactions are known as embedded random matrix ensembles. In the present paper, we provide a brief overview of these two ensembles and illustrate how the embedded ensembles can be successfully used to study decoherence of a qubit interacting with an environment, both for fermionic and bosonic embedded ensembles. Numerical calculations show the dependence of decoherence on the nature of the environment.

  4. Measuring order in disordered systems and disorder in ordered systems: Random matrix theory for isotropic and nematic liquid crystals and its perspective on pseudo-nematic domains

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Stratt, Richard M.

    2018-05-01

    Surprisingly long-ranged intermolecular correlations begin to appear in isotropic (orientationally disordered) phases of liquid crystal forming molecules when the temperature or density starts to close in on the boundary with the nematic (ordered) phase. Indeed, the presence of slowly relaxing, strongly orientationally correlated, sets of molecules under putatively disordered conditions ("pseudo-nematic domains") has been apparent for some time from light-scattering and optical-Kerr experiments. Still, a fully microscopic characterization of these domains has been lacking. We illustrate in this paper how pseudo-nematic domains can be studied in even relatively small computer simulations by looking for order-parameter tensor fluctuations much larger than one would expect from random matrix theory. To develop this idea, we show that random matrix theory offers an exact description of how the probability distribution for liquid-crystal order parameter tensors converges to its macroscopic-system limit. We then illustrate how domain properties can be inferred from finite-size-induced deviations from these random matrix predictions. A straightforward generalization of time-independent random matrix theory also allows us to prove that the analogous random matrix predictions for the time dependence of the order-parameter tensor are similarly exact in the macroscopic limit, and that relaxation behavior of the domains can be seen in the breakdown of the finite-size scaling required by that random-matrix theory.

  5. Random Matrix Theory and Econophysics

    NASA Astrophysics Data System (ADS)

    Rosenow, Bernd

    2000-03-01

    Random Matrix Theory (RMT) [1] is used in many branches of physics as a ``zero information hypothesis''. It describes generic behavior of different classes of systems, while deviations from its universal predictions allow one to identify system specific properties. We use methods of RMT to analyze the cross-correlation matrix C of stock price changes [2] of the largest 1000 US companies. In addition to its scientific interest, the study of correlations between the returns of different stocks is also of practical relevance in quantifying the risk of a given stock portfolio. We find [3,4] that the statistics of most of the eigenvalues of the spectrum of C agree with the predictions of RMT, while there are deviations for some of the largest eigenvalues. We interpret these deviations as a system specific property, e.g. containing genuine information about correlations in the stock market. We demonstrate that C shares universal properties with the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large ratios at both edges of the eigenvalue spectrum - a situation reminiscent of localization theory results. This work was done in collaboration with V. Plerou, P. Gopikrishnan, T. Guhr, L.A.N. Amaral, and H.E. Stanley and is related to recent work of Laloux et al. 1. T. Guhr, A. Müller Groeling, and H.A. Weidenmüller, ``Random Matrix Theories in Quantum Physics: Common Concepts'', Phys. Rep. 299, 190 (1998). 2. See, e.g. R.N. Mantegna and H.E. Stanley, Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, England, 1999). 3. V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Universal and Nonuniversal Properties of Cross Correlations in Financial Time Series'', Phys. Rev. Lett. 83, 1471 (1999). 4. V. Plerou, P. Gopikrishnan, T. Guhr, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Random Matrix Theory Analysis of Diffusion in Stock Price Dynamics'', preprint.

  6. On the Extraction of Components and the Applicability of the Factor Model.

    ERIC Educational Resources Information Center

    Dziuban, Charles D.; Harris, Chester W.

    A reanalysis of Shaycroft's matrix of intercorrelations of 10 test variables plus 4 random variables is discussed. Three different procedures were used in the reanalysis: (1) Image Component Analysis, (2) Uniqueness Rescaling Factor Analysis, and (3) Alpha Factor Analysis. The results of these analyses are presented in tables. It is concluded from…

  7. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
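
    A minimal numerical sketch of the ensemble studied above: two independent Haar-random bipartite pure states on C^n ⊗ C^m are drawn, the m-dimensional factor is traced out of each, and the spectrum of the difference of the two reduced density matrices is collected. Dimensions and sample counts are illustrative, and the sketch only produces the empirical eigenvalue histogram, not the closed-form densities derived in the paper.

      import numpy as np

      def difference_spectrum(n=8, m=32, samples=2000, seed=0):
          """Eigenvalues of rho_A - sigma_A, where rho_A and sigma_A are reduced states
          of two independent Haar-random bipartite pure states on C^n (x) C^m."""
          rng = np.random.default_rng(seed)
          eigs = []
          for _ in range(samples):
              rho = _random_reduced_state(n, m, rng)
              sigma = _random_reduced_state(n, m, rng)
              eigs.append(np.linalg.eigvalsh(rho - sigma))
          return np.concatenate(eigs)

      def _random_reduced_state(n, m, rng):
          psi = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
          psi /= np.linalg.norm(psi)                    # normalized pure state on C^n (x) C^m
          return psi @ psi.conj().T                     # partial trace over the m-dimensional factor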

  8. The effect of stochastic technique on estimates of population viability from transition matrix models

    USGS Publications Warehouse

    Kaye, T.N.; Pyke, David A.

    2003-01-01

    Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5–10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust. Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
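
    A minimal sketch of the matrix-selection method described above, in which one whole observed transition matrix is drawn at random at each time step and the stochastic growth rate is estimated as the average one-step log growth. The run length, burn-in, and starting population vector are illustrative choices; the element-selection variants and column-sum constraints compared in the study are not reproduced here.

      import numpy as np

      def stochastic_growth_rate(matrices, n_steps=50000, seed=0):
          """Estimate log(lambda_s) by randomly selecting one observed matrix per time step."""
          rng = np.random.default_rng(seed)
          n = matrices[0].shape[0]
          pop = np.full(n, 1.0 / n)
          log_growth = []
          for _ in range(n_steps):
              A = matrices[rng.integers(len(matrices))]     # whole-matrix selection
              new_pop = A @ pop
              total = new_pop.sum()
              log_growth.append(np.log(total))              # one-step growth of the normalized vector
              pop = new_pop / total                         # renormalize to avoid overflow/underflow
          return np.mean(log_growth[1000:])                 # discard a burn-in period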

  9. Study on the algorithm of computational ghost imaging based on discrete Fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on the discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed, and simulations are carried out to verify the theoretical analysis. When the number of sampling measurements is close to the number of object pixels, the rank of the discrete Fourier transform matrix equals that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the FGI reconstruction decreases slowly, while the PSNR of the PGI and CGI reconstructions decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize reconstruction denoising with a higher denoising capability than the CGI algorithm. The FGI algorithm can therefore improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
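
    As a rough illustration of the reconstruction scheme (pixel count, noise level and the cosine/sine pattern construction are assumptions of this sketch, not the authors' implementation), the code below builds a deterministic Fourier-type measurement matrix and recovers the object with the pseudo-inverse.

```python
import numpy as np

# Minimal sketch: ghost imaging with a deterministic Fourier-type measurement
# matrix (cosine and sine illumination patterns) and pseudo-inverse recovery.
rng = np.random.default_rng(1)

n = 64                                   # number of object pixels (hypothetical)
x = rng.random(n)                        # unknown object reflectivity

k = np.arange(n)
freqs = np.arange(n // 2 + 1)
cos_rows = np.cos(2 * np.pi * np.outer(freqs, k) / n)
sin_rows = np.sin(2 * np.pi * np.outer(freqs[1:-1], k) / n)
Phi = np.vstack([cos_rows, sin_rows])    # n x n real Fourier measurement matrix

y = Phi @ x + 0.01 * rng.standard_normal(Phi.shape[0])  # noisy bucket signals
x_hat = np.linalg.pinv(Phi) @ y                         # pseudo-inverse recovery

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```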

  10. Instability of Hierarchical Cluster Analysis Due to Input Order of the Data: The PermuCLUSTER Solution

    ERIC Educational Resources Information Center

    van der Kloot, Willem A.; Spaans, Alexander M. J.; Heiser, Willem J.

    2005-01-01

    Hierarchical agglomerative cluster analysis (HACA) may yield different solutions under permutations of the input order of the data. This instability is caused by ties, either in the initial proximity matrix or arising during agglomeration. The authors recommend repeating the analysis on a large number of random permutations of the rows and columns…

  11. Staggered chiral random matrix theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, James C.

    2011-02-01

    We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.

  12. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors

    NASA Astrophysics Data System (ADS)

    Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay

    2017-11-01

    Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d = b²/N = α²/N for large matrix dimensionality N. As d increases, there is a transition from Poisson to classical random matrix statistics.

  13. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors.

    PubMed

    Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay

    2017-11-01

    Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d = b²/N = α²/N for large matrix dimensionality N. As d increases, there is a transition from Poisson to classical random matrix statistics.
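
    A minimal numerical sketch of the Poisson-to-random-matrix crossover in banded random matrices (the ensemble, sizes and the use of the adjacent-spacing ratio are illustrative choices, not the papers' FRCG calculation): as the effective range d = b²/N grows, the mean spacing ratio moves from the Poisson value (≈0.386) towards the GOE value (≈0.536).

```python
import numpy as np

# Minimal sketch: symmetric banded random matrix of bandwidth b, probed by the
# mean adjacent-spacing ratio <r> in the spectral bulk.
rng = np.random.default_rng(2)

def banded_goe(N, b):
    A = rng.standard_normal((N, N))
    A = (A + A.T) / np.sqrt(2)
    mask = np.abs(np.subtract.outer(np.arange(N), np.arange(N))) <= b
    return A * mask

def mean_spacing_ratio(eigs):
    s = np.diff(np.sort(eigs))
    r = s[1:] / s[:-1]
    return np.mean(np.minimum(r, 1.0 / r))

N = 1000
for b in (2, 10, 50):
    evals = np.linalg.eigvalsh(banded_goe(N, b))
    bulk = evals[N // 4: 3 * N // 4]          # stay in the spectral bulk
    print(f"b={b:3d}, d=b^2/N={b**2 / N:6.2f}, <r>={mean_spacing_ratio(bulk):.3f}")
```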

  14. Principal regression analysis and the index leverage effect

    NASA Astrophysics Data System (ADS)

    Reigneron, Pierre-Alain; Allez, Romain; Bouchaud, Jean-Philippe

    2011-09-01

    We revisit the index leverage effect, which can be decomposed into a volatility effect and a correlation effect. We investigate the latter using a matrix regression analysis, which we call 'Principal Regression Analysis' (PRA) and for which we provide some analytical (using Random Matrix Theory) and numerical benchmarks. We find that downward index trends increase the average correlation between stocks (as measured by the most negative eigenvalue of the conditional correlation matrix) and make the market mode more uniform. Upward trends, on the other hand, also increase the average correlation between stocks but rotate the corresponding market mode away from uniformity. There are two time scales associated with these effects: a short one on the order of a month (20 trading days), and a longer one on the order of a year. We also find indications of a leverage effect for sectorial correlations as well, which reveals itself in the second and third modes of the PRA.

  15. Random matrix approach to cross correlations in financial data

    NASA Astrophysics Data System (ADS)

    Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene

    2002-06-01

    We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices C of returns constructed from (i) 30-min returns of 1000 US stocks for the 2-yr period 1994-1995, (ii) 30-min returns of 881 US stocks for the 2-yr period 1996-1997, and (iii) 1-day returns of 422 US stocks for the 35-yr period 1962-1996. We test the statistics of the eigenvalues λi of C against a "null hypothesis": a random correlation matrix constructed from mutually uncorrelated time series. We find that a majority of the eigenvalues of C fall within the RMT bounds [λ-, λ+] for the eigenvalues of random correlation matrices. We test the eigenvalues of C within the RMT bounds for universal properties of random matrices and find good agreement with the results for the Gaussian orthogonal ensemble of random matrices, implying a large degree of randomness in the measured cross-correlation coefficients. Further, we find that the distribution of eigenvector components for the eigenvectors corresponding to eigenvalues outside the RMT bounds displays systematic deviations from the RMT prediction. In addition, we find that these "deviating eigenvectors" are stable in time. We analyze the components of the deviating eigenvectors and find that the largest eigenvalue corresponds to an influence common to all stocks. Our analysis of the remaining deviating eigenvectors shows distinct groups whose identities correspond to conventionally identified business sectors. Finally, we discuss applications to the construction of portfolios of stocks that have a stable ratio of risk to return.
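
    A minimal sketch of the null-hypothesis test described above, on synthetic data rather than the stock databases: eigenvalues of the correlation matrix of mutually uncorrelated time series are compared with the Marchenko-Pastur bounds λ± = (1 ± √(N/T))².

```python
import numpy as np

# Minimal sketch (hypothetical data sizes): eigenvalues of an empirical
# correlation matrix of N uncorrelated return series of length T versus the
# RMT bounds lambda_± = (1 ± sqrt(N/T))^2.
rng = np.random.default_rng(3)

N, T = 400, 1600                      # number of "stocks" and time points
returns = rng.standard_normal((T, N))
C = np.corrcoef(returns, rowvar=False)
eigs = np.linalg.eigvalsh(C)

q = N / T
lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
outside = np.sum((eigs < lam_minus) | (eigs > lam_plus))
print(f"RMT bounds: [{lam_minus:.3f}, {lam_plus:.3f}], eigenvalues outside: {outside}")
```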

  16. Key-Generation Algorithms for Linear Piece In Hand Matrix Method

    NASA Astrophysics Data System (ADS)

    Tadaki, Kohtaro; Tsujii, Shigeo

    The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which can be applied to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in general in an illustrative manner and not for practical use in enhancing the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to the security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms to generate them. In particular, the second one has a concise form and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.

  17. Random matrix theory filters in portfolio optimisation: A stability and risk assessment

    NASA Astrophysics Data System (ADS)

    Daly, J.; Crane, M.; Ruskin, H. J.

    2008-07-01

    Random matrix theory (RMT) filters, applied to covariance matrices of financial returns, have recently been shown to offer improvements to the optimisation of stock portfolios. This paper studies the effect of three RMT filters on the realised portfolio risk, and on the stability of the filtered covariance matrix, using bootstrap analysis and out-of-sample testing. We propose an extension to an existing RMT filter (based on Krzanowski stability), which is observed to reduce risk and increase stability when compared to other RMT filters tested. We also study a scheme for filtering the covariance matrix directly, as opposed to the standard method of filtering the correlation matrix; the latter is found to lower the realised risk, on average, by up to 6.7%. We consider both equally and exponentially weighted covariance matrices in our analysis, and observe that the overall best method out-of-sample was that of the exponentially weighted covariance, with our Krzanowski stability-based filter applied to the correlation matrix. We also find that the optimal out-of-sample decay factors, for both filtered and unfiltered forecasts, were higher than those suggested by Riskmetrics [J.P. Morgan, Reuters, Riskmetrics technical document, Technical Report, 1996. http://www.riskmetrics.com/techdoc.html], with those for the latter approaching a value of α=1. In conclusion, RMT filtering reduced the realised risk, on average, and in the majority of cases when tested out-of-sample, but increased the realised risk on a marked number of individual days, in some cases more than doubling it.

  18. Risk analytics for hedge funds

    NASA Astrophysics Data System (ADS)

    Pareek, Ankur

    2005-05-01

    The rapid growth of the hedge fund industry presents a significant business opportunity for institutional investors, particularly in the form of portfolio diversification. To facilitate this, there is a need to develop a new set of risk analytics for investments consisting of hedge funds, with the ultimate aim of creating transparency in risk measurement without compromising the proprietary investment strategies of hedge funds. As well documented in the literature, the use of dynamic, options-like strategies by most hedge funds makes their returns highly non-normal, with fat tails and high kurtosis, thus rendering Value at Risk (VaR) and other mean-variance analysis methods unsuitable for hedge fund risk quantification. This paper looks at some unique concerns for hedge fund risk management and concentrates on two approaches from the physical world to model the non-linearities and dynamic correlations in hedge fund portfolio returns: Self-Organized Criticality (SOC) and Random Matrix Theory (RMT). Random Matrix Theory analyzes the correlation matrix between different hedge fund styles and filters random noise from genuine correlations arising from interactions within the system. As seen in the results of the portfolio risk analysis, it leads to better forecastability of portfolio risk and thus to optimum allocation of resources to different hedge fund styles. The results also demonstrate the efficacy of self-organized criticality and implied portfolio correlation as tools for risk management and style selection for portfolios of hedge funds, being particularly effective during non-linear market crashes.

  19. Random matrix theory analysis of cross-correlations in the US stock market: Evidence from Pearson’s correlation coefficient and detrended cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Wang, Gang-Jin; Xie, Chi; Chen, Shou; Yang, Jiao-Jiao; Yang, Ming-Yan

    2013-09-01

    In this study, we first build two empirical cross-correlation matrices in the US stock market by two different methods, namely the Pearson's correlation coefficient and the detrended cross-correlation coefficient (DCCA coefficient). Then, combining the two matrices with the method of random matrix theory (RMT), we investigate the statistical properties of cross-correlations in the US stock market. We choose the daily closing prices of 462 constituent stocks of the S&P 500 index as the research objects and select the sample data from January 3, 2005 to August 31, 2012. In the empirical analysis, we examine the statistical properties of the cross-correlation coefficients, the distribution of eigenvalues, the distribution of eigenvector components, and the inverse participation ratio. From the two methods, we find some new results on the cross-correlations in the US stock market that differ from the conclusions reached by previous studies. The empirical cross-correlation matrices constructed by the DCCA coefficient show several interesting properties at different time scales in the US stock market, which are useful for risk management and optimal portfolio selection, especially for the diversity of the asset portfolio. It would be interesting and meaningful to find the theoretical eigenvalue distribution of a completely random matrix R for the DCCA coefficient, because it does not obey the Marčenko-Pastur distribution.

  20. Analysis of stock prices of mining business

    NASA Astrophysics Data System (ADS)

    Ahn, Sanghyun; Lim, G. C.; Kim, S. H.; Kim, Soo Yong; Yoon, Kwon Youb; Stanfield, Joseph Lee; Kim, Kyungsik

    2011-06-01

    Stock exchanges have a diversity of so-called business groups, and much evidence for this has been presented by covariance matrix analysis (Laloux et al. (1999) [6], Plerou et al. (2002) [7], Plerou et al. (1999) [8], Mantegna (1999) [9], Utsugi et al. (2004) [21] and Lim et al. (2009) [26]). A market-wide effect plays a crucial role in shifting the correlation structure from random to non-random. In this work, we study the structural properties of stocks related to the mining industry, especially rare earth minerals, listed on two exchanges, namely the TSX (Toronto stock exchange) and the TSX-V (Toronto stock exchange-ventures). In general, raw-material businesses are sensitive to the global economy while each firm has its own cycle. We show that the global crisis during 2006-2009 affected the mineral market considerably. These two aspects compete to control price fluctuations. We show that the internal cycle overwhelms the global economic environment in terms of random matrix theory and overlapping matrices. However, during the period of 2006-2009, the effect of the global economic environment emerges. This result is well explained by the recent global financial/economic crisis. For comparison, we analyze the time stability of business clusters of the KOSPI, that is, the electric/electronic business, using an overlapping matrix. A clear difference in behavior is confirmed. Consequently, rare earth minerals in the raw-material business should be classified not by standard business classifications but by the internal cycle of the business.

  1. Multichannel Compressive Sensing MRI Using Noiselet Encoding

    PubMed Central

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin

    2015-01-01

    The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548
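
    A minimal sketch of the incoherence criterion underlying this work (noiselets and wavelets are not constructed here; the spike/Fourier and random orthonormal bases are stand-ins chosen for this illustration): the mutual coherence between a measurement basis and a sparsifying basis is computed directly.

```python
import numpy as np

# Minimal sketch: mutual coherence mu(Phi, Psi) = sqrt(n) * max_ij |<phi_i, psi_j>|.
# The spike (identity) vs. Fourier pair is maximally incoherent (mu = 1).
rng = np.random.default_rng(4)
n = 64

Psi = np.eye(n)                                            # spike (identity) basis
Phi_fourier = np.fft.fft(np.eye(n)) / np.sqrt(n)           # unitary DFT basis
Phi_random, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthonormal basis

def mutual_coherence(Phi, Psi):
    return np.sqrt(n) * np.max(np.abs(Phi.conj().T @ Psi))

print("spike vs. Fourier:", mutual_coherence(Phi_fourier, Psi))   # ~1 (best case)
print("spike vs. random :", mutual_coherence(Phi_random, Psi))    # noticeably larger
```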

  2. Unifying model for random matrix theory in arbitrary space dimensions

    NASA Astrophysics Data System (ADS)

    Cicuta, Giovanni M.; Krausser, Johannes; Milkus, Rico; Zaccone, Alessio

    2018-03-01

    A sparse random block matrix model suggested by the Hessian matrix used in the study of elastic vibrational modes of amorphous solids is presented and analyzed. By evaluating some moments, benchmarked against numerics, differences in the eigenvalue spectrum of this model in different limits of space dimension d, and for arbitrary values of the lattice coordination number Z, are shown and discussed. As a function of these two parameters (and their ratio Z/d), the most studied models in random matrix theory (Erdős-Rényi graphs, effective medium, and replicas) can be reproduced in the various limits of block dimensionality d. Remarkably, the Marchenko-Pastur spectral density (which is recovered by replica calculations for the Laplacian matrix) is reproduced exactly in the limit of infinite size of the blocks, or d → ∞, which clarifies the physical meaning of space dimension in these models. We feel that the approximate results for d = 3 provided by our method may have many potential applications in the future, from the vibrational spectrum of glasses and elastic networks to wave localization, disordered conductors, random resistor networks, and random walks.

  3. The feasibility and stability of large complex biological networks: a random matrix approach.

    PubMed

    Stone, Lewi

    2018-05-29

    In the 1970s, Robert May demonstrated that complexity creates instability in generic models of ecological networks having random interaction matrices A. Similar random matrix models have since been applied in many disciplines. Central to assessing stability is the "circular law", since it describes the eigenvalue distribution for an important class of random matrices A. However, despite widespread adoption, the "circular law" does not apply for ecological systems in which density-dependence operates (i.e., where a species' growth is determined by its density). Instead, one needs to study the far more complicated eigenvalue distribution of the community matrix S = DA, where D is a diagonal matrix of population equilibrium values. Here we obtain this eigenvalue distribution. We show that if the random matrix A is locally stable, the community matrix S = DA will also be locally stable, provided the system is feasible (i.e., all species have positive equilibria, D > 0). This helps explain why, unusually, nearly all feasible systems studied here are locally stable. Large complex systems may thus be even more fragile than May predicted, given the difficulty of assembling a feasible system. It was also found that the degree of stability, or resilience, of a system depended on the minimum equilibrium population.
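
    A minimal sketch of the stability question discussed above (connectance, interaction strength and equilibrium values are arbitrary assumptions of this sketch): a random interaction matrix A with self-regulation is combined with positive equilibria D, and local stability of A and of the community matrix S = DA is checked from the eigenvalue real parts.

```python
import numpy as np

# Minimal sketch: May-style random interaction matrix A, community matrix S = DA
# with positive equilibria, and local stability via the largest eigenvalue real part.
rng = np.random.default_rng(5)

n, C, sigma = 200, 0.2, 0.05              # species, connectance, interaction strength
A = rng.standard_normal((n, n)) * sigma
A *= rng.random((n, n)) < C               # keep only a fraction C of interactions
np.fill_diagonal(A, -1.0)                 # self-regulation on the diagonal

D = np.diag(rng.random(n))                # positive equilibrium abundances (feasibility)
S = D @ A

print("A locally stable     :", np.max(np.linalg.eigvals(A).real) < 0)
print("S = DA locally stable:", np.max(np.linalg.eigvals(S).real) < 0)
```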

  4. Sensitivity analysis for missing dichotomous outcome data in multi-visit randomized clinical trial with randomization-based covariance adjustment.

    PubMed

    Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde

    2017-01-01

    Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.

  5. MIXREG: a computer program for mixed-effects regression analysis with autocorrelated errors.

    PubMed

    Hedeker, D; Gibbons, R D

    1996-05-01

    MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally-distributed response data including autocorrelated errors. This model can be used for analysis of unbalanced longitudinal data, where individuals may be measured at a different number of timepoints, or even at different timepoints. Autocorrelated errors of a general form or following an AR(1), MA(1), or ARMA(1,1) form are allowable. This model can also be used for analysis of clustered data, where the mixed-effects model assumes data within clusters are dependent. The degree of dependency is estimated jointly with estimates of the usual model parameters, thus adjusting for clustering. MIXREG uses maximum marginal likelihood estimation, utilizing both the EM algorithm and a Fisher-scoring solution. For the scoring solution, the covariance matrix of the random effects is expressed in its Gaussian decomposition, and the diagonal matrix reparameterized using the exponential transformation. Estimation of the individual random effects is accomplished using an empirical Bayes approach. Examples illustrating usage and features of MIXREG are provided.

  6. Finite-time scaling at the Anderson transition for vibrations in solids

    NASA Astrophysics Data System (ADS)

    Beltukov, Y. M.; Skipetrov, S. E.

    2017-11-01

    A model in which a three-dimensional elastic medium is represented by a network of identical masses connected by springs of random strengths and allowed to vibrate only along a selected axis of the reference frame exhibits an Anderson localization transition. To study this transition, we assume that the dynamical matrix of the network is given by a product of a sparse random matrix with real, independent, Gaussian-distributed nonzero entries and its transpose. A finite-time scaling analysis of the system's response to an initial excitation allows us to estimate the critical parameters of the localization transition. The critical exponent is found to be ν = 1.57 ± 0.02, in agreement with previous studies of the Anderson transition belonging to the three-dimensional orthogonal universality class.
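
    A minimal sketch of the dynamical-matrix construction described above (the size and sparsity below are illustrative, and the full finite-time scaling analysis is not reproduced): M = A Aᵀ with A a sparse Gaussian random matrix, so the squared vibrational frequencies are non-negative by construction.

```python
import numpy as np

# Minimal sketch: vibrational dynamical matrix M = A A^T, where A is a sparse
# random matrix with independent Gaussian nonzero entries.
rng = np.random.default_rng(6)

N, p = 1000, 0.005                            # matrix size and sparsity (hypothetical)
A = rng.standard_normal((N, N)) * (rng.random((N, N)) < p)
M = A @ A.T

omega2 = np.linalg.eigvalsh(M)                # squared frequencies
print("smallest eigenvalue (non-negative up to rounding):", omega2[0])
print("largest eigenvalue:", omega2[-1])
```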

  7. Analysis of world terror networks from the reduced Google matrix of Wikipedia

    NASA Astrophysics Data System (ADS)

    El Zant, Samer; Frahm, Klaus M.; Jaffrès-Runser, Katia; Shepelyansky, Dima L.

    2018-01-01

    We apply the reduced Google matrix method to analyze interactions between 95 terrorist groups and determine their relationships and influence on 64 world countries. This is done on the basis of the Google matrix of the English Wikipedia (2017), composed of 5 416 537 articles which accumulate a great part of global human knowledge. The reduced Google matrix takes into account the direct and hidden links between a selection of 159 nodes (articles) appearing due to all paths of a random surfer moving over the whole network. As a result we obtain the network structure of terrorist groups and their relations with selected countries, including hidden indirect links. Using the sensitivity of PageRank to a weight variation of specific links, we determine the geopolitical sensitivity and influence of specific terrorist groups on world countries. World maps of the sensitivity of various countries to the influence of specific terrorist groups are obtained. We argue that this approach can find useful application in more extensive and detailed database analysis.
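
    A minimal sketch of the Google matrix and PageRank machinery that the reduced-Google-matrix method builds on, using a hypothetical four-node graph instead of Wikipedia:

```python
import numpy as np

# Minimal sketch: Google matrix G = alpha * S + (1 - alpha)/N for a toy directed
# graph, and its PageRank vector obtained by power iteration.
alpha = 0.85
adj = np.array([[0, 1, 1, 0],      # hypothetical adjacency: row i links to column j
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)

N = adj.shape[0]
out_deg = adj.sum(axis=1, keepdims=True)   # every node here has at least one outlink
S = adj / out_deg                          # row-stochastic link matrix
G = alpha * S + (1 - alpha) / N            # Google matrix (rows sum to one)

p = np.ones(N) / N
for _ in range(200):                       # power iteration for the stationary vector
    p = p @ G
print("PageRank:", p / p.sum())
```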

  8. Numerical simulation of elasto-plastic deformation of composites: evolution of stress microfields and implications for homogenization models

    NASA Astrophysics Data System (ADS)

    González, C.; Segurado, J.; LLorca, J.

    2004-07-01

    The deformation of a composite made up of a random and homogeneous dispersion of elastic spheres in an elasto-plastic matrix was simulated by the finite element analysis of three-dimensional multiparticle cubic cells with periodic boundary conditions. "Exact" results (to a few percent) in tension and shear were determined by averaging 12 stress-strain curves obtained from cells containing 30 spheres, and they were compared with the predictions of secant homogenization models. In addition, the numerical simulations supplied detailed information of the stress microfields, which was used to ascertain the accuracy and the limitations of the homogenization models to include the nonlinear deformation of the matrix. It was found that secant approximations based on the volume-averaged second-order moment of the matrix stress tensor, combined with a highly accurate linear homogenization model, provided excellent predictions of the composite response when the matrix strain hardening rate was high. This was not the case, however, in composites which exhibited marked plastic strain localization in the matrix. The analysis of the evolution of the matrix stresses revealed that better predictions of the composite behavior can be obtained with new homogenization models which capture the essential differences in the stress carried by the elastic and plastic regions in the matrix at the onset of plastic deformation.

  9. Prognostic interaction patterns in diabetes mellitus II: A random-matrix-theory relation

    NASA Astrophysics Data System (ADS)

    Rai, Aparna; Pawar, Amit Kumar; Jalan, Sarika

    2015-08-01

    We analyze protein-protein interactions in diabetes mellitus II and its normal counterpart under the combined framework of random matrix theory and network biology. This disease is the fifth-leading cause of death in high-income countries and an epidemic in developing countries, affecting around 8 % of the total adult population in the world. Treatment at the advanced stage is difficult and challenging, making early detection a high priority in the cure of the disease. Our investigation reveals specific structural patterns important for the occurrence of the disease. In addition to the structural parameters, the spectral properties reveal the top contributing nodes from localized eigenvectors, which turn out to be significant for the occurrence of the disease. Our analysis is time-efficient and cost-effective, bringing a new horizon in the field of medicine by highlighting major pathways involved in the disease. The analysis provides a direction for the development of novel drugs and therapies in curing the disease by targeting specific interaction patterns instead of a single protein.

  10. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhary, Kenny; Najm, Habib N.

    One of the most widely-used statistical procedures for dimensionality reduction of high dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.

  11. Bayesian estimation of Karhunen–Loève expansions; A random subspace approach

    DOE PAGES

    Chowdhary, Kenny; Najm, Habib N.

    2016-04-13

    One of the most widely-used statistical procedures for dimensionality reduction of high dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.
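
    A minimal sketch of the classical (non-Bayesian) KLE/PCA step that the paper starts from, on hypothetical Brownian-motion-like samples: the basis functions are estimated by an SVD of the centered data matrix, with few samples relative to the field dimension.

```python
import numpy as np

# Minimal sketch: estimate KLE/PCA basis functions of a random field from a
# small number of samples via an SVD of the centered data matrix.
rng = np.random.default_rng(10)

n_samples, n_grid = 30, 200                     # few samples, high dimension
# hypothetical Brownian-motion-like realizations as the random field samples
X = np.cumsum(rng.standard_normal((n_samples, n_grid)) / np.sqrt(n_grid), axis=1)

Xc = X - X.mean(axis=0)                         # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigvals = s**2 / (n_samples - 1)                # covariance eigenvalues
modes = Vt                                      # estimated KLE basis functions

print("fraction of variance in first 3 modes:",
      eigvals[:3].sum() / eigvals.sum())
```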

  12. An analysis of random projection for changeable and privacy-preserving biometric verification.

    PubMed

    Wang, Yongjin; Plataniotis, Konstantinos N

    2010-10-01

    Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
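
    A minimal sketch of the random-projection transform analyzed above (dimensions, feature vectors and noise level are hypothetical placeholders): a Gaussian random matrix approximately preserves distances, and re-drawing the matrix re-issues a new ("changeable") template.

```python
import numpy as np

# Minimal sketch: Gaussian random projection of a feature vector; pairwise
# distances are approximately preserved after projection.
rng = np.random.default_rng(7)

d, k = 1024, 128                              # original and projected dimensions
x1 = rng.random(d)                            # hypothetical biometric feature vector
x2 = x1 + 0.05 * rng.standard_normal(d)       # a genuine sample of the same user

R = rng.standard_normal((k, d)) / np.sqrt(k)  # i.i.d. Gaussian RP matrix
y1, y2 = R @ x1, R @ x2

print("original distance :", np.linalg.norm(x1 - x2))
print("projected distance:", np.linalg.norm(y1 - y2))   # close to the original
```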

  13. [Three-dimensional parallel collagen scaffold promotes tendon extracellular matrix formation].

    PubMed

    Zheng, Zefeng; Shen, Weiliang; Le, Huihui; Dai, Xuesong; Ouyang, Hongwei; Chen, Weishan

    2016-03-01

    To investigate the effects of a three-dimensional parallel collagen scaffold on the cell shape, arrangement and extracellular matrix formation of tendon stem cells. The parallel collagen scaffold was fabricated by a unidirectional freezing technique, while the random collagen scaffold was fabricated by a freeze-drying technique. The effects of the two scaffolds on cell shape and extracellular matrix formation were investigated in vitro by seeding tendon stem/progenitor cells and in vivo by ectopic implantation. Parallel and random collagen scaffolds were produced successfully. The parallel collagen scaffold was more akin to tendon than the random collagen scaffold. Tendon stem/progenitor cells were spindle-shaped and uniformly oriented in the parallel collagen scaffold, while cells on the random collagen scaffold had disordered orientation. Two weeks after ectopic implantation, cells had nearly the same orientation as the collagen substance. In the parallel collagen scaffold, cells had a parallel arrangement, and more spindly cells were observed. By contrast, cells in the random collagen scaffold were disordered. The parallel collagen scaffold can induce cells into a spindly and parallel arrangement and promote parallel extracellular matrix formation, while the random collagen scaffold induces a random cell arrangement. The results indicate that the parallel collagen scaffold is an ideal structure to promote tendon repair.

  14. Spectrum of walk matrix for Koch network and its application

    NASA Astrophysics Data System (ADS)

    Xie, Pinchen; Lin, Yuan; Zhang, Zhongzhi

    2015-06-01

    Various structural and dynamical properties of a network are encoded in the eigenvalues of the walk matrix describing random walks on the network. In this paper, we study the spectra of the walk matrix of the Koch network, which displays the prominent scale-free and small-world features. Utilizing the particular architecture of the network, we obtain all the eigenvalues and their corresponding multiplicities. Based on the link between the eigenvalues of the walk matrix and the random target access time, defined as the expected time for a walker to go from an arbitrary node to another node selected randomly according to the steady-state distribution, we then derive an explicit solution for the random target access time for random walks on the Koch network. Finally, we corroborate our computation of the eigenvalues by enumerating spanning trees in the Koch network, using the connection between eigenvalues and spanning trees, where a spanning tree of a network is a subgraph of the network that is a tree containing all the nodes.
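
    A minimal sketch of the eigenvalue/target-time link on a toy graph rather than the Koch network (the identification of the random target access time with Kemeny's constant, and the fundamental-matrix cross-check, are standard results used here as assumptions of this sketch):

```python
import numpy as np

# Minimal sketch: random target access time (Kemeny's constant) of a random walk,
# from the eigenvalues of the walk matrix P = D^{-1} A and cross-checked via the
# fundamental matrix Z = (I - P + 1 pi^T)^{-1}, for which K = trace(Z) - 1.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)      # small undirected test graph
deg = A.sum(axis=1)
P = A / deg[:, None]                            # walk matrix (row-stochastic)

eigs = np.sort(np.linalg.eigvals(P).real)[::-1] # real spectrum (reversible chain)
K_eig = np.sum(1.0 / (1.0 - eigs[1:]))          # sum over the non-unit eigenvalues

pi = deg / deg.sum()                            # stationary distribution
Z = np.linalg.inv(np.eye(len(A)) - P + np.outer(np.ones(len(A)), pi))
K_fund = np.trace(Z) - 1

print("target access time from eigenvalues       :", K_eig)
print("target access time from fundamental matrix:", K_fund)
```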

  15. Global financial indices and twitter sentiment: A random matrix theory approach

    NASA Astrophysics Data System (ADS)

    García, A.

    2016-11-01

    We use a Random Matrix Theory (RMT) approach to analyze the correlation matrix structure of a collection of public tweets and the corresponding return time series associated with 20 global financial indices along 7 trading months of 2014. In order to quantify the collection of tweets, we constructed daily polarity time series from public tweets via sentiment analysis. The results from the RMT analysis support the existence of true correlations between financial indices, polarities, and the mixture of them. Moreover, we found good agreement between the temporal behavior of the extreme eigenvalues of both empirical data sets, and similar results were found when computing the inverse participation ratio, which provides evidence of the emergence of common factors in global financial information whether we use the return or the polarity data as the source. In addition, we found a very strong presumption that polarity Granger-causes returns of an Indonesian index for a long range of lag trading days, whereas for Israel, South Korea, Australia, and Japan, the predictive information for returns is also present but with less presumption. Our results suggest that incorporating polarity as a financial indicator may open up new insights to understand the collective and even individual behavior of global financial indices.

  16. Phase diagram of matrix compressed sensing

    NASA Astrophysics Data System (ADS)

    Schülke, Christophe; Schniter, Philip; Zdeborová, Lenka

    2016-12-01

    In the problem of matrix compressed sensing, we aim to recover a low-rank matrix from a few noisy linear measurements. In this contribution, we analyze the asymptotic performance of a Bayes-optimal inference procedure for a model where the matrix to be recovered is a product of random matrices. The results that we obtain using the replica method describe the state evolution of the Parametric Bilinear Generalized Approximate Message Passing (P-BiG-AMP) algorithm, recently introduced in J. T. Parker and P. Schniter [IEEE J. Select. Top. Signal Process. 10, 795 (2016), 10.1109/JSTSP.2016.2539123]. We show the existence of two different types of phase transition and their implications for the solvability of the problem, and we compare the results of our theoretical analysis to the numerical performance reached by P-BiG-AMP. Remarkably, the asymptotic replica equations for matrix compressed sensing are the same as those for a related but formally different problem of matrix factorization.

  17. New poly(butylene succinate)/layered silicate nanocomposites: preparation and mechanical properties.

    PubMed

    Ray, Suprakas Sinha; Okamoto, Kazuaki; Maiti, Pralay; Okamoto, Masami

    2002-04-01

    New poly(butylene succinate) (PBS)/layered silicate nanocomposites have been successfully prepared by simple melt extrusion of PBS and octadecylammonium modified montmorillonite (C18-mmt) at 150 degrees C. The d-spacing of both C18-mmt and intercalated nanocomposites was investigated by wide-angle X-ray diffraction analysis. Bright-field transmission electron microscopic study showed several stacked silicate layers with random orientation in the PBS matrix. The intercalated nanocomposites exhibited remarkable improvement of mechanical properties in both solid and melt states as compared with that of PBS matrix without clay.

  18. Graphic matching based on shape contexts and reweighted random walks

    NASA Astrophysics Data System (ADS)

    Zhang, Mingxuan; Niu, Dongmei; Zhao, Xiuyang; Liu, Mingjun

    2018-04-01

    Graphic matching is a critical issue in many aspects of computer vision. In this paper, a new graphic matching algorithm combining shape contexts and reweighted random walks is proposed. Building on the local descriptor, shape contexts, the reweighted random walks algorithm is modified to achieve stronger robustness and correctness in the final result. Our main idea is to use the shape context descriptor during the random walk iterations in order to control the random walk probability matrix. We calculate a bias matrix from the descriptors and use it during the iterations to improve the accuracy of the random walks and random jumps; finally, we obtain the one-to-one registration result by discretizing the matrix. The algorithm not only preserves the noise robustness of reweighted random walks but also possesses the rotation, translation, and scale invariance of shape contexts. Through extensive experiments based on real images and random synthetic point sets, and comparisons with other algorithms, it is confirmed that this new method can produce excellent results in graphic matching.

  19. Influences of system uncertainties on the numerical transfer path analysis of engine systems

    NASA Astrophysics Data System (ADS)

    Acri, A.; Nijman, E.; Acri, A.; Offner, G.

    2017-10-01

    Practical mechanical systems operate with some degree of uncertainty. In numerical models uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs or from rapidly changing forcing that can be best described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In particular in this paper, Wishart random matrix theory is applied on a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool largely implemented during the design of new engines. In this paper the influence of model parameter variability on the results obtained from the multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and the assessment of the different engine vibration sources. Examples of the effects of different levels of uncertainties are illustrated by means of examples using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and statistical distribution of results. The derived statistical information can be used to advance the knowledge of the multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.

  20. Random matrices and the New York City subway system

    NASA Astrophysics Data System (ADS)

    Jagannath, Aukosh; Trogdon, Thomas

    2017-09-01

    We analyze subway arrival times in the New York City subway system. We find regimes where the gaps between trains are well modeled by (unitarily invariant) random matrix statistics and Poisson statistics. The departure from random matrix statistics is captured by the value of the Coulomb potential along the subway route. This departure becomes more pronounced as trains make more stops.

  1. A random matrix approach to language acquisition

    NASA Astrophysics Data System (ADS)

    Nicolaidis, A.; Kosmidis, Kosmas; Argyrakis, Panos

    2009-12-01

    Since language is tied to cognition, we expect the linguistic structures to reflect patterns that we encounter in nature and are analyzed by physics. Within this realm we investigate the process of lexicon acquisition, using analytical and tractable methods developed within physics. A lexicon is a mapping between sounds and referents of the perceived world. This mapping is represented by a matrix and the linguistic interaction among individuals is described by a random matrix model. There are two essential parameters in our approach. The strength of the linguistic interaction β, which is considered as a genetically determined ability, and the number N of sounds employed (the lexicon size). Our model of linguistic interaction is analytically studied using methods of statistical physics and simulated by Monte Carlo techniques. The analysis reveals an intricate relationship between the innate propensity for language acquisition β and the lexicon size N, N~exp(β). Thus a small increase of the genetically determined β may lead to an incredible lexical explosion. Our approximate scheme offers an explanation for the biological affinity of different species and their simultaneous linguistic disparity.

  2. Detecting Seismic Activity with a Covariance Matrix Analysis of Data Recorded on Seismic Arrays

    NASA Astrophysics Data System (ADS)

    Seydoux, L.; Shapiro, N.; de Rosny, J.; Brenguier, F.

    2014-12-01

    Modern seismic networks are recording the ground motion continuously all around the world, with very broadband and high-sensitivity sensors. The aim of our study is to apply statistical array-based approaches to the processing of these records. We mainly use methods from random matrix theory in order to give a statistical description of seismic wavefields recorded at the Earth's surface. We estimate the array covariance matrix and explore the distribution of its eigenvalues, which contains information about the coherency of the sources that generated the studied wavefields. With this approach, we can make distinctions between the signals generated by isolated deterministic sources and the "random" ambient noise. We design an algorithm that uses the distribution of the array covariance matrix eigenvalues to detect signals corresponding to coherent seismic events. We investigate the detection capacity of our methods at different scales and in different frequency ranges by applying it to the records of two networks: (1) the seismic monitoring network operating on the Piton de la Fournaise volcano at La Réunion island, composed of 21 receivers and with an aperture of ~15 km, and (2) the transportable component of the USArray, composed of ~400 receivers with ~70 km inter-station spacing.

  3. Acellular dermal matrix graft with or without enamel matrix derivative for root coverage in smokers: a randomized clinical study.

    PubMed

    Alves, Luciana B; Costa, Priscila P; Scombatti de Souza, Sérgio Luís; de Moraes Grisi, Márcio F; Palioto, Daniela B; Taba, Mario; Novaes, Arthur B

    2012-04-01

    The aim of this randomized controlled clinical study was to compare the use of an acellular dermal matrix graft (ADMG) with or without enamel matrix derivative (EMD) in smokers, to evaluate which procedure would provide better root coverage. Nineteen smokers with bilateral Miller Class I or II gingival recessions ≥3 mm were selected. The test group was treated with a combination of ADMG and EMD, and the control group with ADMG alone. Probing depth, relative clinical attachment level, gingival recession height, gingival recession width, keratinized tissue width and keratinized tissue thickness were evaluated before the surgeries and after 6 months. The Wilcoxon test was used for the statistical analysis at a significance level of 5%. No significant differences were found between groups in any parameter at baseline. The mean gain in recession height between baseline and 6 months and complete root coverage favored the test group (p = 0.042 and p = 0.019, respectively). Smoking may negatively affect the results achieved through periodontal plastic procedures; however, the association of ADMG and EMD is beneficial in the root coverage of gingival recessions in smokers, 6 months after the surgery. © 2012 John Wiley & Sons A/S.

  4. Disentangling giant component and finite cluster contributions in sparse random matrix spectra.

    PubMed

    Kühn, Reimer

    2016-04-01

    We describe a method for disentangling giant component and finite cluster contributions to sparse random matrix spectra, using sparse symmetric random matrices defined on Erdős-Rényi graphs as an example and test bed. Our methods apply to sparse matrices defined in terms of arbitrary graphs in the configuration model class, as long as they have finite mean degree.

  5. High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei

    2018-01-01

    Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst case scenario in which the adversary launches the most powerful quantum attacks. After considering statistical fluctuations and applying an 80 Gb × 45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10⁻⁵. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.
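
    A minimal sketch of Toeplitz-matrix hashing at toy sizes (the experiment's 80 Gb × 45.6 Mb matrix and its security analysis are not reproduced here): a random seed defines a binary Toeplitz matrix that compresses weakly random input bits over GF(2).

```python
import numpy as np
from scipy.linalg import toeplitz

# Minimal sketch: Toeplitz-matrix hashing as a randomness extractor, compressing
# n weakly random input bits into m < n output bits over GF(2).
rng = np.random.default_rng(8)

n, m = 64, 16                                  # input and output lengths (toy values)
seed_col = rng.integers(0, 2, size=m)          # the Toeplitz matrix is defined by
seed_row = rng.integers(0, 2, size=n)          # its first column and first row
T = toeplitz(seed_col, seed_row)               # m x n binary Toeplitz matrix

raw_bits = rng.integers(0, 2, size=n)          # weakly random raw input
extracted = (T @ raw_bits) % 2                 # matrix-vector product mod 2
print("extracted bits:", extracted)
```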

  6. Three-Dimensional Electromagnetic Scattering from Layered Media with Rough Interfaces for Subsurface Radar Remote Sensing

    NASA Astrophysics Data System (ADS)

    Duan, Xueyang

    The objective of this dissertation is to develop forward scattering models for active microwave remote sensing of natural features represented by layered media with rough interfaces. In particular, soil profiles are considered, for which a model of electromagnetic scattering from multilayer rough surfaces with or without buried random media is constructed. Starting from a single rough surface, radar scattering is modeled using the stabilized extended boundary condition method (SEBCM). This method solves the long-standing instability issue of the classical EBCM, and gives three-dimensional full wave solutions over large ranges of surface roughnesses with higher computational efficiency than pure numerical solutions, e.g., method of moments (MoM). Based on this single surface solution, multilayer rough surface scattering is modeled using the scattering matrix approach and the model is used for a comprehensive sensitivity analysis of the total ground scattering as a function of layer separation, subsurface statistics, and sublayer dielectric properties. The buried inhomogeneities such as rocks and vegetation roots are considered for the first time in the forward scattering model. Radar scattering from buried random media is modeled by the aggregate transition matrix using either the recursive transition matrix approach for spherical or short-length cylindrical scatterers, or the generalized iterative extended boundary condition method we developed for long cylinders or root-like cylindrical clusters. These approaches take the field interactions among scatterers into account with high computational efficiency. The aggregate transition matrix is transformed to a scattering matrix for the full solution to the layered-medium problem. This step is based on the near-to-far field transformation of the numerical plane wave expansion of the spherical harmonics and the multipole expansion of plane waves. This transformation consolidates volume scattering from the buried random medium with the scattering from layered structure in general. Combined with scattering from multilayer rough surfaces, scattering contributions from subsurfaces and vegetation roots can be then simulated. Solutions of both the rough surface scattering and random media scattering are validated numerically, experimentally, or both. The experimental validations have been carried out using a laboratory-based transmit-receive system for scattering from random media and a new bistatic tower-mounted radar system for field-based surface scattering measurements.

  7. The breast reconstruction evaluation of acellular dermal matrix as a sling trial (BREASTrial): design and methods of a prospective randomized trial.

    PubMed

    Agarwal, Jayant P; Mendenhall, Shaun D; Anderson, Layla A; Ying, Jian; Boucher, Kenneth M; Liu, Ting; Neumayer, Leigh A

    2015-01-01

    Recent literature has focused on the advantages and disadvantages of using acellular dermal matrix in breast reconstruction. Many of the reported data are from low level-of-evidence studies, leaving many questions incompletely answered. The present randomized trial provides high-level data on the incidence and severity of complications in acellular dermal matrix breast reconstruction between two commonly used types of acellular dermal matrix. A prospective randomized trial was conducted to compare outcomes of immediate staged tissue expander breast reconstruction using either AlloDerm or DermaMatrix. The impact of body mass index, smoking, diabetes, mastectomy type, radiation therapy, and chemotherapy on outcomes was analyzed. Acellular dermal matrix biointegration was analyzed clinically and histologically. Patient satisfaction was assessed by means of preoperative and postoperative surveys. Logistic regression models were used to identify predictors of complications. This article reports on the study design, surgical technique, patient characteristics, and preoperative survey results, with outcomes data in a separate report. After 2.5 years, we successfully enrolled and randomized 128 patients (199 breasts). The majority of patients were healthy nonsmokers, with 41 percent of patients receiving radiation therapy and 49 percent receiving chemotherapy. Half of the mastectomies were prophylactic, with nipple-sparing mastectomy common in both cancer and prophylactic cases. Preoperative survey results indicate that patients were satisfied with their premastectomy breast reconstruction education. Results from the Breast Reconstruction Evaluation Using Acellular Dermal Matrix as a Sling Trial will assist plastic surgeons in making evidence-based decisions regarding acellular dermal matrix-assisted tissue expander breast reconstruction. Therapeutic, II.

  8. Meta-Analysis on Randomized Controlled Trials of Vaccines with QS-21 or ISCOMATRIX Adjuvant: Safety and Tolerability

    PubMed Central

    Bigaeva, Emilia; van Doorn, Eva; Liu, Heng; Hak, Eelko

    2016-01-01

    Background and Objectives QS-21 shows in vitro hemolytic effect and causes side effects in vivo. New saponin adjuvant formulations with better toxicity profiles are needed. This study aims to evaluate the safety and tolerability of QS-21 and the improved saponin adjuvants (ISCOM, ISCOMATRIX and Matrix-M™) from vaccine trials. Methods A systematic literature search was conducted from MEDLINE, EMBASE, Cochrane library and Clinicaltrials.gov. We selected for the meta-analysis randomized controlled trials (RCTs) of vaccines adjuvanted with QS-21, ISCOM, ISCOMATRIX or Matrix-M™, which included a placebo control group and reported safety outcomes. Pooled risk ratios (RRs) and their 95% confidence intervals (CIs) were calculated using a random-effects model. Jadad scale was used to assess the study quality. Results Nine RCTs were eligible for the meta-analysis: six trials on QS-21-adjuvanted vaccines and three trials on ISCOMATRIX-adjuvanted, with 907 patients in total. There were no studies on ISCOM or Matrix-M™ adjuvanted vaccines matching the inclusion criteria. Meta-analysis identified an increased risk for diarrhea in patients receiving QS21-adjuvanted vaccines (RR 2.55, 95% CI 1.04–6.24). No increase in the incidence of the reported systemic AEs was observed for ISCOMATRIX-adjuvanted vaccines. QS-21- and ISCOMATRIX-adjuvanted vaccines caused a significantly higher incidence of injection site pain (RR 4.11, 95% CI 1.10–15.35 and RR 2.55, 95% CI 1.41–4.59, respectively). ISCOMATRIX-adjuvanted vaccines also increased the incidence of injection site swelling (RR 3.43, 95% CI 1.08–10.97). Conclusions Our findings suggest that vaccines adjuvanted with either QS-21 or ISCOMATRIX posed no specific safety concern. Furthermore, our results indicate that the use of ISCOMATRIX enables a better systemic tolerability profile when compared to the use of QS-21. However, no better local tolerance was observed for ISCOMATRIX-adjuvanted vaccines in immunized non-healthy subjects. This meta-analysis is limited by the relatively small number of individuals recruited in the included trials, especially in the control groups. PMID:27149269

  9. Meta-Analysis on Randomized Controlled Trials of Vaccines with QS-21 or ISCOMATRIX Adjuvant: Safety and Tolerability.

    PubMed

    Bigaeva, Emilia; Doorn, Eva van; Liu, Heng; Hak, Eelko

    2016-01-01

    QS-21 shows in vitro hemolytic effect and causes side effects in vivo. New saponin adjuvant formulations with better toxicity profiles are needed. This study aims to evaluate the safety and tolerability of QS-21 and the improved saponin adjuvants (ISCOM, ISCOMATRIX and Matrix-M™) from vaccine trials. A systematic literature search was conducted from MEDLINE, EMBASE, Cochrane library and Clinicaltrials.gov. We selected for the meta-analysis randomized controlled trials (RCTs) of vaccines adjuvanted with QS-21, ISCOM, ISCOMATRIX or Matrix-M™, which included a placebo control group and reported safety outcomes. Pooled risk ratios (RRs) and their 95% confidence intervals (CIs) were calculated using a random-effects model. Jadad scale was used to assess the study quality. Nine RCTs were eligible for the meta-analysis: six trials on QS-21-adjuvanted vaccines and three trials on ISCOMATRIX-adjuvanted, with 907 patients in total. There were no studies on ISCOM or Matrix-M™ adjuvanted vaccines matching the inclusion criteria. Meta-analysis identified an increased risk for diarrhea in patients receiving QS21-adjuvanted vaccines (RR 2.55, 95% CI 1.04-6.24). No increase in the incidence of the reported systemic AEs was observed for ISCOMATRIX-adjuvanted vaccines. QS-21- and ISCOMATRIX-adjuvanted vaccines caused a significantly higher incidence of injection site pain (RR 4.11, 95% CI 1.10-15.35 and RR 2.55, 95% CI 1.41-4.59, respectively). ISCOMATRIX-adjuvanted vaccines also increased the incidence of injection site swelling (RR 3.43, 95% CI 1.08-10.97). Our findings suggest that vaccines adjuvanted with either QS-21 or ISCOMATRIX posed no specific safety concern. Furthermore, our results indicate that the use of ISCOMATRIX enables a better systemic tolerability profile when compared to the use of QS-21. However, no better local tolerance was observed for ISCOMATRIX-adjuvanted vaccines in immunized non-healthy subjects. This meta-analysis is limited by the relatively small number of individuals recruited in the included trials, especially in the control groups.

  10. Partial transpose of random quantum states: Exact formulas and meanders

    NASA Astrophysics Data System (ADS)

    Fukuda, Motohisa; Śniady, Piotr

    2013-04-01

    We investigate the asymptotic behavior of the empirical eigenvalue distribution of the partial transpose of a random quantum state. The limiting distribution was previously investigated via Wishart random matrices indirectly (by approximating the matrix of trace 1 by the Wishart matrix of random trace) and shown to be the semicircular distribution or the free difference of two free Poisson distributions, depending on how dimensions of the concerned spaces grow. Our use of Wishart matrices gives exact combinatorial formulas for the moments of the partial transpose of the random state. We find three natural asymptotic regimes in terms of geodesics on the permutation groups. Two of them correspond to the above two cases; the third one turns out to be a new matrix model for the meander polynomials. Moreover, we prove the convergence to the semicircular distribution together with its extreme eigenvalues under weaker assumptions, and show a large deviation bound for the latter.

  11. RANDOMNESS of Numbers DEFINITION(QUERY:WHAT? V HOW?) ONLY Via MAXWELL-BOLTZMANN CLASSICAL-Statistics(MBCS) Hot-Plasma VS. Digits-Clumping Log-Law NON-Randomness Inversion ONLY BOSE-EINSTEIN QUANTUM-Statistics(BEQS) .

    NASA Astrophysics Data System (ADS)

    Siegel, Z.; Siegel, Edward Carl-Ludwig

    2011-03-01

    RANDOMNESS of Numbers cognitive-semantics DEFINITION VIA Cognition QUERY: WHAT???, NOT HOW?) VS. computer-``science" mindLESS number-crunching (Harrel-Sipser-...) algorithmics Goldreich "PSEUDO-randomness"[Not.AMS(02)] mea-culpa is ONLY via MAXWELL-BOLTZMANN CLASSICAL-STATISTICS(NOT FDQS!!!) "hot-plasma" REPULSION VERSUS Newcomb(1881)-Weyl(1914;1916)-Benford(1938) "NeWBe" logarithmic-law digit-CLUMPING/ CLUSTERING NON-Randomness simple Siegel[AMS Joint.Mtg.(02)-Abs. # 973-60-124] algebraic-inversion to THE QUANTUM and ONLY BEQS preferentially SEQUENTIALLY lower-DIGITS CLUMPING/CLUSTERING with d = 0 BEC, is ONLY VIA Siegel-Baez FUZZYICS=CATEGORYICS (SON OF TRIZ)/"Category-Semantics"(C-S), latter intersection/union of Lawvere(1964)-Siegel(1964)] category-theory (matrix: MORPHISMS V FUNCTORS) "+" cognitive-semantics'' (matrix: ANTONYMS V SYNONYMS) yields Siegel-Baez FUZZYICS=CATEGORYICS/C-S tabular list-format matrix truth-table analytics: MBCS RANDOMNESS TRUTH/EMET!!!

  12. QCD-inspired spectra from Blue's functions

    NASA Astrophysics Data System (ADS)

    Nowak, Maciej A.; Papp, Gábor; Zahed, Ismail

    1996-02-01

    We use the law of addition in random matrix theory to analyze the spectral distributions of a variety of chiral random matrix models as inspired from QCD whether through symmetries or models. In terms of the Blue's functions recently discussed by Zee, we show that most of the spectral distributions in the macroscopic limit and the quenched approximation, follow algebraically from the discontinuity of a pertinent solution to a cubic (Cardano) or a quartic (Ferrari) equation. We use the end-point equation of the energy spectra in chiral random matrix models to argue for novel phase structures, in which the Dirac density of states plays the role of an order parameter.

  13. Universality in chaos: Lyapunov spectrum and random matrix theory.

    PubMed

    Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki

    2018-02-01

    We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.

  14. Universality in chaos: Lyapunov spectrum and random matrix theory

    NASA Astrophysics Data System (ADS)

    Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki

    2018-02-01

    We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t =0 , while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.

  15. Deformation analysis of boron/aluminum specimens by moire interferometry

    NASA Technical Reports Server (NTRS)

    Post, Daniel; Guo, Yifan; Czarnek, Robert

    1989-01-01

    Whole-field surface deformations were measured for two slotted tension specimens from multi-ply laminates, one with 0 deg fiber orientation in the surface ply and the other with 45 deg orientation. Macromechanical and micromechanical details were revealed using high-sensitivity moire interferometry. Although global deformations of all plies were essentially equal, numerous random or anomalous features were observed. Local deformations of adjacent 0 deg and 45 deg plies were very different, both near the slot and remote from it, requiring large interlaminar shear strains for continuity. Shear strains were concentrated in the aluminum matrix. For 45 deg plies, a major portion of the deformation was by shear; large plastic slip of matrix occurred at random locations in 45 deg plies, wherein groups of fibers slipped relative to other groups. Shear strains in the interior, between adjacent fibers, were larger than the measured surface strains.

  16. Bayes linear covariance matrix adjustment

    NASA Astrophysics Data System (ADS)

    Wilkinson, Darren J.

    1995-12-01

    In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be amenable to a similar approach. Diagnostics for matrix adjustments are also discussed.

  17. Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices

    NASA Astrophysics Data System (ADS)

    Passemier, Damien; McKay, Matthew R.; Chen, Yang

    2015-07-01

    Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.

  18. Analysis of cross-correlations between financial markets after the 2008 crisis

    NASA Astrophysics Data System (ADS)

    Sensoy, A.; Yuksel, S.; Erturk, M.

    2013-10-01

    We analyze the cross-correlation matrix C of the index returns of the main financial markets after the 2008 crisis using methods of random matrix theory. We test the eigenvalues of C for universal properties of random matrices and find that the majority of the cross-correlation coefficients arise from randomness. We show that the eigenvector of the largest deviating eigenvalue of C represents a global market itself. We reveal that periods of high volatility in financial markets coincide with periods of high correlation between them, which lowers the risk diversification potential even if one constructs a widely internationally diversified portfolio of stocks. We identify and compare the connection and cluster structure of markets before and after the crisis using minimal spanning and ultrametric hierarchical trees. We find that after the crisis, the co-movement degree of the markets increases. We also highlight the key financial markets of the pre- and post-crisis periods using main centrality measures and analyze the changes. We repeat the study using rank correlation and compare the differences. Further implications are discussed.

  19. Asymmetric correlation matrices: an analysis of financial data

    NASA Astrophysics Data System (ADS)

    Livan, G.; Rebecchi, L.

    2012-06-01

    We analyse the spectral properties of correlation matrices between distinct statistical systems. Such matrices are intrinsically non-symmetric, and lend themselves to extending the spectral analyses usually performed on standard Pearson correlation matrices to the realm of complex eigenvalues. We employ some recent random matrix theory results on the average eigenvalue density of this type of matrix to distinguish between noise and non-trivial correlation structures, and we focus on financial data as a case study. Namely, we employ daily prices of stocks belonging to the American and British stock exchanges, and look for the emergence of correlations between two such markets in the eigenvalue spectrum of their non-symmetric correlation matrix. We find several non-trivial results when considering time-lagged correlations over short lags, and we corroborate our findings by additionally studying the asymmetric correlation matrix of the principal components of our datasets.

  20. Accurate Quasiparticle Spectra from the T-Matrix Self-Energy and the Particle-Particle Random Phase Approximation.

    PubMed

    Zhang, Du; Su, Neil Qiang; Yang, Weitao

    2017-07-20

    The GW self-energy, especially G0W0 based on the particle-hole random phase approximation (phRPA), is widely used to study quasiparticle (QP) energies. Motivated by the desirable features of the particle-particle (pp) RPA compared to the conventional phRPA, we explore the pp counterpart of GW, that is, the T-matrix self-energy, formulated with the eigenvectors and eigenvalues of the ppRPA matrix. We demonstrate the accuracy of the T-matrix method for molecular QP energies, highlighting the importance of the pp channel for calculating QP spectra.

  1. Noise sensitivity of portfolio selection in constant conditional correlation GARCH models

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, I.; Kondor, I.

    2007-11-01

    This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying some filtering method on the conditional correlation matrix (such as Random Matrix Theory based filtering). As an empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.
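
    A minimal numerical sketch of the closed-form minimum variance weights underlying the portfolio selection discussed above, w = Σ⁻¹1/(1ᵀΣ⁻¹1); it is generic (any covariance estimate, conditional or unconditional, can be plugged in) and is not the paper's CCC-GARCH simulation. The variable names and toy data are illustrative.

      import numpy as np

      def min_variance_weights(sigma):
          """Closed-form minimum variance portfolio weights w = S^-1 1 / (1' S^-1 1)."""
          ones = np.ones(sigma.shape[0])
          w = np.linalg.solve(sigma, ones)   # solve S w = 1 rather than inverting S
          return w / w.sum()

      # toy example: a noisy sample covariance estimated from T observations of N assets
      rng = np.random.default_rng(0)
      T, N = 500, 20
      returns = 0.01 * rng.standard_normal((T, N))
      sigma_hat = np.cov(returns, rowvar=False)
      w = min_variance_weights(sigma_hat)
      print(w.sum(), float(w @ sigma_hat @ w))   # weights sum to 1; portfolio variance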

  2. Randomized Dynamic Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Erichson, N. Benjamin; Brunton, Steven L.; Kutz, J. Nathan

    2017-11-01

    The dynamic mode decomposition (DMD) is an equation-free, data-driven matrix decomposition that is capable of providing accurate reconstructions of spatio-temporal coherent structures arising in dynamical systems. We present randomized algorithms to compute the near-optimal low-rank dynamic mode decomposition for massive datasets. Randomized algorithms are simple, accurate and able to ease the computational challenges arising with `big data'. Moreover, randomized algorithms are amenable to modern parallel and distributed computing. The idea is to derive a smaller matrix from the high-dimensional input data matrix using randomness as a computational strategy. Then, the dynamic modes and eigenvalues are accurately learned from this smaller representation of the data, whereby the approximation quality can be controlled via oversampling and power iterations. Here, we present randomized DMD algorithms that are categorized by how many passes the algorithm takes through the data. Specifically, the single-pass randomized DMD does not require data to be stored for subsequent passes. Thus, it is possible to approximately decompose massive fluid flows (stored out of core memory, or not stored at all) using single-pass algorithms, which is infeasible with traditional DMD algorithms.
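
    A hedged sketch of a basic randomized DMD in the spirit described above: sketch the snapshot matrix with a Gaussian test matrix, compute the DMD on the compressed data, then lift the modes back. It is not the authors' single-pass algorithm; the oversampling parameter and all names are illustrative.

      import numpy as np

      def randomized_dmd(X, Y, r, p=10, seed=0):
          """Approximate DMD eigenvalues/modes of the map Y ~ A X from a rank-(r+p) random sketch.
          X, Y are n x m snapshot matrices (columns are consecutive states)."""
          rng = np.random.default_rng(seed)
          Omega = rng.standard_normal((X.shape[1], r + p))
          Q, _ = np.linalg.qr(X @ Omega)          # orthonormal basis for the sketched column space
          Xs, Ys = Q.T @ X, Q.T @ Y               # compressed snapshot matrices
          U, s, Vh = np.linalg.svd(Xs, full_matrices=False)
          Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].T
          A_tilde = Ur.T @ Ys @ Vr / sr           # reduced linear operator
          evals, W = np.linalg.eig(A_tilde)
          modes = Q @ (Ys @ Vr / sr) @ W          # DMD modes lifted back to the full space
          return evals, modes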

  3. A General Exponential Framework for Dimensionality Reduction.

    PubMed

    Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan

    2014-02-01

    As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low dimensional representations from high dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the size of neighbors; 2) the algorithm encounters the well known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted by the random walk over the feature similarity matrix, and thus is more robust. The positive definite property of matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal fisher analysis. Experiments conducted on the synthesized data, UCI, and the Georgia Tech face database show that the proposed new framework can well address the issues mentioned above.
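
    A small sketch of the property that makes the exponential variant work: the matrix exponential of a symmetric (Laplacian or similarity) matrix is symmetric positive definite, so the SSS-related singularity disappears. The heat-kernel similarity and toy data below are made up for illustration; expm is SciPy's dense matrix exponential.

      import numpy as np
      from scipy.linalg import expm

      rng = np.random.default_rng(1)
      X = rng.standard_normal((8, 3))                             # 8 points in 3-D feature space
      D2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)    # squared pairwise distances
      W = np.exp(-D2)                                             # heat-kernel similarity matrix
      L = np.diag(W.sum(axis=1)) - W                              # graph Laplacian: PSD but singular
      E = expm(L)                                                 # matrix exponential: strictly positive definite
      print(np.linalg.eigvalsh(L).min())                          # ~0 (constant vector in the kernel)
      print(np.linalg.eigvalsh(E).min())                          # >= 1, so inverses exist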

  4. High-Speed Device-Independent Quantum Random Number Generation without a Detection Loophole.

    PubMed

    Liu, Yang; Yuan, Xiao; Li, Ming-Han; Zhang, Weijun; Zhao, Qi; Zhong, Jiaqiang; Cao, Yuan; Li, Yu-Huai; Chen, Luo-Kan; Li, Hao; Peng, Tianyi; Chen, Yu-Ao; Peng, Cheng-Zhi; Shi, Sheng-Cai; Wang, Zhen; You, Lixing; Ma, Xiongfeng; Fan, Jingyun; Zhang, Qiang; Pan, Jian-Wei

    2018-01-05

    Quantum mechanics provides the means of generating genuine randomness that is impossible with deterministic classical processes. Remarkably, the unpredictability of randomness can be certified in a manner that is independent of implementation devices. Here, we present an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. In the randomness analysis, without the independent identical distribution assumption, we consider the worst-case scenario in which the adversary launches the most powerful quantum attacks. After considering statistical fluctuations and applying an 80 Gb × 45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s, with a failure probability less than 10^{-5}. This marks a critical step towards realistic applications in cryptography and fundamental physics tests.
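
    A toy illustration of the Toeplitz-matrix hashing step mentioned above: the raw bit string is multiplied by a seeded binary Toeplitz matrix over GF(2) to extract nearly uniform bits. The sizes are tiny compared with the 80 Gb x 45.6 Mb matrix in the experiment, and the raw/seed bits are synthetic.

      import numpy as np
      from scipy.linalg import toeplitz

      def toeplitz_extract(raw_bits, m, seed_bits):
          """Extract m bits from n raw bits with an m x n binary Toeplitz matrix (arithmetic mod 2)."""
          n = len(raw_bits)
          assert len(seed_bits) == m + n - 1            # a Toeplitz matrix is fixed by m+n-1 entries
          T = toeplitz(seed_bits[:m], seed_bits[m - 1:])
          return (T @ raw_bits) % 2

      rng = np.random.default_rng(7)
      raw = rng.integers(0, 2, size=256)                # stands in for imperfect raw randomness
      seed = rng.integers(0, 2, size=64 + 256 - 1)      # uniform seed defining the Toeplitz matrix
      print(toeplitz_extract(raw, 64, seed)[:16])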

  5. Random matrix approach to plasmon resonances in the random impedance network model of disordered nanocomposites

    NASA Astrophysics Data System (ADS)

    Olekhno, N. A.; Beltukov, Y. M.

    2018-05-01

    Random impedance networks are widely used as a model to describe plasmon resonances in disordered metal-dielectric and other two-component nanocomposites. In the present work, the spectral properties of resonances in random networks are studied within the framework of random matrix theory. We have shown that the appropriate ensemble of random matrices for the considered problem is the Jacobi ensemble (the MANOVA ensemble). The obtained analytical expressions for the density of states in such resonant networks show a good agreement with the results of numerical simulations in a wide range of metal filling fractions between 0 and 1.

  6. Spectral statistics of random geometric graphs

    NASA Astrophysics Data System (ADS)

    Dettmann, C. P.; Georgiou, O.; Knight, G.

    2017-04-01

    We use random matrix theory to study the spectrum of random geometric graphs, a fundamental model of spatial networks. Considering ensembles of random geometric graphs, we look at short-range correlations in the level spacings of the spectrum via the nearest-neighbour and next-nearest-neighbour spacing distribution and long-range correlations via the spectral rigidity Δ3 statistic. These correlations in the level spacings give information about localisation of eigenvectors, level of community structure and the level of randomness within the networks. We find a parameter-dependent transition between Poisson and Gaussian orthogonal ensemble statistics. That is, the spectral statistics of spatial random geometric graphs fit the universality of random matrix theory found in other models such as Erdős-Rényi, Barabási-Albert and Watts-Strogatz random graphs.
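
    A minimal sketch of one spectral diagnostic in this spirit: build a small ensemble of random geometric graphs, take the adjacency spectra, and compute consecutive level-spacing ratios, an unfolding-free proxy for the spacing statistics (the paper itself uses spacing distributions and the Δ3 rigidity). The radius, sizes and ensemble count are arbitrary.

      import numpy as np

      def rgg_adjacency(n, radius, rng):
          """Adjacency matrix of a random geometric graph on the unit square."""
          pts = rng.random((n, 2))
          d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
          A = (d < radius).astype(float)
          np.fill_diagonal(A, 0.0)
          return A

      rng = np.random.default_rng(3)
      ratios = []
      for _ in range(50):                               # small ensemble of graphs
          ev = np.sort(np.linalg.eigvalsh(rgg_adjacency(200, 0.15, rng)))
          s = np.diff(ev)
          s = s[s > 1e-12]                              # drop numerically degenerate spacings
          ratios.append(np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1]))
      print(np.concatenate(ratios).mean())              # reference values: ~0.39 Poisson, ~0.53 GOE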

  7. Near-optimal matrix recovery from random linear measurements.

    PubMed

    Romanov, Elad; Gavish, Matan

    2018-06-25

    In matrix recovery from random linear measurements, one is interested in recovering an unknown M-by-N matrix [Formula: see text] from [Formula: see text] measurements [Formula: see text], where each [Formula: see text] is an M-by-N measurement matrix with i.i.d. random entries, [Formula: see text]. We present a matrix recovery algorithm, based on approximate message passing, which iteratively applies an optimal singular-value shrinker, a nonconvex nonlinearity tailored specifically for matrix estimation. Our algorithm typically converges exponentially fast, offering a significant speedup over previously suggested matrix recovery algorithms, such as iterative solvers for nuclear norm minimization (NNM). It is well known that there is a recovery tradeoff between the information content of the object [Formula: see text] to be recovered (specifically, its matrix rank r) and the number of linear measurements n from which recovery is to be attempted. The precise tradeoff between r and n, beyond which recovery by a given algorithm becomes possible, traces the so-called phase transition curve of that algorithm in the [Formula: see text] plane. The phase transition curve of our algorithm is noticeably better than that of NNM. Interestingly, it is close to the information-theoretic lower bound for the minimal number of measurements needed for matrix recovery, making it not only state of the art in terms of convergence rate, but also near optimal in terms of the matrices it successfully recovers.
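
    Not the authors' AMP algorithm, but a hedged sketch of the ingredient it iterates: a singular-value shrinker applied to a noisy matrix. Here a crude hard threshold at the noise bulk edge stands in for the optimal shrinker; the rank, sizes and noise level are illustrative.

      import numpy as np

      def hard_threshold_svd(Y, tau):
          """Zero out singular values below tau and rebuild the matrix (a stand-in shrinker)."""
          U, s, Vh = np.linalg.svd(Y, full_matrices=False)
          return (U * np.where(s > tau, s, 0.0)) @ Vh

      rng = np.random.default_rng(0)
      M, N, r, sigma = 60, 40, 2, 0.1
      X0 = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))   # low-rank signal
      Y = X0 + sigma * rng.standard_normal((M, N))                     # noisy observation
      X_hat = hard_threshold_svd(Y, tau=sigma * (np.sqrt(M) + np.sqrt(N)))
      print(np.linalg.matrix_rank(X_hat), np.linalg.norm(X_hat - X0) / np.linalg.norm(X0))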

  8. The Shock and Vibration Bulletin. Part 3. Dynamic Analysis, Design Techniques

    DTIC Science & Technology

    1980-09-01

    OCR-garbled excerpt; recoverable fragments: the technique for dynamic analysis was pioneered by Myklestad [1] and later developed by Pestel and Leckie [2], giving the response at certain discrete frequencies, not over a random-frequency spectrum. Legible references: [1] "Fundamentals of Vibration Analysis," McGraw-Hill, New York, 1956; [2] E.C. Pestel and F.A. Leckie, "Matrix ...". Title page: The Shock and Vibration Bulletin, Part 3: Dynamic Analysis, Design Techniques, September 1980.

  9. Role of vertex corrections in the matrix formulation of the random phase approximation for the multiorbital Hubbard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altmeyer, Michaela; Guterding, Daniel; Hirschfeld, P. J.

    2016-12-21

    In the framework of a multiorbital Hubbard model description of superconductivity, a matrix formulation of the superconducting pairing interaction that has been widely used is designed to treat spin, charge, and orbital fluctuations within a random phase approximation (RPA). In terms of Feynman diagrams, this takes into account particle-hole ladder and bubble contributions as expected. It turns out, however, that this matrix formulation also generates additional terms which have the diagrammatic structure of vertex corrections. Furthermore we examine these terms and discuss the relationship between the matrix-RPA superconducting pairing interaction and the Feynman diagrams that it sums.

  10. Constructing acoustic timefronts using random matrix theory.

    PubMed

    Hegewisch, Katherine C; Tomsovic, Steven

    2013-10-01

    In a recent letter [Hegewisch and Tomsovic, Europhys. Lett. 97, 34002 (2012)], random matrix theory is introduced for long-range acoustic propagation in the ocean. The theory is expressed in terms of unitary propagation matrices that represent the scattering between acoustic modes due to sound speed fluctuations induced by the ocean's internal waves. The scattering exhibits a power-law decay as a function of the differences in mode numbers thereby generating a power-law, banded, random unitary matrix ensemble. This work gives a more complete account of that approach and extends the methods to the construction of an ensemble of acoustic timefronts. The result is a very efficient method for studying the statistical properties of timefronts at various propagation ranges that agrees well with propagation based on the parabolic equation. It helps identify which information about the ocean environment can be deduced from the timefronts and how to connect features of the data to that environmental information. It also makes direct connections to methods used in other disordered waveguide contexts where the use of random matrix theory has a multi-decade history.

  11. Random walks with long-range steps generated by functions of Laplacian matrices

    NASA Astrophysics Data System (ADS)

    Riascos, A. P.; Michelitsch, T. M.; Collet, B. A.; Nowakowski, A. F.; Nicolleau, F. C. G. A.

    2018-04-01

    In this paper, we explore different Markovian random walk strategies on networks with transition probabilities between nodes defined in terms of functions of the Laplacian matrix. We generalize random walk strategies with local information in the Laplacian matrix, which describes the connections of a network, to a dynamic determined by functions of this matrix. The resulting processes are non-local, allowing transitions of the random walker from one node to nodes beyond its nearest neighbors. We find that only two types of Laplacian functions are admissible, with distinct behaviors for long-range steps in the infinite network limit: type (i) functions generate Brownian motions, type (ii) functions Lévy flights. For this asymptotic long-range step behavior only the lowest non-vanishing order of the Laplacian function is relevant, namely first order for type (i), and fractional order for type (ii) functions. In the first part, we discuss spectral properties of the Laplacian matrix and a series of relations that are maintained by a particular type of functions that allows us to define random walks on any undirected connected network. Having described these general properties, we explore characteristics of random walk strategies that emerge from particular cases with functions defined in terms of exponentials, logarithms and powers of the Laplacian, as well as relations of these dynamics with non-local strategies like Lévy flights and fractional transport. Finally, we analyze the global capacity of these random walk strategies to explore networks such as lattices, trees and different types of random and complex networks.
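
    A small sketch of the fractional (type ii) construction described above, under the convention that the long-range transition probabilities are read off the matrix power of the Laplacian, w(i→j) = -(L^α)ij/(L^α)ii for i ≠ j. The ring graph and the exponent α below are illustrative.

      import numpy as np

      def fractional_transition_matrix(A, alpha=0.5):
          """Transition matrix of a random walk generated by the fractional Laplacian L**alpha."""
          L = np.diag(A.sum(axis=1)) - A
          lam, U = np.linalg.eigh(L)                         # L = U diag(lam) U^T
          L_alpha = (U * np.clip(lam, 0, None)**alpha) @ U.T
          P = -L_alpha / np.diag(L_alpha)[:, None]           # w_{i->j} = -(L^a)_ij / (L^a)_ii
          np.fill_diagonal(P, 0.0)
          return P

      n = 10                                                 # ring with nearest-neighbour links only
      A = np.zeros((n, n))
      for i in range(n):
          A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
      P = fractional_transition_matrix(A, alpha=0.5)
      print(np.allclose(P.sum(axis=1), 1.0), P[0, :4])       # rows sum to 1; steps reach beyond neighbours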

  12. Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei

    2018-04-01

    We study the problem of recovering an $s$-sparse signal $\mathbf{x}^{\star}\in\mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^{\star}+\mathbf{z}^{\star}+\mathbf{w}$, where $\mathbf{z}^{\star}\in\mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\mathbf{w}\in\mathbb{C}^m$ is a dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix and a bounded columnwise orthonormal matrix (e.g., partial random circulant matrix). When the UTF is bounded (i.e. $\mu(\mathbf{U})\sim 1/\sqrt{m}$), we prove that with high probability, one can recover an $s$-sparse signal exactly and stably by $l_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \mathcal{O}(s \log^2 s \log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers randomly sub-sampled orthogonal matrices (e.g., random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse on a certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.

  13. Analysis of the expected density of internal equilibria in random evolutionary multi-player multi-strategy games.

    PubMed

    Duong, Manh Hong; Han, The Anh

    2016-12-01

    In this paper, we study the distribution and behaviour of internal equilibria in a d-player n-strategy random evolutionary game where the game payoff matrix is generated from normal distributions. The study of this paper reveals and exploits interesting connections between evolutionary game theory and random polynomial theory. The main contributions of the paper are some qualitative and quantitative results on the expected density, [Formula: see text], and the expected number, E(n, d), of (stable) internal equilibria. Firstly, we show that in multi-player two-strategy games, they behave asymptotically as [Formula: see text] as d is sufficiently large. Secondly, we prove that they are monotone functions of d. We also make a conjecture for games with more than two strategies. Thirdly, we provide numerical simulations for our analytical results and to support the conjecture. As consequences of our analysis, some qualitative and quantitative results on the distribution of zeros of a random Bernstein polynomial are also obtained.

  14. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    PubMed

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  15. A numerical approximation to the elastic properties of sphere-reinforced composites

    NASA Astrophysics Data System (ADS)

    Segurado, J.; Llorca, J.

    2002-10-01

    Three-dimensional cubic unit cells containing 30 non-overlapping identical spheres randomly distributed were generated using a new, modified random sequential adsorption algorithm suitable for particle volume fractions of up to 50%. The elastic constants of the ensemble of spheres embedded in a continuous and isotropic elastic matrix were computed through the finite element analysis of the three-dimensional periodic unit cells, whose size was chosen as a compromise between the minimum size required to obtain accurate results in the statistical sense and the maximum one imposed by the computational cost. Three types of materials were studied: rigid spheres and spherical voids in an elastic matrix and a typical composite made up of glass spheres in an epoxy resin. The moduli obtained for different unit cells showed very little scatter, and the average values obtained from the analysis of four unit cells could be considered very close to the "exact" solution to the problem, in agreement with the results of Drugan and Willis (J. Mech. Phys. Solids 44 (1996) 497) referring to the size of the representative volume element for elastic composites. They were used to assess the accuracy of three classical analytical models: the Mori-Tanaka mean-field analysis, the generalized self-consistent method, and Torquato's third-order approximation.
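
    A hedged sketch of plain random sequential adsorption of equal non-overlapping spheres in a periodic unit cell; the paper uses a modified RSA algorithm to reach higher volume fractions, whereas the rejection loop below is the standard, unmodified version with made-up sizes.

      import numpy as np

      def rsa_spheres(n_spheres, radius, max_tries=100000, seed=0):
          """Place equal non-overlapping spheres in a periodic unit cube by rejection sampling."""
          rng = np.random.default_rng(seed)
          centers = []
          for _ in range(max_tries):
              c = rng.random(3)
              d = (np.array(centers) - c) if centers else np.empty((0, 3))
              d -= np.rint(d)                             # minimum-image convention (periodic cell)
              if centers and np.min(np.linalg.norm(d, axis=1)) < 2 * radius:
                  continue                                # overlap: reject and try again
              centers.append(c)
              if len(centers) == n_spheres:
                  return np.array(centers)
          raise RuntimeError("could not place all spheres")

      centers = rsa_spheres(30, radius=0.12)              # 30 spheres of this radius fill ~22% of the cell
      print(len(centers))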

  16. Localization in covariance matrices of coupled heterogenous Ornstein-Uhlenbeck processes

    NASA Astrophysics Data System (ADS)

    Barucca, Paolo

    2014-12-01

    We define a random-matrix ensemble given by the infinite-time covariance matrices of Ornstein-Uhlenbeck processes at different temperatures coupled by a Gaussian symmetric matrix. The spectral properties of this ensemble are shown to be in qualitative agreement with some stylized facts of financial markets. Through the presented model, formulas are given for the analysis of heterogeneous time series. Furthermore, evidence for a localization transition in eigenvectors related to small and large eigenvalues in cross-correlation analysis of this model is found, and a simple explanation of localization phenomena in financial time series is provided. Finally, we identify both in our model and in real financial data an inverted-bell effect in the correlation between localized components and their local temperature: high- and low-temperature components are the most localized ones.
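
    A small sketch of where such infinite-time covariance matrices come from: for a stable linear (Ornstein-Uhlenbeck) system dx = -Ax dt + B dW, the stationary covariance C solves the Lyapunov equation AC + CAᵀ = BBᵀ. The Gaussian symmetric coupling and the "temperatures" entering the noise amplitudes below are synthetic stand-ins for the ensemble studied in the paper.

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      rng = np.random.default_rng(5)
      n = 6
      J = rng.standard_normal((n, n))
      J = 0.5 * (J + J.T)                                            # Gaussian symmetric coupling
      A = J + 1.5 * np.abs(np.linalg.eigvalsh(J)).max() * np.eye(n)  # shift so that -A is stable
      temps = rng.uniform(0.5, 2.0, size=n)                          # heterogeneous temperatures
      Q = np.diag(2.0 * temps)                                       # noise intensity B B^T
      C = solve_continuous_lyapunov(A, Q)                            # stationary covariance: A C + C A^T = Q
      print(np.allclose(A @ C + C @ A.T, Q), np.linalg.eigvalsh(C).min() > 0)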

  17. Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix and Polymer Matrix Composite Structures

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan J.; Walton, Owen J.; Arnold, Steven M.

    2016-01-01

    Stochastic-based, discrete-event progressive damage simulations of ceramic-matrix composite and polymer matrix composite material structures have been enabled through the development of a unique multiscale modeling tool. This effort involves coupling three independently developed software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/ Life), and (3) the Abaqus finite element analysis (FEA) program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC. Abaqus is used at the global scale to model the overall composite structure. An Abaqus user-defined material (UMAT) interface, referred to here as "FEAMAC/CARES," was developed that enables MAC/GMC and CARES/Life to operate seamlessly with the Abaqus FEA code. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events, which incrementally progress and lead to ultimate structural failure. This report describes the FEAMAC/CARES methodology and discusses examples that illustrate the performance of the tool. A comprehensive example problem, simulating the progressive damage of laminated ceramic matrix composites under various off-axis loading conditions and including a double notched tensile specimen geometry, is described in a separate report.

  18. Random lasing in dye-doped polymer dispersed liquid crystal film

    NASA Astrophysics Data System (ADS)

    Wu, Rina; Shi, Rui-xin; Wu, Xiaojiao; Wu, Jie; Dai, Qin

    2016-09-01

    A dye-doped polymer-dispersed liquid crystal film was designed and fabricated, and random lasing action was studied. A mixture of laser dye, nematic liquid crystal, chiral dopant, and PVA was used to prepare the dye-doped polymer-dispersed liquid crystal film by means of microcapsules. Scanning electron microscopy analysis showed that most liquid crystal droplets in the polymer matrix ranged from 30 μm to 40 μm; the size of the liquid crystal droplets was small. Under frequency-doubled 532 nm Nd:YAG laser-pumped optical excitation, a plurality of discrete and sharp random laser radiation peaks could be measured in the range of 575-590 nm. The line-width of the lasing peak was 0.2 nm and the threshold of the random lasing was 9 mJ. Under heating, the emission peaks of random lasing disappeared. By detecting the emission light spot energy distribution, the mechanism of radiation was found to be random lasing. The random lasing radiation mechanism was then analyzed and discussed. Experimental results indicated that the size of the liquid crystal droplets is the decisive factor that influences the lasing mechanism. The surface anchor role can be ignored when the size of the liquid crystal droplets in the polymer matrix is small, which is beneficial to form multiple scattering. The transmission path of photons is similar to that in a ring cavity, providing feedback to obtain random lasing output. Project supported by the National Natural Science Foundation of China (Grant No. 61378042), the Colleges and Universities in Liaoning Province Outstanding Young Scholars Growth Plans, China (Grant No. LJQ2015093), and Shenyang Ligong University Laser and Optical Information of Liaoning Province Key Laboratory Open Funds, China.

  19. On the equilibrium state of a small system with random matrix coupling to its environment

    NASA Astrophysics Data System (ADS)

    Lebowitz, J. L.; Pastur, L.

    2015-07-01

    We consider a random matrix model of interaction between a small n-level system, S, and its environment, an N-level heat reservoir, R. The interaction between S and R is modeled by a tensor product of a fixed $n \times n$ matrix and an $N \times N$ Hermitian random matrix. We show that under certain ‘macroscopicity’ conditions on R, the reduced density matrix of the system, $\rho_S = \mathrm{Tr}_R\,\rho^{(\mathrm{eq})}_{S\cup R}$, is given by $\rho_S^{(c)} \sim \exp\{-\beta H_S\}$, where $H_S$ is the Hamiltonian of the isolated system. This holds for all strengths of the interaction and thus gives some justification for using $\rho_S^{(c)}$ to describe some nano-systems, like biopolymers, in equilibrium with their environment (Seifert 2012 Rep. Prog. Phys. 75 126001). Our results extend those obtained previously in (Lebowitz and Pastur 2004 J. Phys. A: Math. Gen. 37 1517-34) and (Lebowitz et al 2007 Contemporary Mathematics (Providence, RI: American Mathematical Society) pp 199-218) for a special two-level system.

  20. Analysis of two dimensional signals via curvelet transform

    NASA Astrophysics Data System (ADS)

    Lech, W.; Wójcik, W.; Kotyra, A.; Popiel, P.; Duk, M.

    2007-04-01

    This paper describes an application of the curvelet transform to the analysis of interferometric images. Compared to the two-dimensional wavelet transform, the curvelet transform has higher time-frequency resolution. The article includes numerical experiments, which were executed on a random interferometric image. In nonlinear approximation, the curvelet transform yields a coefficient matrix with a smaller number of coefficients than is guaranteed by the wavelet transform. Additionally, denoising simulations show that the curvelet transform could be a very good tool to remove noise from images.

  1. Industrial entrepreneurial network: Structural and functional analysis

    NASA Astrophysics Data System (ADS)

    Medvedeva, M. A.; Davletbaev, R. H.; Berg, D. B.; Nazarova, J. J.; Parusheva, S. S.

    2016-12-01

    The structure and functioning of two model industrial entrepreneurial networks are investigated in the present paper. One of these networks forms during the implementation of an integrated project and consists of eight agents, which interact with each other and with the external environment. The other one is obtained from the municipal economy and is based on a set of 12 real business entities. The analysis of the networks is carried out on the basis of the matrix of mutual payments aggregated over a certain time period. The matrix is created by the methods of experimental economics. Social Network Analysis (SNA) methods and instruments were used in the present research. A set of basic structural characteristics was investigated: quantitative parameters such as density, diameter, clustering coefficient, and different kinds of centrality. These were compared with random Bernoulli graphs of the corresponding size and density. The discovered differences between the structures of the random and entrepreneurial networks are explained by the peculiarities of how agents function in a production network. Separately, the closed exchange circuits (cyclically closed contours of the graph) forming an autopoietic (self-replicating) network pattern were identified. The purpose of the functional analysis was to identify the contribution of the autopoietic network pattern to its gross product. It was found that the magnitude of this contribution is more than 20%. Such a value supports the use of a complementary currency to stimulate the economic activity of network agents.

  2. Dynamical Analysis of Stock Market Instability by Cross-correlation Matrix

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2016-08-01

    We study stock market instability by using cross-correlations constructed from the return time series of 366 stocks traded on the Tokyo Stock Exchange from January 5, 1998 to December 30, 2013. To investigate the dynamical evolution of the cross-correlations, cross-correlation matrices are calculated with a rolling window of 400 days. To quantify the volatile market stages where the potential risk is high, we apply principal component analysis and measure the cumulative risk fraction (CRF), which is the system variance associated with the first few principal components. From the CRF, we detected three volatile market stages corresponding to the bankruptcy of Lehman Brothers, the 2011 Tohoku Region Pacific Coast Earthquake, and the FRB QE3 reduction observation in the study period. We further apply the random matrix theory for the risk analysis and find that the first eigenvector is more equally de-localized when the market is volatile.
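
    A small sketch of the cumulative risk fraction over a rolling window as described above: CRF is the share of total variance carried by the k largest eigenvalues of the windowed correlation matrix (the trace of a correlation matrix equals the number of stocks). The synthetic one-factor returns, window length and k are placeholders.

      import numpy as np

      def cumulative_risk_fraction(returns, window=400, k=3):
          """CRF(t) = sum of the k largest eigenvalues of the rolling correlation matrix, divided by N."""
          T, N = returns.shape
          crf = []
          for t in range(window, T + 1):
              C = np.corrcoef(returns[t - window:t], rowvar=False)
              ev = np.sort(np.linalg.eigvalsh(C))[::-1]
              crf.append(ev[:k].sum() / N)
          return np.array(crf)

      rng = np.random.default_rng(2)
      market = rng.standard_normal((1000, 1))                  # a common "market" factor
      returns = 0.5 * market + rng.standard_normal((1000, 50))
      print(cumulative_risk_fraction(returns)[:5])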

  3. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    NASA Astrophysics Data System (ADS)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.

  4. Note on coefficient matrices from stochastic Galerkin methods for random diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Tao, E-mail: tzhou@lsec.cc.ac.c; Tang Tao, E-mail: ttang@hkbu.edu.h

    2010-11-01

    In a recent work by Xiu and Shen [D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, J. Comput. Phys. 228 (2009) 266-281], the Galerkin methods are used to solve stochastic diffusion equations in random media, where some properties for the coefficient matrix of the resulting system are provided. They also posed an open question on the properties of the coefficient matrix. In this work, we will provide some results related to the open question.

  5. Nuclear matrix elements for 0νβ⁻β⁻ decays: Comparative analysis of the QRPA, shell model and IBM predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Civitarese, Osvaldo; Suhonen, Jouni

    In this work we report on general properties of the nuclear matrix elements involved in the neutrinoless double β⁻ decays (0νβ⁻β⁻ decays) of several nuclei. A summary of the values of the NMEs calculated over the years by the Jyväskylä-La Plata collaboration is presented. These NMEs, calculated in the framework of the quasiparticle random phase approximation (QRPA), are compared with those of the other available calculations, like the Shell Model (ISM) and the interacting boson model (IBA-2).

  6. Stochastic-Strength-Based Damage Simulation of Ceramic Matrix Composite Laminates

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Mital, Subodh K.; Murthy, Pappu L. N.; Bednarcyk, Brett A.; Pineda, Evan J.; Bhatt, Ramakrishna T.; Arnold, Steven M.

    2016-01-01

    The Finite Element Analysis-Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program was used to characterize and predict the progressive damage response of silicon-carbide-fiber-reinforced reaction-bonded silicon nitride matrix (SiC/RBSN) composite laminate tensile specimens. Studied were unidirectional laminates [0]₈, [10]₈, [45]₈, and [90]₈; cross-ply laminates [0₂/90₂]s; angle-ply laminates [+45₂/-45₂]s; double-edge-notched [0]₈ laminates; and central-hole laminates. Results correlated well with the experimental data. This work was performed as a validation and benchmarking exercise of the FEAMAC/CARES program. FEAMAC/CARES simulates stochastic-based discrete-event progressive damage of ceramic matrix composite and polymer matrix composite material structures. It couples three software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating-unit-cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC, and Abaqus is used to model the overall composite structure. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events that incrementally progress until ultimate structural failure.

  7. On the efficiency of a randomized mirror descent algorithm in online optimization problems

    NASA Astrophysics Data System (ADS)

    Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.

    2015-04-01

    A randomized online version of the mirror descent method is proposed. It differs from the existing versions by the randomization method. Randomization is performed at the stage of the projection of a subgradient of the function being optimized onto the unit simplex rather than at the stage of the computation of a subgradient, which is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to uniformly interpret results on weighting expert decisions and propose the most efficient method for searching for an equilibrium in a zero-sum two-person matrix game with sparse matrix.

  8. Investigation of noise in gear transmissions by the method of mathematical smoothing of experiments

    NASA Technical Reports Server (NTRS)

    Sheftel, B. T.; Lipskiy, G. K.; Ananov, P. P.; Chernenko, I. K.

    1973-01-01

    A rotatable central component smoothing method is used to analyze rotating gear noise spectra. A matrix is formulated in which the randomized rows correspond to various tests and the columns to factor values. Canonical analysis of the obtained regression equation permits the calculation of optimal speed and load at a previously assigned noise level.

  9. Method of confidence domains in the analysis of noise-induced extinction for tritrophic population system

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Ryashko, Lev; Ryazanova, Tatyana

    2017-09-01

    A problem of the analysis of noise-induced extinction in multidimensional population systems is considered. To investigate the conditions of extinction caused by random disturbances, a new approach based on the stochastic sensitivity function technique and confidence domains is suggested and applied to a tritrophic population model of interacting prey, predator, and top predator. This approach allows us to analyze constructively the probabilistic mechanisms of the transition to noise-induced extinction from both equilibrium and oscillatory regimes of coexistence. In this analysis, a method of principal directions for reducing the dimension of confidence domains is suggested. In the dispersion of random states, the principal subspace is defined by the ratio of eigenvalues of the stochastic sensitivity matrix. A detailed analysis of two scenarios of noise-induced extinction in dependence on the parameters of the considered tritrophic system is carried out.

  10. Group identification in Indonesian stock market

    NASA Astrophysics Data System (ADS)

    Nurriyadi Suparno, Ervano; Jo, Sung Kyun; Lim, Kyuseong; Purqon, Acep; Kim, Soo Yong

    2016-08-01

    The characteristic of the Indonesian stock market is interesting, especially because it represents developing countries. We investigate its dynamics and structures by using Random Matrix Theory (RMT). Here, we analyze the cross-correlation of the fluctuations of the daily closing prices of stocks from the Indonesian Stock Exchange (IDX) between January 1, 2007, and October 28, 2014. The eigenvalue distribution of the correlation matrix contains noise, which is filtered out using a random matrix as a control. The bulk of the eigenvalue distribution conforms to the random matrix, allowing random noise to be separated from the original data and leaving the deviating eigenvalues. From the deviating eigenvalues and the corresponding eigenvectors, we identify the intrinsic normal modes of the system and interpret their meaning based on qualitative and quantitative approaches. The results show that the largest eigenvector represents the market-wide effect, which has a predominantly common influence on all stocks. The other eigenvectors represent highly correlated groups within the system. Furthermore, identification of the largest components of the eigenvectors shows the sector or background of the correlated groups. Interestingly, the result shows that there are mainly two clusters within IDX, natural and non-natural resource companies. We then decompose the correlation matrix to investigate the contribution of the correlated groups to the total correlation, and we find that IDX is still driven mainly by the market-wide effect.
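
    A hedged sketch of the RMT filtering step used in studies like this one: compare the eigenvalues of the empirical correlation matrix with the Marchenko-Pastur bulk edge λ± = (1 ± √(N/T))² expected for purely random returns, and keep the eigenvalues above λ+ as the deviating (informative) modes. The one-factor synthetic data are placeholders for the IDX returns.

      import numpy as np

      rng = np.random.default_rng(4)
      T, N = 1500, 100                                      # trading days, stocks
      market = rng.standard_normal((T, 1))
      returns = 0.3 * market + rng.standard_normal((T, N))  # one common factor plus noise

      C = np.corrcoef(returns, rowvar=False)
      ev = np.sort(np.linalg.eigvalsh(C))[::-1]

      lam_plus = (1 + np.sqrt(N / T))**2                    # Marchenko-Pastur upper edge
      deviating = ev[ev > lam_plus]                         # candidate market/group modes
      print(lam_plus, deviating)                            # the market mode clearly exceeds the bulk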

  11. Tensor manifold-based extreme learning machine for 2.5-D face recognition

    NASA Astrophysics Data System (ADS)

    Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin

    2018-01-01

    We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM since it is a special instance of a symmetric positive definite (SPD) matrix that resides in non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with the existing vector-based classifiers and distance matchers. Therefore, we bridge the gap of the GRCM and extreme learning machine (ELM), a vector-based classifier for the 2.5-D face recognition problem. We put forward a tensor manifold-compliant ELM and its two variants by embedding the SPD matrix randomly into reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pair-wise distance of the embedded data, we orthogonalize the random-embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.

  12. Designing Hyperchaotic Cat Maps With Any Desired Number of Positive Lyapunov Exponents.

    PubMed

    Hua, Zhongyun; Yi, Shuang; Zhou, Yicong; Li, Chengqing; Wu, Yue

    2018-02-01

    Generating chaotic maps with the dynamics expected by users is a challenging topic. Utilizing the inherent relation between the Lyapunov exponents (LEs) of the Cat map and its associated Cat matrix, this paper proposes a simple but efficient method to construct an n-dimensional (n-D) hyperchaotic Cat map (HCM) with any desired number of positive LEs. The method first generates two basic n-D Cat matrices iteratively and then constructs the final n-D Cat matrix by performing a similarity transformation on one basic n-D Cat matrix by the other. Given any number of positive LEs, it can generate an n-D HCM with the desired hyperchaotic complexity. Two illustrative examples of n-D HCMs were constructed to show the effectiveness of the proposed method, and to verify the inherent relation between the LEs and the Cat matrix. Theoretical analysis proves that the parameter space of the generated HCM is very large. Performance evaluations show that, compared with existing methods, the proposed method can construct n-D HCMs with lower computation complexity and their outputs demonstrate strong randomness and complex ergodicity.
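
    A toy check of the relation between a Cat matrix and its Lyapunov exponents, using the classical 2-D Arnold cat map rather than the paper's n-D construction: for a linear torus map x → Ax mod 1 the LEs are the logarithms of the moduli of the eigenvalues of A, so a positive LE appears for every eigenvalue modulus above 1.

      import numpy as np

      A = np.array([[1, 1],
                    [1, 2]])                     # classical Arnold cat map matrix, det = 1
      les = np.log(np.abs(np.linalg.eigvals(A))) # Lyapunov exponents of the torus map
      print(les, les.sum())                      # one positive, one negative; sum ~ 0 (area preserving)

      x = np.array([0.1234, 0.5678])             # iterate the map to see the mixing on the torus
      for _ in range(5):
          x = (A @ x) % 1.0
      print(x)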

  13. Network analysis of a financial market based on genuine correlation and threshold method

    NASA Astrophysics Data System (ADS)

    Namaki, A.; Shirazi, A. H.; Raei, R.; Jafari, G. R.

    2011-10-01

    A financial market is an example of an adaptive complex network consisting of many interacting units. This network reflects the market's behavior. In this paper, we use the Random Matrix Theory (RMT) notion of specifying the largest eigenvector of the correlation matrix as the market mode of the stock network. For better risk management, we clean the correlation matrix by removing the market mode from the data and then construct this matrix based on the residuals. We show that this technique has an important effect on the correlation coefficient distribution by applying it to the Dow Jones Industrial Average (DJIA). To study the topological structure of a network, we apply the market-mode removal technique and the threshold method to the Tehran Stock Exchange (TSE) as an example. We show that this network follows a power-law model in certain intervals. We also show the behavior of the clustering coefficients and component numbers of this network for different thresholds. These outputs are useful for both theoretical and practical purposes, such as asset allocation and risk management.

  14. Eigenvalue density of cross-correlations in Sri Lankan financial market

    NASA Astrophysics Data System (ADS)

    Nilantha, K. G. D. R.; Ranasinghe; Malmini, P. K. C.

    2007-05-01

    We apply the universal properties of the Gaussian orthogonal ensemble (GOE) of random matrices predicted by random matrix theory (RMT), namely the spectral properties, the distribution of eigenvalues, and the eigenvalue spacings, to compare cross-correlation matrix estimators from emerging market data. The daily stock prices of the Sri Lankan All Share Price Index and Milanka Price Index from August 2004 to March 2005 were analyzed. Most eigenvalues in the spectrum of the cross-correlation matrix of stock price changes agree with the universal predictions of RMT. We find that the cross-correlation matrix satisfies the universal properties of the GOE of real symmetric random matrices. The eigenvalue distribution follows the RMT predictions in the bulk, but there are some deviations at the large eigenvalues. The nearest-neighbor and next-nearest-neighbor spacings of the eigenvalues were examined and found to follow the universality of the GOE. Applying RMT with deterministic correlations, we find that each eigenvalue arising from deterministic correlations is observed at values that are repelled from the bulk distribution.

  15. Weighted network analysis of high-frequency cross-correlation measures

    NASA Astrophysics Data System (ADS)

    Iori, Giulia; Precup, Ovidiu V.

    2007-03-01

    In this paper we implement a Fourier method to estimate high-frequency correlation matrices from small data sets. The Fourier estimates are shown to be considerably less noisy than the standard Pearson correlation measures and thus capable of detecting subtle changes in correlation matrices with just a month of data. The evolution of correlation at different time scales is analyzed from the full correlation matrix and its minimum spanning tree representation. The analysis is performed by implementing measures from the theory of random weighted networks.

  16. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    NASA Astrophysics Data System (ADS)

    Pato, Mauricio P.; Oshanin, Gleb

    2013-03-01

    We study the probability distribution function P_n^{(β)}(w) of the Schmidt-like random variable w = x_1^2/(∑_{j=1}^{n} x_j^2/n), where the x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^{(β)}(w) converges to the Marčenko-Pastur form, i.e. P_n^{(β)}(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^{(β = 2)}(w) which are valid for arbitrary n and analyse their behaviour.
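
    A quick numerical check of the claimed limit under an assumed GUE ensemble: sample eigenvalues, form w for a randomly chosen eigenvalue, and compare the histogram with the normalized Marchenko-Pastur shape √((4−w)/w)/(2π). The matrix size and number of trials are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def gue_eigenvalues(n):
    """Eigenvalues of an n x n GUE (beta = 2) random matrix."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.linalg.eigvalsh((a + a.conj().T) / 2)

n, trials = 50, 2000
w_samples = []
for _ in range(trials):
    lam = gue_eigenvalues(n)
    x = lam[rng.integers(n)]                    # a randomly chosen, unordered eigenvalue
    w_samples.append(x**2 / np.mean(lam**2))    # w = x_1^2 / ((1/n) sum_j x_j^2)

hist, edges = np.histogram(w_samples, bins=8, range=(0.0, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mp = np.sqrt((4 - centers) / centers) / (2 * np.pi)   # normalized Marchenko-Pastur shape
for c, emp, ref in zip(centers, hist, mp):
    print(f"w = {c:4.2f}   empirical {emp:5.3f}   Marchenko-Pastur {ref:5.3f}")
```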

  17. A random matrix approach to credit risk.

    PubMed

    Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.

  18. A Random Matrix Approach to Credit Risk

    PubMed Central

    Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided. PMID:24853864

  19. Symmetry Transition Preserving Chirality in QCD: A Versatile Random Matrix Model

    NASA Astrophysics Data System (ADS)

    Kanazawa, Takuya; Kieburg, Mario

    2018-06-01

    We consider a random matrix model which interpolates between the chiral Gaussian unitary ensemble and the Gaussian unitary ensemble while preserving chiral symmetry. This ensemble describes flavor symmetry breaking for staggered fermions in 3D QCD as well as in 4D QCD at high temperature or in 3D QCD at a finite isospin chemical potential. Our model is an Osborn-type two-matrix model which is equivalent to the elliptic ensemble but we consider the singular value statistics rather than the complex eigenvalue statistics. We report on exact results for the partition function and the microscopic level density of the Dirac operator in the ɛ regime of QCD. We compare these analytical results with Monte Carlo simulations of the matrix model.

  20. Levy Matrices and Financial Covariances

    NASA Astrophysics Data System (ADS)

    Burda, Zdzislaw; Jurkiewicz, Jerzy; Nowak, Maciej A.; Papp, Gabor; Zahed, Ismail

    2003-10-01

    In a given market, financial covariances capture the intra-stock correlations and can be used to address statistically the bulk nature of the market as a complex system. We provide a statistical analysis of three SP500 covariances with evidence for raw tail distributions. We study the stability of these tails against reshuffling for the SP500 data and show that the covariance with the strongest tails is robust, with a spectral density in remarkable agreement with random Lévy matrix theory. We study the inverse participation ratio for the three covariances. The strong localization observed at both ends of the spectral density is analogous to the localization exhibited in the random Lévy matrix ensemble. We discuss two competitive mechanisms responsible for the occurrence of an extensive and delocalized eigenvalue at the edge of the spectrum: (a) the Lévy character of the entries of the correlation matrix and (b) a sort of off-diagonal order induced by underlying inter-stock correlations. (b) can be destroyed by reshuffling, while (a) cannot. We show that the stocks with the largest scattering are the least susceptible to correlations, and likely candidates for the localized states. We introduce a simple model for price fluctuations which captures the behavior of the SP500 covariances. It may be of importance for asset diversification.
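
    The inverse participation ratio used in this analysis has a simple form: for a normalized eigenvector u, IPR = Σ_i u_i^4, which is of order 1/N for delocalized states and of order 1 for localized ones. The sketch below computes it on a synthetic correlation matrix; the data are an assumption standing in for the SP500 covariances studied by the authors.

```python
import numpy as np

rng = np.random.default_rng(3)
n_assets, n_obs = 100, 300

# Synthetic returns with one strong common factor (assumed stand-in for market data).
factor = rng.standard_normal(n_obs)
returns = 0.3 * factor + rng.standard_normal((n_assets, n_obs))
z = (returns - returns.mean(axis=1, keepdims=True)) / returns.std(axis=1, keepdims=True)
C = z @ z.T / n_obs

eigval, eigvec = np.linalg.eigh(C)
ipr = np.sum(eigvec**4, axis=0)        # one IPR value per (normalized) eigenvector

# Delocalized eigenvectors have IPR ~ 1/N; localized ones have much larger IPR.
print("1/N            =", 1.0 / n_assets)
print("median bulk IPR =", round(np.median(ipr), 4))
print("IPR of largest-eigenvalue (market-like) mode =", round(ipr[-1], 4))
print("IPR of smallest-eigenvalue mode              =", round(ipr[0], 4))
```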

  1. Automatic Trading Agent. RMT Based Portfolio Theory and Portfolio Selection

    NASA Astrophysics Data System (ADS)

    Snarska, M.; Krzych, J.

    2006-11-01

    Portfolio theory is a very powerful tool in modern investment theory. It is helpful in estimating the risk of an investor's portfolio, arising from lack of information, uncertainty and incomplete knowledge of reality, which forbid a perfect prediction of future price changes. Despite its many advantages, this tool is not well known and not widely used among investors on the Warsaw Stock Exchange. The main reason for abandoning this method is its high level of complexity and the immense calculations it requires. The aim of this paper is to introduce an automatic decision-making system, which allows a single investor to use the complex methods of Modern Portfolio Theory (MPT). The key tool in MPT is the analysis of an empirical covariance matrix. This matrix, obtained from historical data, is biased by such a high amount of statistical uncertainty that it can be seen as random. By bringing into practice the ideas of Random Matrix Theory (RMT), the noise is removed or significantly reduced, so that the future risk and return are better estimated and controlled. These concepts are applied to the Warsaw Stock Exchange Simulator {http://gra.onet.pl}. The result of the simulation is an 18% gain in comparison with a 10% loss of the Warsaw Stock Exchange main index WIG.
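
    One standard way to bring RMT into practice for an empirical correlation matrix, sketched below under assumed synthetic data: eigenvalues below the Marchenko-Pastur upper edge λ+ = (1 + √(N/T))² are treated as noise and replaced by their average before minimum-variance weights are computed. This generic eigenvalue-clipping recipe is an assumption, not necessarily the exact filter used by the authors.

```python
import numpy as np

rng = np.random.default_rng(4)
n_assets, n_days = 60, 240

returns = 0.01 * rng.standard_normal((n_assets, n_days))      # assumed synthetic returns
z = (returns - returns.mean(axis=1, keepdims=True)) / returns.std(axis=1, keepdims=True)
C = z @ z.T / n_days                                           # empirical correlation matrix

q = n_assets / n_days
lambda_plus = (1 + np.sqrt(q))**2                              # Marchenko-Pastur upper edge

eigval, eigvec = np.linalg.eigh(C)
noise = eigval < lambda_plus
cleaned = eigval.copy()
cleaned[noise] = eigval[noise].mean()                          # clip the noisy bulk
C_clean = eigvec @ np.diag(cleaned) @ eigvec.T

def min_variance_weights(corr):
    inv = np.linalg.inv(corr)
    ones = np.ones(corr.shape[0])
    w = inv @ ones
    return w / w.sum()

w_raw = min_variance_weights(C)
w_rmt = min_variance_weights(C_clean)
print("eigenvalues kept as signal:", int((~noise).sum()))
print("max |weight| raw vs cleaned:",
      round(np.abs(w_raw).max(), 3), round(np.abs(w_rmt).max(), 3))
```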

  2. The supersymmetric method in random matrix theory and applications to QCD

    NASA Astrophysics Data System (ADS)

    Verbaarschot, Jacobus

    2004-12-01

    The supersymmetric method is a powerful method for the nonperturbative evaluation of quenched averages in disordered systems. Among others, this method has been applied to the statistical theory of S-matrix fluctuations, the theory of universal conductance fluctuations and the microscopic spectral density of the QCD Dirac operator. We start this series of lectures with a general review of Random Matrix Theory and the statistical theory of spectra. An elementary introduction of the supersymmetric method in Random Matrix Theory is given in the second and third lecture. We will show that a Random Matrix Theory can be rewritten as an integral over a supermanifold. This integral will be worked out in detail for the Gaussian Unitary Ensemble that describes level correlations in systems with broken time-reversal invariance. We especially emphasize the role of symmetries. As a second example of the application of the supersymmetric method we discuss the calculation of the microscopic spectral density of the QCD Dirac operator. This is the eigenvalue density near zero on the scale of the average level spacing which is known to be given by chiral Random Matrix Theory. Also in this case we use symmetry considerations to rewrite the generating function for the resolvent as an integral over a supermanifold. The main topic of the second last lecture is the recent developments on the relation between the supersymmetric partition function and integrable hierarchies (in our case the Toda lattice hierarchy). We will show that this relation is an efficient way to calculate superintegrals. Several examples that were given in previous lectures will be worked out by means of this new method. Finally, we will discuss the quenched QCD Dirac spectrum at nonzero chemical potential. Because of the nonhermiticity of the Dirac operator the usual supersymmetric method has not been successful in this case. However, we will show that the supersymmetric partition function can be evaluated by means of the replica limit of the Toda lattice equation.

  3. Measurement Matrix Design for Phase Retrieval Based on Mutual Information

    NASA Astrophysics Data System (ADS)

    Shlezinger, Nir; Dabora, Ron; Eldar, Yonina C.

    2018-01-01

    In phase retrieval problems, a signal of interest (SOI) is reconstructed based on the magnitude of a linear transformation of the SOI observed with additive noise. The linear transform is typically referred to as a measurement matrix. Many works on phase retrieval assume that the measurement matrix is a random Gaussian matrix, which, in the noiseless scenario with sufficiently many measurements, guarantees invertibility of the transformation between the SOI and the observations, up to an inherent phase ambiguity. However, in many practical applications, the measurement matrix corresponds to an underlying physical setup, and is therefore deterministic, possibly with structural constraints. In this work we study the design of deterministic measurement matrices, based on maximizing the mutual information between the SOI and the observations. We characterize necessary conditions for the optimality of a measurement matrix, and analytically obtain the optimal matrix in the low signal-to-noise ratio regime. Practical methods for designing general measurement matrices and masked Fourier measurements are proposed. Simulation tests demonstrate the performance gain achieved by the proposed techniques compared to random Gaussian measurements for various phase recovery algorithms.

  4. A computational proposal for designing structured RNA pools for in vitro selection of RNAs.

    PubMed

    Kim, Namhee; Gan, Hin Hark; Schlick, Tamar

    2007-04-01

    Although in vitro selection technology is a versatile experimental tool for discovering novel synthetic RNA molecules, finding complex RNA molecules is difficult because most RNAs identified from random sequence pools are simple motifs, consistent with recent computational analysis of such sequence pools. Thus, enriching in vitro selection pools with complex structures could increase the probability of discovering novel RNAs. Here we develop an approach for engineering sequence pools that links RNA sequence space regions with corresponding structural distributions via a "mixing matrix" approach combined with a graph theory analysis. We define five classes of mixing matrices motivated by covariance mutations in RNA; these constructs define nucleotide transition rates and are applied to chosen starting sequences to yield specific nonrandom pools. We examine the coverage of sequence space as a function of the mixing matrix and starting sequence via clustering analysis. We show that, in contrast to random sequences, which are associated only with a local region of sequence space, our designed pools, including a structured pool for GTP aptamers, can target specific motifs. It follows that experimental synthesis of designed pools can benefit from using optimized starting sequences, mixing matrices, and pool fractions associated with each of our constructed pools as a guide. Automation of our approach could provide practical tools for pool design applications for in vitro selection of RNAs and related problems.

  5. Migration of lymphocytes on fibronectin-coated surfaces: temporal evolution of migratory parameters

    NASA Technical Reports Server (NTRS)

    Bergman, A. J.; Zygourakis, K.; McIntire, L. V. (Principal Investigator)

    1999-01-01

    Lymphocytes typically interact with implanted biomaterials through adsorbed exogenous proteins. To provide a more complete characterization of these interactions, analysis of lymphocyte migration on adsorbed extracellular matrix proteins must accompany the commonly performed adhesion studies. We report here a comparison of the migratory and adhesion behavior of Jurkat cells (a T lymphoblastoid cell line) on tissue culture treated and untreated polystyrene surfaces coated with various concentrations of fibronectin. The average speed of cell locomotion showed a biphasic response to substrate adhesiveness for cells migrating on untreated polystyrene and a monotonic decrease for cells migrating on tissue culture-treated polystyrene. A modified approach to the persistent random walk model was implemented to determine the time dependence of cell migration parameters. The random motility coefficient showed significant increases with time when cells migrated on tissue culture-treated polystyrene surfaces, while it remained relatively constant for experiments with untreated polystyrene plates. Finally, a cell migration computer model was developed to verify our modified persistent random walk analysis. Simulation results suggest that our experimental data were consistent with temporally increasing random motility coefficients.

  6. Recurrence of random walks with long-range steps generated by fractional Laplacian matrices on regular networks and simple cubic lattices

    NASA Astrophysics Data System (ADS)

    Michelitsch, T. M.; Collet, B. A.; Riascos, A. P.; Nowakowski, A. F.; Nicolleau, F. C. G. A.

    2017-12-01

    We analyze a Markovian random walk strategy on undirected regular networks involving power matrix functions of the type L^{α/2}, where L indicates a ‘simple’ Laplacian matrix. We refer to such walks as ‘fractional random walks’ with admissible interval 0 < α ≤ 2. We deduce probability-generating functions (network Green’s functions) for the fractional random walk. From these analytical results we establish a generalization of Polya’s recurrence theorem for fractional random walks on d-dimensional infinite lattices: the fractional random walk is transient for dimensions d > α (recurrent for d ≤ α) of the lattice. As a consequence, for 0 < α < 1 the fractional random walk is transient for all lattice dimensions d = 1, 2, … and in the range 1 ≤ α < 2 for dimensions d ≥ 2. Finally, for α = 2, Polya’s classical recurrence theorem is recovered, namely the walk is transient only for lattice dimensions d ≥ 3. The generalization of Polya’s recurrence theorem remains valid for the class of random walks with Lévy flight asymptotics for long-range steps. We also analyze the mean first passage probabilities, mean residence times, mean first passage times and global mean first passage times (Kemeny constant) for the fractional random walk. For an infinite 1D lattice (infinite ring) we obtain, for the transient regime 0 < α < 1, closed-form expressions for the fractional lattice Green’s function matrix containing the escape and ever-passage probabilities. The ever-passage probabilities (fractional lattice Green’s functions) in the transient regime fulfil Riesz potential power-law decay asymptotic behavior for nodes far from the departure node. The non-locality of the fractional random walk is generated by the non-diagonality of the fractional Laplacian matrix, with Lévy-type heavy-tailed inverse power-law decay for the probability of long-range moves. This non-local and asymptotic behavior of the fractional random walk introduces small-world properties, with the emergence of Lévy flights on large (infinite) lattices.
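
    A short sketch of the central object under illustrative parameters: the fractional Laplacian L^{α/2} of a cycle (ring) graph is computed from the spectral decomposition of the ordinary Laplacian, and the resulting long-range transition probabilities show the heavy-tailed decay that enables Lévy-flight-like steps. The ring size and α value are assumptions for illustration.

```python
import numpy as np

def ring_laplacian(n):
    """Laplacian L = D - A of a cycle graph with n nodes."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1
    return np.diag(A.sum(axis=1)) - A

def fractional_laplacian(L, alpha):
    """Matrix power L^(alpha/2) via the eigendecomposition of the symmetric L."""
    eigval, eigvec = np.linalg.eigh(L)
    eigval = np.clip(eigval, 0, None)          # guard tiny negative round-off
    return eigvec @ np.diag(eigval**(alpha / 2)) @ eigvec.T

n, alpha = 200, 1.0                            # illustrative ring size and exponent
L_frac = fractional_laplacian(ring_laplacian(n), alpha)

# Transition probabilities of the fractional walk from node 0 to node j (j != 0):
# w_0j = -(L^(alpha/2))_0j / (L^(alpha/2))_00, which decays as a power law in distance.
w = -L_frac[0] / L_frac[0, 0]
for dist in (1, 2, 5, 10, 20, 50):
    print(f"step length {dist:3d}: probability {w[dist]:.3e}")
```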

  7. Finding a Hadamard matrix by simulated annealing of spin vectors

    NASA Astrophysics Data System (ADS)

    Bayu Suksmono, Andriyan

    2017-05-01

    Reformulation of a combinatorial problem into the optimization of a statistical-mechanics system enables finding a better solution using heuristics derived from a physical process, such as simulated annealing (SA). In this paper, we present a Hadamard matrix (H-matrix) searching method based on SA on an Ising model. By equivalence, an H-matrix can be converted into a seminormalized Hadamard (SH) matrix, whose first column is a unit vector and whose remaining columns are vectors with equal numbers of -1 and +1 entries, called SH-vectors. We define SH spin vectors as representations of the SH vectors, which play a similar role to the spins in an Ising model. The topology of the lattice is generalized into a graph, whose edges represent the orthogonality relationship among the SH spin vectors. Starting from a randomly generated quasi H-matrix Q, which is a matrix similar to the SH-matrix but without imposing orthogonality, we perform the SA. The transitions of Q are conducted by random exchanges of {+, -} spin pairs within the SH spin vectors that follow the Metropolis update rule. Upon transition toward zero energy, the Q-matrix evolves following a Markov chain toward an orthogonal matrix, at which point the H-matrix is said to be found. We demonstrate the capability of the proposed method to find some low-order H-matrices, including ones that cannot trivially be constructed by the Sylvester method.
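
    A compact sketch of the annealing idea under assumed parameters: columns other than the first all-ones column are spin vectors with balanced ±1 entries, the energy counts squared non-orthogonality between column pairs, and Metropolis moves swap a +/− pair inside one column. The small order 4 and the cooling schedule are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4                                           # small illustrative order (multiple of 4)

def energy(H):
    """Sum of squared off-diagonal Gram entries; zero iff H is a Hadamard matrix."""
    G = H.T @ H
    return float(np.sum(np.triu(G, 1)**2))

# Quasi H-matrix: first column all ones, other columns balanced +/-1 spin vectors.
H = np.ones((n, n), dtype=int)
for c in range(1, n):
    col = np.array([1] * (n // 2) + [-1] * (n // 2))
    rng.shuffle(col)
    H[:, c] = col

T = 4.0
E = energy(H)
while E > 0 and T > 1e-3:
    c = rng.integers(1, n)                      # pick a spin vector (column), keep column 0 fixed
    plus = np.flatnonzero(H[:, c] == 1)
    minus = np.flatnonzero(H[:, c] == -1)
    i, j = rng.choice(plus), rng.choice(minus)
    H_new = H.copy()
    H_new[i, c], H_new[j, c] = -1, 1            # swap a +/- pair (keeps the column balanced)
    E_new = energy(H_new)
    if E_new <= E or rng.random() < np.exp((E - E_new) / T):   # Metropolis rule
        H, E = H_new, E_new
    T *= 0.999                                  # slow geometric cooling

print("final energy:", E)
print(H)
```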

  8. Evaluating Service Quality from Patients' Perceptions: Application of Importance-performance Analysis Method.

    PubMed

    Mohebifar, Rafat; Hasani, Hana; Barikani, Ameneh; Rafiei, Sima

    2016-08-01

    Providing high service quality is one of the main functions of health systems. Measuring service quality is the basic prerequisite for improving quality. The aim of this study was to evaluate the quality of service in teaching hospitals using importance-performance analysis matrix. A descriptive-analytic study was conducted through a cross-sectional method in six academic hospitals of Qazvin, Iran, in 2012. A total of 360 patients contributed to the study. The sampling technique was stratified random sampling. Required data were collected based on a standard questionnaire (SERVQUAL). Data analysis was done through SPSS version 18 statistical software and importance-performance analysis matrix. The results showed a significant gap between importance and performance in all five dimensions of service quality (p < 0.05). In reviewing the gap, "reliability" (2.36) and "assurance" (2.24) dimensions had the highest quality gap and "responsiveness" had the lowest gap (1.97). Also, according to findings, reliability and assurance were in Quadrant (I), empathy was in Quadrant (II), and tangibles and responsiveness were in Quadrant (IV) of the importance-performance matrix. The negative gap in all dimensions of quality shows that quality improvement is necessary in all dimensions. Using quality and diagnosis measurement instruments such as importance-performance analysis will help hospital managers with planning of service quality improvement and achieving long-term goals.

  9. Portfolio optimization and the random magnet problem

    NASA Astrophysics Data System (ADS)

    Rosenow, B.; Plerou, V.; Gopikrishnan, P.; Stanley, H. E.

    2002-08-01

    Diversification of an investment into independently fluctuating assets reduces its risk. In reality, movements of assets are mutually correlated and therefore knowledge of cross-correlations among asset price movements are of great importance. Our results support the possibility that the problem of finding an investment in stocks which exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this "random magnet problem" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate for C which outperforms the standard estimate in terms of constructing an investment which carries a minimum level of risk.

  10. Fast Kalman Filter for Random Walk Forecast model

    NASA Astrophysics Data System (ADS)

    Saibaba, A.; Kitanidis, P. K.

    2013-12-01

    Kalman filtering is a fundamental tool in statistical time series analysis for understanding the dynamics of large systems for which limited, noisy observations are available. However, standard implementations of the Kalman filter are prohibitive because they require O(N^2) memory and O(N^3) computational cost, where N is the dimension of the state variable. In this work, we focus our attention on the random walk forecast model, which assumes the state transition matrix to be the identity matrix. This model is frequently adopted when the data are acquired at a timescale that is faster than the dynamics of the state variables and there is considerable uncertainty as to the physics governing the state evolution. We derive an efficient representation for the a priori and a posteriori estimate covariance matrices as a weighted sum of two contributions - the process noise covariance matrix and a low-rank term which contains eigenvectors from a generalized eigenvalue problem that combines information from the noise covariance matrix and the data. We describe an efficient algorithm to update the weights of the above terms and to compute the eigenmodes of the generalized eigenvalue problem (GEP). The resulting algorithm for the Kalman filter with the random walk forecast model scales as O(N) or O(N log N), both in memory and in computational cost. This opens up the possibility of real-time adaptive experimental design and optimal control in systems of much larger dimension than was previously feasible. For a small number of measurements (~ 300 - 400), this procedure can be made numerically exact. However, as the number of measurements increases, for several choices of measurement operators and noise covariance matrices, the spectrum of the GEP decays rapidly and we are justified in retaining only the dominant eigenmodes. We discuss tradeoffs between accuracy and computational cost. The resulting algorithms are applied to an example application from ray-based travel time tomography.
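
    The random walk forecast model simply sets the state-transition matrix to the identity, so the forecast step leaves the state estimate unchanged and only inflates its covariance by the process noise. A minimal dense reference implementation is sketched below with assumed dimensions and noise levels; the paper's contribution is the low-rank representation that avoids exactly this dense cost.

```python
import numpy as np

rng = np.random.default_rng(6)
N, m, steps = 50, 10, 20            # state dimension, measurements per step, time steps

H = rng.standard_normal((m, N))     # assumed measurement operator
Q = 0.01 * np.eye(N)                # process noise covariance (random-walk increments)
R = 0.1 * np.eye(m)                 # measurement noise covariance

x_true = rng.standard_normal(N)
x_est, P = np.zeros(N), np.eye(N)

for _ in range(steps):
    # Truth evolves as a random walk; measurements are noisy linear observations.
    x_true = x_true + rng.multivariate_normal(np.zeros(N), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(m), R)

    # Forecast step: state transition is the identity, so only the covariance grows.
    P = P + Q

    # Analysis (update) step: standard Kalman gain and covariance update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (y - H @ x_est)
    P = (np.eye(N) - K @ H) @ P

print("final RMS estimation error:", round(np.sqrt(np.mean((x_est - x_true)**2)), 3))
```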

  11. Experimental and numerical analysis of the constitutive equation of rubber composites reinforced with random ceramic particle

    NASA Astrophysics Data System (ADS)

    Luo, D. M.; Xie, Y.; Su, X. R.; Zhou, Y. L.

    2018-01-01

    Based on the four classical models of Mooney-Rivlin (M-R), Yeoh, Ogden and Neo-Hookean (N-H), a strain energy constitutive equation for large deformation of rubber composites reinforced with random ceramic particles is proposed in this paper from the standpoint of continuum mechanics. By decoupling the interaction between the matrix and the random particles, the strain energy of each phase is obtained to derive an explicit constitutive equation for the rubber composites. The results of uni-axial tension, pure shear and equal bi-axial tension tests are simulated by the non-linear finite element method on the ANSYS platform. The results from the finite element method are compared with those from experiment, the material parameters are determined by fitting the results from different test conditions, and the influence of the radius of the random ceramic particles on the effective mechanical properties is analyzed.

  12. Subcritical Multiplicative Chaos for Regularized Counting Statistics from Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Lambert, Gaultier; Ostrovsky, Dmitry; Simm, Nick

    2018-05-01

    For an N × N Haar-distributed random unitary matrix U_N, we consider the random field defined by counting the number of eigenvalues of U_N in a mesoscopic arc centered at the point u on the unit circle. We prove that after regularizing at a small scale ε_N > 0, the renormalized exponential of this field converges as N → ∞ to a Gaussian multiplicative chaos measure in the whole subcritical phase. We discuss implications of this result for obtaining a lower bound on the maximum of the field. We also show that the moments of the total mass converge to a Selberg-like integral and, by taking a further limit as the size of the arc diverges, we establish part of the conjectures in Ostrovsky (Nonlinearity 29(2):426-464, 2016). By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. Our approach to the L^1-phase is based on a generalization of the construction in Berestycki (Electron Commun Probab 22(27):12, 2017) to random fields which are only asymptotically Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.

  13. Convergence to equilibrium under a random Hamiltonian.

    PubMed

    Brandão, Fernando G S L; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.

  14. Convergence to equilibrium under a random Hamiltonian

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K.; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.

  15. Intermediate quantum maps for quantum computation

    NASA Astrophysics Data System (ADS)

    Giraud, O.; Georgeot, B.

    2005-10-01

    We study quantum maps displaying spectral statistics intermediate between Poisson and Wigner-Dyson. It is shown that they can be simulated on a quantum computer with a small number of gates, and efficiently yield information about fidelity decay or spectral statistics. We study their matrix elements and entanglement production and show that they converge with time to distributions which differ from random matrix predictions. A randomized version of these maps can be implemented even more economically and yields pseudorandom operators with original properties, enabling, for example, one to produce fractal random vectors. These algorithms are within reach of present-day quantum computers.

  16. Scattering and transport statistics at the metal-insulator transition: A numerical study of the power-law banded random-matrix model

    NASA Astrophysics Data System (ADS)

    Méndez-Bermúdez, J. A.; Gopar, Victor A.; Varga, Imre

    2010-09-01

    We numerically study the statistical properties of scattering and transport in the one-dimensional Anderson model at the metal-insulator transition, described by the power-law banded random matrix (PBRM) model at criticality. Within a scattering approach to electronic transport, we concentrate on the case of a small number of single-channel attached leads. We observe a smooth crossover from localized to delocalized behavior in the average scattering matrix elements, the conductance probability distribution, the variance of the conductance, and the shot noise power by varying b (the effective bandwidth of the PBRM model) from small (b ≪ 1) to large (b > 1) values. We contrast our results with analytic random matrix theory predictions, which are expected to be recovered in the limit b → ∞. We also compare our results for the PBRM model with those for the three-dimensional (3D) Anderson model at criticality, finding that the PBRM model with b ∈ [0.2, 0.4] reproduces well the scattering and transport properties of the 3D Anderson model.

  17. Column Subset Selection, Matrix Factorization, and Eigenvalue Optimization

    DTIC Science & Technology

    2008-07-01

    Pietsch and Grothendieck, which are regarded as basic instruments in modern functional analysis [Pis86]. • The methods for computing these... Pietsch factorization and the maxcut semi- definite program [GW95]. 1.2. Overview. We focus on the algorithmic version of the Kashin–Tzafriri theorem...will see that the desired subset is exposed by factoring the random submatrix. This factorization, which was invented by Pietsch , is regarded as a basic

  18. Observability of satellite launcher navigation with INS, GPS, attitude sensors and reference trajectory

    NASA Astrophysics Data System (ADS)

    Beaudoin, Yanick; Desbiens, André; Gagnon, Eric; Landry, René

    2018-01-01

    The navigation system of a satellite launcher is of paramount importance. In order to correct the trajectory of the launcher, the position, velocity and attitude must be known with the best possible precision. In this paper, the observability of four navigation solutions is investigated. The first one is the INS/GPS couple. Then, attitude reference sensors, such as magnetometers, are added to the INS/GPS solution. The authors have already demonstrated that the reference trajectory could be used to improve the navigation performance. This approach is added to the two previously mentioned navigation systems. For each navigation solution, the observability is analyzed with different sensor error models. First, sensor biases are neglected. Then, sensor biases are modelled as random walks and as first order Markov processes. The observability is tested with the rank and condition number of the observability matrix, the time evolution of the covariance matrix and sensitivity to measurement outlier tests. The covariance matrix is exploited to evaluate the correlation between states in order to detect structural unobservability problems. Finally, when an unobservable subspace is detected, the result is verified with theoretical analysis of the navigation equations. The results show that evaluating only the observability of a model does not guarantee the ability of the aiding sensors to correct the INS estimates within the mission time. The analysis of the covariance matrix time evolution could be a powerful tool to detect this situation, however in some cases, the problem is only revealed with a sensitivity to measurement outlier test. None of the tested solutions provide GPS position bias observability. For the considered mission, the modelling of the sensor biases as random walks or Markov processes gives equivalent results. Relying on the reference trajectory can improve the precision of the roll estimates. But, in the context of a satellite launcher, the roll estimation error and gyroscope bias are only observable if attitude reference sensors are present.

  19. Discrimination of healthy and osteoarthritic articular cartilages by Fourier transform infrared imaging and partial least squares-discriminant analysis

    PubMed Central

    Zhang, Xue-Xi; Yin, Jian-Hua; Mao, Zhi-Hua; Xia, Yang

    2015-01-01

    Abstract. Fourier transform infrared imaging (FTIRI) combined with a chemometrics algorithm has strong potential to obtain complex chemical information from biological tissues. FTIRI and partial least squares-discriminant analysis (PLS-DA) were used to differentiate healthy and osteoarthritic (OA) cartilages for the first time. A PLS model was built on the calibration matrix of spectra that was randomly selected from the FTIRI spectral datasets of healthy and lesioned cartilage. Leave-one-out cross-validation was performed in the PLS model, and the fitting coefficient between actual and predicted categorical values of the calibration matrix reached 0.95. In the calibration and prediction matrices, the successful identification percentages of healthy and lesioned cartilage spectra were 100% and 90.24%, respectively. These results demonstrated that FTIRI combined with PLS-DA could provide a promising approach for the categorical identification of healthy and OA cartilage specimens. PMID:26057029

  20. Discrimination of healthy and osteoarthritic articular cartilages by Fourier transform infrared imaging and partial least squares-discriminant analysis.

    PubMed

    Zhang, Xue-Xi; Yin, Jian-Hua; Mao, Zhi-Hua; Xia, Yang

    2015-06-01

    Fourier transform infrared imaging (FTIRI) combined with a chemometrics algorithm has strong potential to obtain complex chemical information from biological tissues. FTIRI and partial least squares-discriminant analysis (PLS-DA) were used to differentiate healthy and osteoarthritic (OA) cartilages for the first time. A PLS model was built on the calibration matrix of spectra that was randomly selected from the FTIRI spectral datasets of healthy and lesioned cartilage. Leave-one-out cross-validation was performed in the PLS model, and the fitting coefficient between actual and predicted categorical values of the calibration matrix reached 0.95. In the calibration and prediction matrices, the successful identification percentages of healthy and lesioned cartilage spectra were 100% and 90.24%, respectively. These results demonstrated that FTIRI combined with PLS-DA could provide a promising approach for the categorical identification of healthy and OA cartilage specimens.
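
    The classification scheme described in these two records can be reproduced in outline with scikit-learn: a PLS regression model is fit on spectra with categorical labels (+1 healthy, −1 lesioned) and evaluated with leave-one-out cross-validation. The synthetic spectra and the number of components below are assumptions standing in for the FTIRI data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(10)
n_per_class, n_wavenumbers = 30, 200

# Assumed synthetic spectra: two classes differing slightly in a band of wavenumbers.
healthy = rng.standard_normal((n_per_class, n_wavenumbers))
lesioned = rng.standard_normal((n_per_class, n_wavenumbers))
lesioned[:, 80:120] += 0.8                     # small systematic spectral difference
X = np.vstack([healthy, lesioned])
y = np.array([1.0] * n_per_class + [-1.0] * n_per_class)

# Leave-one-out cross-validation of a PLS-DA model (PLS regression on categorical values).
correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    pls = PLSRegression(n_components=5)
    pls.fit(X[train_idx], y[train_idx])
    pred = pls.predict(X[test_idx]).ravel()[0]
    correct += int(np.sign(pred) == y[test_idx][0])

print("leave-one-out accuracy:", correct / len(y))
```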

  1. Network meta-analysis, electrical networks and graph theory.

    PubMed

    Rücker, Gerta

    2012-12-01

    Network meta-analysis is an active field of research in clinical biostatistics. It aims to combine information from all randomized comparisons among a set of treatments for a given medical condition. We show how graph-theoretical methods can be applied to network meta-analysis. A meta-analytic graph consists of vertices (treatments) and edges (randomized comparisons). We illustrate the correspondence between meta-analytic networks and electrical networks, where variance corresponds to resistance, treatment effects to voltage, and weighted treatment effects to current flows. Based thereon, we then show that graph-theoretical methods that have been routinely applied to electrical networks also work well in network meta-analysis. In more detail, the resulting consistent treatment effects induced in the edges can be estimated via the Moore-Penrose pseudoinverse of the Laplacian matrix. Moreover, the variances of the treatment effects are estimated in analogy to electrical effective resistances. It is shown that this method, being computationally simple, leads to the usual fixed effect model estimate when applied to pairwise meta-analysis and is consistent with published results when applied to network meta-analysis examples from the literature. Moreover, problems of heterogeneity and inconsistency, random effects modeling and including multi-armed trials are addressed. Copyright © 2012 John Wiley & Sons, Ltd.
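
    A small worked sketch of the electrical analogy on an assumed three-treatment network: edge weights are inverse variances, the Moore-Penrose pseudoinverse of the weighted Laplacian is computed, and the variance of each comparison equals the effective resistance between the two treatment vertices. Treatment names and variances are illustrative assumptions.

```python
import numpy as np

# Assumed toy network: treatments A, B, C with direct comparisons and their variances.
treatments = ["A", "B", "C"]
edges = [(0, 1, 0.04), (1, 2, 0.09), (0, 2, 0.16)]   # (i, j, variance of direct comparison)

n = len(treatments)
L = np.zeros((n, n))
for i, j, var in edges:
    w = 1.0 / var                    # weight = inverse variance (conductance)
    L[i, i] += w
    L[j, j] += w
    L[i, j] -= w
    L[j, i] -= w

L_plus = np.linalg.pinv(L)           # Moore-Penrose pseudoinverse of the Laplacian

def effective_resistance(a, b):
    """Variance of the network estimate for the comparison a vs b."""
    return L_plus[a, a] + L_plus[b, b] - 2 * L_plus[a, b]

for a in range(n):
    for b in range(a + 1, n):
        print(f"{treatments[a]} vs {treatments[b]}: "
              f"network variance = {effective_resistance(a, b):.4f}")
```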

  2. Geometric and integrable aspects of random matrix models

    NASA Astrophysics Data System (ADS)

    Marchal, Olivier

    2010-12-01

    This thesis deals with the geometric and integrable aspects associated with random matrix models. Its purpose is to provide various applications of random matrix theory, from algebraic geometry to partial differential equations of integrable systems. The variety of these applications shows why matrix models are important from a mathematical point of view. First, the thesis will focus on the study of the merging of two intervals of the eigenvalues density near a singular point. Specifically, we will show why this special limit gives universal equations from the Painlevé II hierarchy of integrable systems theory. Then, following the approach of (bi) orthogonal polynomials introduced by Mehta to compute partition functions, we will find Riemann-Hilbert and isomonodromic problems connected to matrix models, making the link with the theory of Jimbo, Miwa and Ueno. In particular, we will describe how the hermitian two-matrix models provide a degenerate case of Jimbo-Miwa-Ueno's theory that we will generalize in this context. Furthermore, the loop equations method, with its central notions of spectral curve and topological expansion, will lead to the symplectic invariants of algebraic geometry recently proposed by Eynard and Orantin. This last point will be generalized to the case of non-hermitian matrix models (arbitrary beta) paving the way to "quantum algebraic geometry" and to the generalization of symplectic invariants to "quantum curves". Finally, this set up will be applied to combinatorics in the context of topological string theory, with the explicit computation of an hermitian random matrix model enumerating the Gromov-Witten invariants of a toric Calabi-Yau threefold.

  3. WE-AB-207A-04: Random Undersampled Cone Beam CT: Theoretical Analysis and a Novel Reconstruction Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, C; Chen, L; Jia, X

    2016-06-15

    Purpose: Reducing x-ray exposure and speeding up data acquisition have motivated studies on projection data undersampling. For a given undersampling ratio, it is an important question what the optimal undersampling approach is. In this study, we propose a new undersampling scheme: random-ray undersampling. We mathematically analyze the properties of its projection matrix and demonstrate its advantages. We also propose a new reconstruction method that simultaneously performs CT image reconstruction and projection-domain data restoration. Methods: By representing the projection operator in the basis of singular vectors of the full projection operator, matrix representations for an undersampling case can be generated and numerical singular value decomposition can be performed. We compared the properties of the matrices among three undersampling approaches: regular-view undersampling, regular-ray undersampling, and the proposed random-ray undersampling. To accomplish CT reconstruction for random undersampling, we developed a novel method that iteratively performs CT reconstruction and missing projection data restoration via regularization approaches. Results: For a given undersampling ratio, random-ray undersampling preserved the mathematical properties of the full projection operator better than the other two approaches. This translates into the advantage of reconstructing CT images with lower errors. Different types of image artifacts were observed depending on the undersampling strategy, which were ascribed to the unique singular vectors of the sampling operators in the image domain. We tested the proposed reconstruction algorithm on a FORBILD phantom with only 30% of the projection data randomly acquired. The reconstructed image error was reduced from 9.4% with a TV method to 7.6% with the proposed method. Conclusion: The proposed random-ray undersampling is mathematically advantageous over other typical undersampling approaches. It may permit better image reconstruction at the same undersampling ratio. The novel algorithm suitable for this random-ray undersampling was able to reconstruct high-quality images.

  4. A Meta-analysis of Studies Comparing Outcomes of Diverse Acellular Dermal Matrices for Implant-Based Breast Reconstruction.

    PubMed

    Lee, Kyeong-Tae; Mun, Goo-Hyun

    2017-07-01

    The current diversity of the available acellular dermal matrix (ADM) materials for implant-based breast reconstruction raises the issue of whether there are any differences in postoperative outcomes according to the kind of ADM used. The present meta-analysis aimed to investigate whether choice of ADM products can affect outcomes. Studies that used multiple kinds of ADM products for implant-based breast reconstruction and compared outcomes between them were searched. Outcomes of interest were rates of postoperative complications: infection, seroma, mastectomy flap necrosis, reconstruction failure, and overall complications. A total of 17 studies met the selection criteria. There was only 1 randomized controlled trial, and the other 16 studies had retrospective designs. Comparison of FlexHD, DermaMatrix, and ready-to-use AlloDerm with freeze-dried AlloDerm was conducted in multiple studies and could be meta-analyzed, in which 12 studies participated. In the meta-analysis comparing FlexHD and freeze-dried AlloDerm, using the results of 6 studies, both products showed similar pooled risks for all kinds of complications. When comparing DermaMatrix and freeze-dried AlloDerm with the results from 4 studies, there were also no differences between the pooled risks of complications of the two. Similarly, the meta-analysis of 4 studies comparing ready-to-use and freeze-dried AlloDerm demonstrated that the pooled risks for the complications did not differ. This meta-analysis demonstrates that the 3 recently invented, human cadaveric skin-based products of FlexHD, DermaMatrix, and ready-to-use AlloDerm have similar risks of complications compared with those of freeze-dried AlloDerm, which has been used for longer. However, as most studies had low levels of evidence, further investigations are needed.

  5. A matrix contraction process

    NASA Astrophysics Data System (ADS)

    Wilkinson, Michael; Grant, John

    2018-03-01

    We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm ɛ.
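
    A direct simulation of the process described above, under assumed ingredients: 2 × 2 Gaussian matrices scaled so that the top Lyapunov exponent is positive, a reset to 0.1 times the identity whenever the spectral norm exceeds one, and a histogram of the norms observed along the trajectory. All numerical choices are illustrative, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_matrix():
    """i.i.d. 2 x 2 Gaussian matrices, scaled so the product grows (positive Lyapunov exponent)."""
    return 1.5 * rng.standard_normal((2, 2))

reset_value = 0.1          # after exceeding unit norm, reset to reset_value * identity (assumed)
steps = 50_000

M = reset_value * np.eye(2)
norms = np.empty(steps)
for t in range(steps):
    M = random_matrix() @ M
    eps = np.linalg.norm(M, 2)             # spectral norm of the running product
    if eps >= 1.0:
        M = reset_value * np.eye(2)        # reset rule from the abstract
        eps = reset_value
    norms[t] = eps

hist, edges = np.histogram(norms, bins=10, range=(0.0, 1.0), density=True)
for lo, hi, h in zip(edges[:-1], edges[1:], hist):
    print(f"norm in [{lo:.1f}, {hi:.1f}): density {h:.2f}")
```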

  6. Derivatives of random matrix characteristic polynomials with applications to elliptic curves

    NASA Astrophysics Data System (ADS)

    Snaith, N. C.

    2005-12-01

    The value distribution of derivatives of characteristic polynomials of matrices from SO(N) is calculated at the point 1, the symmetry point on the unit circle of the eigenvalues of these matrices. We consider subsets of matrices from SO(N) that are constrained to have at least n eigenvalues equal to 1 and investigate the first non-zero derivative of the characteristic polynomial at that point. The connection between the values of random matrix characteristic polynomials and values of L-functions in families has been well established. The motivation for this work is the expectation that through this connection with L-functions derived from families of elliptic curves, and using the Birch and Swinnerton-Dyer conjecture to relate values of the L-functions to the rank of elliptic curves, random matrix theory will be useful in probing important questions concerning these ranks.

  7. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    NASA Astrophysics Data System (ADS)

    Bloch, J.; Glesaaen, J.; Verbaarschot, J. J. M.; Zafeiropoulos, S.

    2018-03-01

    In this paper we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  8. Pinning synchronization of delayed complex dynamical networks with nonlinear coupling

    NASA Astrophysics Data System (ADS)

    Cheng, Ranran; Peng, Mingshu; Yu, Weibin

    2014-11-01

    In this paper, we find that complex networks with the Watts-Strogatz or scale-free BA random topological architecture can be synchronized more easily by pin-controlling fewer nodes than regular systems. Theoretical analysis is included by means of Lyapunov functions and linear matrix inequalities (LMI) to make all nodes reach complete synchronization. Numerical examples are also provided to illustrate the importance of our theoretical analysis, which implies that there exists a gap between the theoretical prediction and numerical results about the minimum number of pinning controlled nodes.

  9. Energetic Consistency and Coupling of the Mean and Covariance Dynamics

    NASA Technical Reports Server (NTRS)

    Cohn, Stephen E.

    2008-01-01

    The dynamical state of the ocean and atmosphere is taken to be a large dimensional random vector in a range of large-scale computational applications, including data assimilation, ensemble prediction, sensitivity analysis, and predictability studies. In each of these applications, numerical evolution of the covariance matrix of the random state plays a central role, because this matrix is used to quantify uncertainty in the state of the dynamical system. Since atmospheric and ocean dynamics are nonlinear, there is no closed evolution equation for the covariance matrix, nor for the mean state. Therefore approximate evolution equations must be used. This article studies theoretical properties of the evolution equations for the mean state and covariance matrix that arise in the second-moment closure approximation (third- and higher-order moment discard). This approximation was introduced by EPSTEIN [1969] in an early effort to introduce a stochastic element into deterministic weather forecasting, and was studied further by FLEMING [1971a,b], EPSTEIN and PITCHER [1972], and PITCHER [1977], also in the context of atmospheric predictability. It has since fallen into disuse, with a simpler one being used in current large-scale applications. The theoretical results of this article make a case that this approximation should be reconsidered for use in large-scale applications, however, because the second moment closure equations possess a property of energetic consistency that the approximate equations now in common use do not possess. A number of properties of solutions of the second-moment closure equations that result from this energetic consistency will be established.

  10. The choice of prior distribution for a covariance matrix in multivariate meta-analysis: a simulation study.

    PubMed

    Hurtado Rúa, Sandra M; Mazumdar, Madhu; Strawderman, Robert L

    2015-12-30

    Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd.

  11. Validating the performance of one-time decomposition for fMRI analysis using ICA with automatic target generation process.

    PubMed

    Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei

    2013-07-01

    Independent component analysis (ICA) has been proven to be effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly. The randomness of the initialization thus leads to different ICA decomposition results, so a single one-time decomposition for fMRI data analysis is usually not reliable. Under this circumstance, several methods for repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although utilizing RDICA has achieved satisfying results in validating the performance of ICA decomposition, RDICA costs much computing time. To mitigate the problem, in this paper we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to indicate the effectiveness of the new method and made a performance comparison of the traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition, but also saves much computing time compared to RDICA. Furthermore, the ROC (Receiver Operating Characteristic) power analysis also indicated the better signal reconstruction performance of ATGP-ICA compared to RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Error Analysis of Deep Sequencing of Phage Libraries: Peptides Censored in Sequencing

    PubMed Central

    Matochko, Wadim L.; Derda, Ratmir

    2013-01-01

    Next-generation sequencing techniques empower selection of ligands from phage-display libraries because they can detect low-abundance clones and quantify changes in the copy numbers of clones without excessive selection rounds. Identification of errors in deep sequencing data is the most critical step in this process because these techniques have error rates >1%. Mechanisms that yield errors in Illumina and other techniques have been proposed, but no reports to date describe error analysis in phage libraries. Our paper focuses on error analysis of 7-mer peptide libraries sequenced by the Illumina method. The low theoretical complexity of this phage library, as compared to the complexity of long genetic reads and genomes, allowed us to describe this library using a convenient linear vector and operator framework. We describe a phage library as an N × 1 frequency vector n = ||n_i||, where n_i is the copy number of the i-th sequence and N is the theoretical diversity, that is, the total number of all possible sequences. Any manipulation of the library is an operator acting on n. Selection, amplification, or sequencing can be described as a product of an N × N matrix and a stochastic sampling operator (S_a). The latter is a random diagonal matrix that describes the sampling of a library. In this paper, we focus on the properties of S_a and use them to define the sequencing operator (Seq). Sequencing without any bias and errors is Seq = S_a I_N, where I_N is an N × N unity matrix. Any bias in sequencing changes I_N to a non-unity matrix. We identified a diagonal censorship matrix (CEN), which describes the elimination, or statistically significant downsampling, of specific reads during the sequencing process. PMID:24416071
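
    A toy numerical version of the operator framework, with an assumed small theoretical diversity N: the frequency vector n is acted on by a stochastic sampling operator S_a (a random diagonal matrix realized here through multinomial sampling) and by a diagonal censorship matrix CEN that downsamples specific sequences. The copy numbers, depth, and censored positions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
N = 8                                        # assumed tiny theoretical diversity for illustration

# Frequency vector n: copy numbers of each of the N possible sequences in the library.
n = np.array([5000, 3000, 1500, 400, 80, 15, 4, 1], dtype=float)

def sampling_operator(n, depth):
    """Stochastic sampling S_a: draw 'depth' reads from the library, as a diagonal matrix."""
    probs = n / n.sum()
    counts = rng.multinomial(depth, probs)
    with np.errstate(divide="ignore", invalid="ignore"):
        diag = np.where(n > 0, counts / n, 0.0)
    return np.diag(diag)

S_a = sampling_operator(n, depth=2000)
reads_unbiased = S_a @ n                     # Seq = S_a I_N applied to n

# Censorship matrix CEN: assumed systematic loss of sequences 5 and 6 during sequencing.
CEN = np.eye(N)
CEN[5, 5] = 0.1
CEN[6, 6] = 0.0
reads_censored = CEN @ S_a @ n

print("true copy numbers :", n.astype(int))
print("sampled reads     :", reads_unbiased.astype(int))
print("censored reads    :", reads_censored.astype(int))
```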

  13. Time series, correlation matrices and random matrix models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinayak; Seligman, Thomas H.

    2014-01-08

    In this set of five lectures the authors have presented techniques to analyze open classical and quantum systems using correlation matrices. For diverse reasons we shall see that random matrices play an important role in describing a null hypothesis or a minimum information hypothesis for the description of a quantum system or subsystem. In the former case we consider various forms of correlation matrices of time series associated with the classical observables of some system. The fact that such series are necessarily finite inevitably introduces noise, and this finite-time influence leads to a random or stochastic component in these time series. By consequence, random correlation matrices have a random component, and corresponding ensembles are used. In the latter case we use random matrices to describe a high-temperature environment or uncontrolled perturbations, ensembles of differing chaotic systems, etc. The common theme of the lectures is thus the importance of random matrix theory in a wide range of fields in and around physics.

  14. A multi-platform evaluation of the randomized CX low-rank matrix factorization in Spark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gittens, Alex; Kottalam, Jey; Yang, Jiyan

    We investigate the performance and scalability of the randomized CX low-rank matrix factorization and demonstrate its applicability through the analysis of a 1TB mass spectrometry imaging (MSI) dataset, using Apache Spark on an Amazon EC2 cluster, a Cray XC40 system, and an experimental Cray cluster. We implemented this factorization both as a parallelized C implementation with hand-tuned optimizations and in Scala using the Apache Spark high-level cluster computing framework. We obtained consistent performance across the three platforms: using Spark we were able to process the 1TB size dataset in under 30 minutes with 960 cores on all systems, with the fastest times obtained on the experimental Cray cluster. In comparison, the C implementation was 21X faster on the Amazon EC2 system, due to careful cache optimizations, bandwidth-friendly access of matrices and vector computation using SIMD units. We report these results and their implications on the hardware and software issues arising in supporting data-centric workloads in parallel and distributed environments.
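
    The randomized CX factorization itself is compact enough to sketch in a few lines: columns of A are sampled with probabilities proportional to approximate leverage scores obtained from a randomized rank-k range finder, and X is then obtained by least squares so that A ≈ C X. The plain NumPy sketch below uses assumed sizes and synthetic data; it is not the Spark or C implementation evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

def randomized_cx(A, k, c):
    """Randomized CX: pick c columns of A by leverage-score sampling, then solve for X."""
    m, n = A.shape
    # Approximate the top-k right singular subspace with a randomized range finder.
    Y = A @ rng.standard_normal((n, k + 5))
    Q, _ = np.linalg.qr(Y)
    _, _, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    V_k = Vt[:k].T
    leverage = np.sum(V_k**2, axis=1)          # column leverage scores
    probs = leverage / leverage.sum()
    cols = rng.choice(n, size=c, replace=False, p=probs)
    C = A[:, cols]
    X, *_ = np.linalg.lstsq(C, A, rcond=None)  # A ~= C X
    return C, X, cols

# Assumed synthetic low-rank-plus-noise matrix standing in for the MSI dataset.
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 200))
A += 0.01 * rng.standard_normal(A.shape)

C, X, cols = randomized_cx(A, k=20, c=40)
rel_err = np.linalg.norm(A - C @ X) / np.linalg.norm(A)
print("first selected columns:", np.sort(cols)[:10], "...")
print("relative reconstruction error:", round(rel_err, 4))
```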

  15. Improved analysis of SP and CoSaMP under total perturbations

    NASA Astrophysics Data System (ADS)

    Li, Haifeng

    2016-12-01

    Practically, in the underdetermined model y = A x, where x is a K-sparse vector (i.e., it has no more than K nonzero entries), both y and A can be totally perturbed. From a theoretical standpoint, a more relaxed condition means that fewer measurements are needed to ensure sparse recovery. In this paper, based on the restricted isometry property (RIP), two relaxed sufficient conditions are presented for subspace pursuit (SP) and compressed sampling matching pursuit (CoSaMP) under total perturbations to guarantee that the sparse vector x is recovered. Taking a random matrix as the measurement matrix, we also discuss the advantage of our condition. Numerical experiments validate that SP and CoSaMP can provide oracle-order recovery performance.
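
    For readers unfamiliar with CoSaMP, a minimal textbook-style sketch (following the standard Needell-Tropp iteration, not the refined conditions of this paper) recovers a K-sparse vector from perturbed measurements taken with a random Gaussian matrix; every parameter value below is illustrative.

        import numpy as np

        def cosamp(A, y, K, iters=20):
            """Minimal CoSaMP sketch: recover a K-sparse x from y ~ A x."""
            m, n = A.shape
            x = np.zeros(n)
            for _ in range(iters):
                r = y - A @ x                               # current residual
                proxy = A.T @ r                             # signal proxy
                omega = np.argsort(np.abs(proxy))[-2 * K:]  # 2K largest proxy entries
                support = np.union1d(omega, np.flatnonzero(x))
                b = np.zeros(n)
                b[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                keep = np.argsort(np.abs(b))[-K:]           # prune to the K largest
                x = np.zeros(n)
                x[keep] = b[keep]
            return x

        rng = np.random.default_rng(0)
        m, n, K = 80, 256, 5
        A = rng.normal(size=(m, n)) / np.sqrt(m)            # random Gaussian measurement matrix
        x_true = np.zeros(n)
        x_true[rng.choice(n, K, replace=False)] = rng.normal(size=K)
        y = A @ x_true + 1e-3 * rng.normal(size=m)          # perturbed measurements
        x_hat = cosamp(A, y, K)
        print(np.linalg.norm(x_hat - x_true))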

  16. A stochastic Markov chain model to describe lung cancer growth and metastasis.

    PubMed

    Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila A; Nieva, Jorge; Kuhn, Peter

    2012-01-01

    A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities) interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady-state is found (up to a numerical threshold). Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices calculated using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble averaged transition probabilities as (approximately) normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung position to other sites and we highlight several key findings based on the model.
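
    The steady-state calculation that anchors the model can be illustrated with a toy row-stochastic transition matrix: the long-time distribution of the random walkers is the left eigenvector of P with eigenvalue 1. The sites and probabilities below are made up for illustration; the study fits 50 sites to autopsy data.

        import numpy as np

        # Toy row-stochastic transition matrix over four hypothetical "sites"
        P = np.array([[0.0, 0.6, 0.3, 0.1],
                      [0.2, 0.0, 0.5, 0.3],
                      [0.4, 0.3, 0.0, 0.3],
                      [0.3, 0.3, 0.4, 0.0]])

        # Steady state = left eigenvector of P with eigenvalue 1
        vals, vecs = np.linalg.eig(P.T)
        stationary = np.real(vecs[:, np.argmax(np.real(vals))])
        stationary /= stationary.sum()
        print(stationary)

        # Equivalent check: iterate the chain (long-time limit of the dynamical system)
        pi = np.full(4, 0.25)
        for _ in range(500):
            pi = pi @ P
        print(pi)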

  17. Localized motion in random matrix decomposition of complex financial systems

    NASA Astrophysics Data System (ADS)

    Jiang, Xiong-Fei; Zheng, Bo; Ren, Fei; Qiu, Tian

    2017-04-01

    With random matrix theory, we decompose the multi-dimensional time series of complex financial systems into a set of orthogonal eigenmode functions, which are classified into the market mode, sector mode, and random mode. In particular, the localized motion generated by the business sectors plays an important role in financial systems. Both the business sectors and their impact on the stock market are identified from the localized motion. We clarify that the localized motion induces different characteristics of the time correlations for the stock-market index and individual stocks. With a variation of a two-factor model, we reproduce the return-volatility correlations of the eigenmodes.
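
    A common way to separate the three kinds of modes, sketched below under simplifying assumptions, is to compare the eigenvalues of the return correlation matrix with the Marchenko-Pastur edge for purely random data; the synthetic returns here are only a placeholder for real series, and the exact classification used in the paper may differ.

        import numpy as np

        rng = np.random.default_rng(0)
        N, T = 100, 500
        returns = rng.normal(size=(T, N))              # placeholder for real return series
        returns = (returns - returns.mean(0)) / returns.std(0)
        C = returns.T @ returns / T                    # correlation matrix

        vals, vecs = np.linalg.eigh(C)                 # eigenvalues in ascending order
        lam_max = (1 + np.sqrt(N / T)) ** 2            # Marchenko-Pastur bulk edge

        market_mode = vecs[:, -1]                      # largest eigenvalue: market mode
        sector_idx = np.where(vals[:-1] > lam_max)[0]  # other eigenvalues above the edge
        random_idx = np.where(vals <= lam_max)[0]      # bulk: random mode

        print(vals[-1], lam_max, len(sector_idx), len(random_idx))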

  18. Statistical analysis for improving data precision in the SPME GC-MS analysis of blackberry (Rubus ulmifolius Schott) volatiles.

    PubMed

    D'Agostino, M F; Sanz, J; Martínez-Castro, I; Giuffrè, A M; Sicari, V; Soria, A C

    2014-07-01

    Statistical analysis has been used for the first time to evaluate the dispersion of quantitative data in the solid-phase microextraction (SPME) followed by gas chromatography-mass spectrometry (GC-MS) analysis of blackberry (Rubus ulmifolius Schott) volatiles, with the aim of improving their precision. Experimental and randomly simulated data were compared using different statistical parameters (correlation coefficients, Principal Component Analysis loadings and eigenvalues). Non-random factors were shown to contribute significantly to total dispersion; groups of volatile compounds could be associated with these factors. A significant improvement in precision was achieved when considering percent concentration ratios, rather than percent values, among those blackberry volatiles with a similar dispersion behavior. As a novelty over previous references, and to complement this main objective, the presence of non-random dispersion trends in data from simple blackberry model systems was demonstrated. Although the influence of the type of matrix on data precision was proved, a better understanding of the dispersion patterns in real samples could not be obtained from the model systems. The approach used here was validated for the first time through the multicomponent characterization of Italian blackberries from different harvest years. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Singular-value demodulation of phase-shifted holograms.

    PubMed

    Lopes, Fernando; Atlan, Michael

    2015-06-01

    We report on phase-shifted holographic interferogram demodulation by singular-value decomposition. Numerical processing of optically acquired interferograms over several modulation periods was performed in two steps: (1) rendering of off-axis complex-valued holograms by Fresnel transformation of the interferograms; and (2) eigenvalue spectrum assessment of the lag-covariance matrix of hologram pixels. Experimental results in low-light recording conditions were compared with demodulation by Fourier analysis, in the presence of random phase drifts.

  20. [Effect of overdose fluoride on expression of bone sialoprotein in developing dental tissues of rats].

    PubMed

    Xu, Zhi-ling; Wang, Qiang; Liu, Tian-lin; Guo, Li-ying; Jing, Feng-qiu; Liu, Hui

    2006-04-01

    To investigate the changes of bone sialoprotein (BSP) in the developing dental tissues of rats exposed to fluoride. Twenty rats were randomly divided into two groups: one received distilled water (control group), and the other received distilled water treated with fluoride (experimental group). When the fluorosis model was established, the changes in the expression of BSP were investigated and compared between the two groups. HE staining was used to observe cell morphology, and an immunohistochemistry assay was used to determine the expression of BSP in the rat incisor. Student's t test was used for statistical analysis. In the control group, the ameloblasts had normal morphology and were arranged in an orderly manner, and immunoreactivity of BSP was present in mature ameloblasts, dentinoblasts, cementoblasts, and the matrix. In the experimental group, however, the ameloblasts were arranged in multiple layers, the enamel matrix was disorganized, and the expression of BSP was significantly lower than that of the control group. Statistical analysis showed significant differences between the two groups (P<0.01). Fluoride can inhibit the expression of BSP in the developing dental tissues of rats, and thereby inhibit the differentiation of tooth epithelial cells and the secretion of matrix. This is a probable intracellular mechanism of dental fluorosis.

  1. Structure of a financial cross-correlation matrix under attack

    NASA Astrophysics Data System (ADS)

    Lim, Gyuchang; Kim, SooYong; Kim, Junghwan; Kim, Pyungsoo; Kang, Yoonjong; Park, Sanghoon; Park, Inho; Park, Sang-Bum; Kim, Kyungsik

    2009-09-01

    We investigate the structure of a perturbed stock market in terms of correlation matrices. For the purpose of perturbing a stock market, two distinct methods are used, namely local and global perturbation. The former involves replacing a correlation coefficient of the cross-correlation matrix with one calculated from two Gaussian-distributed time series, while the latter reconstructs the cross-correlation matrix after replacing the original return series with Gaussian-distributed time series. Concerning the local case, it is a technical study only and there is no attempt to model reality. The term ‘global’ means the overall effect of the replacement on other untouched returns. Through statistical analyses such as random matrix theory (RMT), network theory, and the correlation coefficient distributions, we show that the global structure of a stock market is vulnerable to perturbation. However, apart from the analysis of inverse participation ratios (IPRs), the vulnerability becomes much less pronounced under a small-scale perturbation. This means that these analysis tools are inappropriate for monitoring the whole stock market, due to the low sensitivity of a stock market to a small-scale perturbation. In contrast, when going down to the structure of business sectors, we confirm that correlation-based business sectors are regrouped in terms of IPRs. This result gives a clue about monitoring the effect of hidden intentions, which are revealed via portfolios taken mostly by large investors.
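
    The inverse participation ratio used in this record is a one-line quantity; a small sketch on the eigenvectors of a random correlation matrix, with purely synthetic and illustrative data:

        import numpy as np

        def inverse_participation_ratio(v):
            """IPR of a normalized eigenvector: close to 1/N for a delocalized vector,
            of order 1 for a vector localized on a few components."""
            v = v / np.linalg.norm(v)
            return np.sum(v ** 4)

        rng = np.random.default_rng(0)
        R = rng.normal(size=(250, 100))                 # 250 observations of 100 "returns"
        C = np.corrcoef(R, rowvar=False)
        _, vecs = np.linalg.eigh(C)
        iprs = np.array([inverse_participation_ratio(vecs[:, i]) for i in range(vecs.shape[1])])
        print(iprs.min(), iprs.max(), 1 / C.shape[0])   # compare with the delocalized value 1/N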

  2. Genomic relations among 31 species of Mammillaria haworth (Cactaceae) using random amplified polymorphic DNA.

    PubMed

    Mattagajasingh, Ilwola; Mukherjee, Arup Kumar; Das, Premananda

    2006-01-01

    Thirty-one species of Mammillaria were selected to study the molecular phylogeny using random amplified polymorphic DNA (RAPD) markers. The high amount of mucilage (gelling polysaccharides) present in Mammillaria was a major obstacle to isolating good-quality genomic DNA. The CTAB (cetyl trimethyl ammonium bromide) method was modified to obtain good-quality genomic DNA. Twenty-two random decamer primers resulted in 621 bands, all of which were polymorphic. The similarity matrix value varied from 0.109 to 0.622, indicating wide variability among the studied species. The dendrogram obtained from the unweighted pair group method using arithmetic averages (UPGMA) analysis revealed that some of the species did not follow the conventional classification. The present work shows the usefulness of RAPD markers for genetic characterization to establish phylogenetic relations among Mammillaria species.

  3. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y − E_Y = (X − E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach, where only the observation matrix, Y, is perturbed by random errors, and, on the other hand, the data least-squares approach, where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
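
    The simplest member of this family, the univariate total least-squares solution, has a compact SVD-based closed form that the sketch below illustrates; the data and noise levels are hypothetical, and the multivariate and structured variants discussed in the paper generalize this idea.

        import numpy as np

        # Univariate TLS for y ~= X @ xi, with random errors in both X and y:
        # take the right singular vector of [X  y] with the smallest singular value.
        rng = np.random.default_rng(0)
        n, m = 200, 3
        X_true = rng.normal(size=(n, m))
        xi_true = np.array([1.0, -2.0, 0.5])
        X = X_true + 0.01 * rng.normal(size=(n, m))        # errors in the coefficient matrix
        y = X_true @ xi_true + 0.01 * rng.normal(size=n)   # errors in the observations

        Z = np.column_stack([X, y])
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        v = Vt[-1]                    # right singular vector of the smallest singular value
        xi_tls = -v[:m] / v[m]
        print(xi_tls)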

  4. Semiclassical matrix model for quantum chaotic transport with time-reversal symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novaes, Marcel, E-mail: marcel.novaes@gmail.com

    2015-10-15

    We show that the semiclassical approach to chaotic quantum transport in the presence of time-reversal symmetry can be described by a matrix model. In other words, we construct a matrix integral whose perturbative expansion satisfies the semiclassical diagrammatic rules for the calculation of transport statistics. One of the virtues of this approach is that it leads very naturally to the semiclassical derivation of universal predictions from random matrix theory.

  5. On the design and analysis of clinical trials with correlated outcomes

    PubMed Central

    Follmann, Dean; Proschan, Michael

    2014-01-01

    SUMMARY The convention in clinical trials is to regard outcomes as independently distributed, but in some situations they may be correlated. For example, in infectious diseases, correlation may be induced if participants have contact with a common infectious source, or share hygienic tips that prevent infection. This paper discusses the design and analysis of randomized clinical trials that allow arbitrary correlation among all randomized volunteers. This perspective generalizes the traditional perspective of strata, where patients are exchangeable within strata, and independent across strata. For theoretical work, we focus on the test of no treatment effect μ1 − μ0 = 0 when the n dimensional vector of outcomes follows a Gaussian distribution with known n × n covariance matrix Σ, where the half randomized to treatment (placebo) have mean response μ1 (μ0). We show how the new test corresponds to familiar tests in simple situations for independent, exchangeable, paired, and clustered data. We also discuss the design of trials where Σ is known before or during randomization of patients and evaluate randomization schemes based on such knowledge. We provide two complex examples to illustrate the method, one for a study of 23 family clusters with cardiomyopathy, the other where the malaria attack rates vary within households and clusters of households in a Malian village. PMID:25111420

  6. Asymptotic analysis of the density of states in random matrix models associated with a slowly decaying weight

    NASA Astrophysics Data System (ADS)

    Kuijlaars, A. B. J.

    2001-08-01

    The asymptotic behavior of polynomials that are orthogonal with respect to a slowly decaying weight is very different from the asymptotic behavior of polynomials that are orthogonal with respect to a Freud-type weight. While the latter has been extensively studied, much less is known about the former. Following an earlier investigation into the zero behavior, we study here the asymptotics of the density of states in a unitary ensemble of random matrices with a slowly decaying weight. This measure is also naturally connected with the orthogonal polynomials. It is shown that, after suitable rescaling, the weak limit is the same as the weak limit of the rescaled zeros.

  7. Stability and dynamical properties of material flow systems on random networks

    NASA Astrophysics Data System (ADS)

    Anand, K.; Galla, T.

    2009-04-01

    The theory of complex networks and of disordered systems is used to study the stability and dynamical properties of a simple model of material flow networks defined on random graphs. In particular we address instabilities that are characteristic of flow networks in economic, ecological and biological systems. Based on results from random matrix theory, we work out the phase diagram of such systems defined on extensively connected random graphs, and study in detail how the choice of control policies and the network structure affects stability. We also present results for more complex topologies of the underlying graph, focussing on finitely connected Erdős-Rényi graphs, small-world networks and Barabási-Albert scale-free networks. Results indicate that variability of input-output matrix elements, and random structures of the underlying graph, tend to make the system less stable, while fast price dynamics or strong responsiveness to stock accumulation promote stability.

  8. Tensor Minkowski Functionals for random fields on the sphere

    NASA Astrophysics Data System (ADS)

    Chingangbam, Pravabati; Yogendran, K. P.; Joby, P. K.; Ganesan, Vidhya; Appleby, Stephen; Park, Changbom

    2017-12-01

    We generalize the translation invariant tensor-valued Minkowski Functionals which are defined on two-dimensional flat space to the unit sphere. We apply them to level sets of random fields. The contours enclosing boundaries of level sets of random fields give a spatial distribution of random smooth closed curves. We outline a method to compute the tensor-valued Minkowski Functionals numerically for any random field on the sphere. Then we obtain analytic expressions for the ensemble expectation values of the matrix elements for isotropic Gaussian and Rayleigh fields. The results hold on flat as well as any curved space with affine connection. We elucidate the way in which the matrix elements encode information about the Gaussian nature and statistical isotropy (or departure from isotropy) of the field. Finally, we apply the method to maps of the Galactic foreground emissions from the 2015 PLANCK data and demonstrate their high level of statistical anisotropy and departure from Gaussianity.

  9. Inflation with a graceful exit in a random landscape

    NASA Astrophysics Data System (ADS)

    Pedro, F. G.; Westphal, A.

    2017-03-01

    We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.

  10. Probabilistic Geobiological Classification Using Elemental Abundance Distributions and Lossless Image Compression in Recent and Modern Organisms

    NASA Technical Reports Server (NTRS)

    Storrie-Lombardi, Michael C.; Hoover, Richard B.

    2005-01-01

    Last year we presented techniques for the detection of fossils during robotic missions to Mars using both structural and chemical signatures [Storrie-Lombardi and Hoover, 2004]. Analyses included lossless compression of photographic images to estimate the relative complexity of a putative fossil compared to the rock matrix [Corsetti and Storrie-Lombardi, 2003] and elemental abundance distributions to provide mineralogical classification of the rock matrix [Storrie-Lombardi and Fisk, 2004]. We presented a classification strategy employing two exploratory classification algorithms (Principal Component Analysis and Hierarchical Cluster Analysis) and a non-linear stochastic neural network to produce a Bayesian estimate of classification accuracy. We now present an extension of our previous experiments exploring putative fossil forms morphologically resembling cyanobacteria discovered in the Orgueil meteorite. Elemental abundances (C6, N7, O8, Na11, Mg12, Al13, Si14, P15, S16, Cl17, K19, Ca20, Fe26) obtained for both extant cyanobacteria and fossil trilobites produce signatures readily distinguishing them from meteorite targets. When compared to elemental abundance signatures for extant cyanobacteria, Orgueil structures exhibit decreased abundances for C6, N7, Na11, Al13, P15, Cl17, K19, Ca20 and increases in Mg12, S16, Fe26. Diatoms and silicified portions of cyanobacterial sheaths exhibiting high levels of silicon and correspondingly low levels of carbon cluster more closely with terrestrial fossils than with extant cyanobacteria. Compression indices verify that variations in random and redundant textural patterns between perceived forms and the background matrix contribute significantly to morphological visual identification. The results provide a quantitative probabilistic methodology for discriminating putative fossils from the surrounding rock matrix and from extant organisms using both structural and chemical information. The techniques described appear applicable to the geobiological analysis of meteoritic samples or in situ exploration of the Mars regolith. Keywords: cyanobacteria, microfossils, Mars, elemental abundances, complexity analysis, multifactor analysis, principal component analysis, hierarchical cluster analysis, artificial neural networks, paleo-biosignatures

  11. Molecularly Imprinted Sol-Gel-Based QCM Sensor Arrays for the Detection and Recognition of Volatile Aldehydes.

    PubMed

    Liu, Chuanjun; Wyszynski, Bartosz; Yatabe, Rui; Hayashi, Kenshi; Toko, Kiyoshi

    2017-02-16

    The detection and recognition of metabolically derived aldehydes, which have been identified as important products of oxidative stress and biomarkers of cancers, are considered an effective approach for early cancer detection as well as health status monitoring. Quartz crystal microbalance (QCM) sensor arrays based on molecularly imprinted sol-gel (MISG) materials were developed in this work for highly sensitive detection and highly selective recognition of typical aldehyde vapors including hexanal (HAL), nonanal (NAL) and benzaldehyde (BAL). The MISGs were prepared by a sol-gel procedure using two matrix precursors: tetraethyl orthosilicate (TEOS) and tetrabutoxytitanium (TBOT). Aminopropyltriethoxysilane (APT), diethylaminopropyltrimethoxysilane (EAP) and trimethoxy-phenylsilane (TMP) were added as functional monomers to adjust the imprinting effect of the matrix. Hexanoic acid (HA), nonanoic acid (NA) and benzoic acid (BA) were used as pseudotemplates in view of their analogous structure to the target molecules as well as their strong hydrogen-bonding interaction with the matrix. In total, 13 types of MISGs with different components were prepared and coated on QCM electrodes by spin coating. Their sensing characteristics towards the three aldehyde vapors at different concentrations were investigated qualitatively. The results demonstrated that the response of individual sensors to each target strongly depended on the matrix precursors, functional monomers and template molecules. An optimization of the 13 MISG materials was carried out based on statistical analyses such as principal component analysis (PCA), multivariate analysis of covariance (MANCOVA) and hierarchical cluster analysis (HCA). The optimized sensor array consisting of five channels showed a high discrimination ability for the aldehyde vapors, which was confirmed by quantitative comparison with a randomly selected array. It was suggested that both the molecular imprinting (MIP) effect and the matrix effect contributed to the sensitivity and selectivity of the optimized sensor array. The developed MISGs were expected to be promising materials for the detection and recognition of volatile aldehydes contained in exhaled breath or human body odor.

  12. Molecularly Imprinted Sol-Gel-Based QCM Sensor Arrays for the Detection and Recognition of Volatile Aldehydes

    PubMed Central

    Liu, Chuanjun; Wyszynski, Bartosz; Yatabe, Rui; Hayashi, Kenshi; Toko, Kiyoshi

    2017-01-01

    The detection and recognition of metabolically derived aldehydes, which have been identified as important products of oxidative stress and biomarkers of cancers, are considered an effective approach for early cancer detection as well as health status monitoring. Quartz crystal microbalance (QCM) sensor arrays based on molecularly imprinted sol-gel (MISG) materials were developed in this work for highly sensitive detection and highly selective recognition of typical aldehyde vapors including hexanal (HAL), nonanal (NAL) and benzaldehyde (BAL). The MISGs were prepared by a sol-gel procedure using two matrix precursors: tetraethyl orthosilicate (TEOS) and tetrabutoxytitanium (TBOT). Aminopropyltriethoxysilane (APT), diethylaminopropyltrimethoxysilane (EAP) and trimethoxy-phenylsilane (TMP) were added as functional monomers to adjust the imprinting effect of the matrix. Hexanoic acid (HA), nonanoic acid (NA) and benzoic acid (BA) were used as pseudotemplates in view of their analogous structure to the target molecules as well as their strong hydrogen-bonding interaction with the matrix. In total, 13 types of MISGs with different components were prepared and coated on QCM electrodes by spin coating. Their sensing characteristics towards the three aldehyde vapors at different concentrations were investigated qualitatively. The results demonstrated that the response of individual sensors to each target strongly depended on the matrix precursors, functional monomers and template molecules. An optimization of the 13 MISG materials was carried out based on statistical analyses such as principal component analysis (PCA), multivariate analysis of covariance (MANCOVA) and hierarchical cluster analysis (HCA). The optimized sensor array consisting of five channels showed a high discrimination ability for the aldehyde vapors, which was confirmed by quantitative comparison with a randomly selected array. It was suggested that both the molecular imprinting (MIP) effect and the matrix effect contributed to the sensitivity and selectivity of the optimized sensor array. The developed MISGs were expected to be promising materials for the detection and recognition of volatile aldehydes contained in exhaled breath or human body odor. PMID:28212347

  13. Random Matrix Theory Approach to Chaotic Coherent Perfect Absorbers

    NASA Astrophysics Data System (ADS)

    Li, Huanan; Suwunnarat, Suwun; Fleischmann, Ragnar; Schanz, Holger; Kottos, Tsampikos

    2017-01-01

    We employ random matrix theory in order to investigate coherent perfect absorption (CPA) in lossy systems with complex internal dynamics. The loss strength γCPA and energy ECPA, for which a CPA occurs, are expressed in terms of the eigenmodes of the isolated cavity—thus carrying over the information about the chaotic nature of the target—and their coupling to a finite number of scattering channels. Our results are tested against numerical calculations using complex networks of resonators and chaotic graphs as CPA cavities.

  14. A low-rank matrix recovery approach for energy efficient EEG acquisition for a wireless body area network.

    PubMed

    Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab

    2014-08-25

    We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBAN) in an energy-efficient fashion. In WBANs, energy is consumed by three operations: sensing (sampling), processing and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that its reconstruction accuracy is significantly better than that of state-of-the-art techniques, and we achieve this while saving sensing, processing and transmission energy. A simple power analysis shows that our proposed methodology consumes considerably less power than previous CS based techniques.
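
    The paper derives its own completion algorithm; as a generic stand-in, the sketch below completes a randomly under-sampled low-rank matrix by alternating a truncated SVD with re-imposition of the observed entries. The sizes, rank and sampling ratio are all illustrative, and this is not the algorithm of the paper.

        import numpy as np

        def complete_low_rank(M_obs, mask, rank, iters=200):
            """Fill in the unobserved entries of a low-rank matrix by alternating
            a rank-`rank` SVD truncation with re-imposing the observed samples."""
            X = np.where(mask, M_obs, 0.0)
            for _ in range(iters):
                U, s, Vt = np.linalg.svd(X, full_matrices=False)
                X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
                X[mask] = M_obs[mask]                      # keep the random samples
            return X

        rng = np.random.default_rng(0)
        # Synthetic "multi-channel EEG" matrix: channels x time, low rank by construction
        true = rng.normal(size=(16, 4)) @ rng.normal(size=(4, 512))
        mask = rng.random(true.shape) < 0.4                # keep 40% of the entries at random
        X_hat = complete_low_rank(true, mask, rank=4)
        print(np.linalg.norm(X_hat - true) / np.linalg.norm(true))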

  15. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, including both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines cooperation gain and channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.

  16. Network trending; leadership, followership and neutrality among companies: A random matrix approach

    NASA Astrophysics Data System (ADS)

    Mobarhan, N. S. Safavi; Saeedi, A.; Roodposhti, F. Rahnamay; Jafari, G. R.

    2016-11-01

    In this article, we analyze the cross-correlation between returns of different stocks to answer the following important questions. The first is: if there exists collective behavior in a financial market, how could we detect it? And the second is: is there a particular company among the companies of a market that acts as the leader of the collective behavior, or is there no specified leadership governing the system, similar to some complex systems? We use the method of random matrix theory to answer these questions. The cross-correlation matrix of index returns of four different markets is analyzed. The participation ratio related to each matrix's eigenvectors and the eigenvalue spectrum are calculated. We introduce a shuffled matrix, created from the cross-correlation matrix by randomly displacing its elements. Comparing the participation ratios obtained from the correlation matrix of a market and from its shuffled counterpart, over the bulk region of the eigenvalue distribution, we detect a meaningful deviation between the two quantities, indicating the collective behavior of the companies forming the market. By calculating the relative deviation of participation ratios, we obtain a measure to compare markets according to their collective behavior. Answering the second question, we show there are three groups of companies: the first group, called leaders, has a higher impact on the market trend; the second group consists of followers; and the third comprises companies that do not play a considerable role in the trend. The results can be utilized in portfolio construction.

  17. A Deep Stochastic Model for Detecting Community in Complex Networks

    NASA Astrophysics Data System (ADS)

    Fu, Jingcheng; Wu, Jianliang

    2017-01-01

    Discovering community structures is an important step toward understanding the structure and dynamics of real-world networks in social science, biology and technology. In this paper, we develop a deep stochastic model based on non-negative matrix factorization to identify communities, in which there are two sets of parameters. One is the community membership matrix, whose elements in a row correspond to the probabilities that the given node belongs to each of the given number of communities in our model; the other is the community-community connection matrix, whose element in the i-th row and j-th column represents the probability of there being an edge between a randomly chosen node from the i-th community and a randomly chosen node from the j-th community. The parameters can be evaluated by an efficient updating rule, and its convergence can be guaranteed. The community-community connection matrix in our model is more precise than the community-community connection matrix in traditional non-negative matrix factorization methods. Furthermore, the method called symmetric non-negative matrix factorization is a special case of our model. Finally, based on experiments on both synthetic and real-world network data, we demonstrate that our algorithm is highly effective in detecting communities.
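
    The special case mentioned above, symmetric non-negative matrix factorization, fits in a few lines; the sketch below uses the standard damped multiplicative update on a toy two-community adjacency matrix. This is not the deep model of the paper, whose update rule is more involved, and all sizes and edge probabilities are illustrative.

        import numpy as np

        def symmetric_nmf(A, k, iters=500, eps=1e-9, seed=0):
            """Symmetric NMF sketch A ~= H @ H.T; row i of H gives the (unnormalized)
            community memberships of node i.  Uses the damped multiplicative update."""
            rng = np.random.default_rng(seed)
            H = rng.random((A.shape[0], k))
            for _ in range(iters):
                H *= 0.5 + 0.5 * (A @ H) / (H @ (H.T @ H) + eps)
            return H

        # Two planted communities joined by a few random edges
        rng = np.random.default_rng(1)
        A = np.zeros((20, 20))
        A[:10, :10] = rng.random((10, 10)) < 0.8
        A[10:, 10:] = rng.random((10, 10)) < 0.8
        A[:10, 10:] = rng.random((10, 10)) < 0.05
        A = np.triu(A, 1)
        A = A + A.T                                 # symmetric adjacency, no self-loops
        H = symmetric_nmf(A, k=2)
        print(np.argmax(H, axis=1))                 # community assignment per node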

  18. A Distributed-Memory Package for Dense Hierarchically Semi-Separable Matrix Computations Using Randomization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter

    In this paper, we present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example, finite-element methods, boundary element methods, and so on. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. Finally, this work is part of a more global effort, the STRUctured Matrices PACKage (STRUMPACK) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.
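
    The randomized sampling at the heart of HSS compression can be illustrated in isolation: probe a numerically low-rank block with a random Gaussian matrix, orthonormalize the sample, and use it to form a low-rank approximation. This is a generic randomized range-finder sketch, not STRUMPACK's adaptive variant, and all sizes are hypothetical.

        import numpy as np

        def randomized_range(A, rank, oversample=10, seed=0):
            """Probe the range of A with a random Gaussian matrix and orthonormalize,
            so that A ~= Q @ (Q.T @ A)."""
            rng = np.random.default_rng(seed)
            Omega = rng.normal(size=(A.shape[1], rank + oversample))
            Y = A @ Omega                      # sample the range of A
            Q, _ = np.linalg.qr(Y)
            return Q

        rng = np.random.default_rng(1)
        # A numerically low-rank off-diagonal block, as in HSS representations
        A = rng.normal(size=(300, 20)) @ rng.normal(size=(20, 300))
        Q = randomized_range(A, rank=20)
        B = Q.T @ A
        print(np.linalg.norm(A - Q @ B) / np.linalg.norm(A))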

  19. A Distributed-Memory Package for Dense Hierarchically Semi-Separable Matrix Computations Using Randomization

    DOE PAGES

    Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter; ...

    2016-06-30

    In this paper, we present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example, finite-element methods, boundary element methods, and so on. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. Finally, this work is part of a more global effort, the STRUctured Matrices PACKage (STRUMPACK) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.

  20. RANDOM MATRIX DIAGONALIZATION--A COMPUTER PROGRAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuchel, K.; Greibach, R.J.; Porter, C.E.

    A computer program is described which generates random matrices, diagonalizes them, and sorts the resulting eigenvalues and eigenvector components appropriately. FAP and FORTRAN listings for the IBM 7090 computer are included. (auth)
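
    The same task takes only a few lines in a modern numerical environment; for example, an illustrative NumPy snippet (unrelated to the original FAP/FORTRAN code):

        import numpy as np

        # Generate a random real symmetric (GOE-like) matrix, diagonalize it,
        # and obtain eigenvalues sorted in ascending order with their eigenvectors.
        rng = np.random.default_rng(0)
        n = 500
        G = rng.normal(size=(n, n))
        H = (G + G.T) / np.sqrt(2 * n)          # symmetrized random matrix
        eigvals, eigvecs = np.linalg.eigh(H)    # eigh returns sorted eigenvalues
        print(eigvals[:3], eigvals[-3:])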

  1. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    DOE PAGES

    Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus J. M.; ...

    2018-03-06

    In this study we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  2. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus J. M.

    In this study we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  3. Zero-inflated count models for longitudinal measurements with heterogeneous random effects.

    PubMed

    Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M

    2017-08-01

    Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention- and covariate-specific heterogeneity can produce biased estimates of covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.

  4. Diagnostic analysis of liver B ultrasonic texture features based on LM neural network

    NASA Astrophysics Data System (ADS)

    Chi, Qingyun; Hua, Hu; Liu, Menglin; Jiang, Xiuying

    2017-03-01

    In this study, liver B ultrasound images from 124 patients with benign or malignant lesions were randomly selected as the study objects. The images were first enhanced and de-noised. Gray-level co-occurrence matrices reflecting the information at each angle were constructed, 22 texture features were extracted and reduced by principal component analysis, and the result was combined with an LM neural network for diagnosis and classification. Experimental results show that this is a rapid and effective diagnostic method for liver imaging, providing a quantitative basis for the clinical diagnosis of liver diseases.
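
    A hypothetical sketch of the texture-feature step using scikit-image (function names as in recent releases); the image below is random stand-in data rather than an ultrasound ROI, and the feature set is illustrative rather than the 22 features of the study.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

        rng = np.random.default_rng(0)
        patch = rng.integers(0, 64, size=(128, 128), dtype=np.uint8)   # stand-in for a liver ROI

        # Gray-level co-occurrence matrices at four angles, then a few scalar features
        # that could feed a PCA step and a neural-network classifier.
        glcm = graycomatrix(patch, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=64, symmetric=True, normed=True)
        features = np.concatenate([graycoprops(glcm, prop).ravel()
                                   for prop in ("contrast", "homogeneity", "energy", "correlation")])
        print(features.shape)   # 4 properties x 4 angles = 16 features per patch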

  5. On efficient randomized algorithms for finding the PageRank vector

    NASA Astrophysics Data System (ADS)

    Gasnikov, A. V.; Dmitriev, D. Yu.

    2015-03-01

    Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε: ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in the antagonistic matrix game, where S_n(1) is the unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
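
    A toy Monte Carlo version of the first class of methods, with the usual PageRank restart added for illustration; the restart probability, walk counts and the tiny dense matrix are all assumptions of this sketch and not choices made in the paper.

        import numpy as np

        def mc_pagerank(P, walks_per_node=100, walk_len=50, alpha=0.85, seed=0):
            """Estimate the PageRank vector by running short random walks with
            restart probability (1 - alpha) and counting where the walkers end up."""
            rng = np.random.default_rng(seed)
            n = P.shape[0]
            counts = np.zeros(n)
            for start in range(n):
                for _ in range(walks_per_node):
                    node = start
                    for _ in range(walk_len):
                        if rng.random() > alpha:           # teleport / restart
                            node = rng.integers(n)
                        else:
                            node = rng.choice(n, p=P[node])
                    counts[node] += 1
            return counts / counts.sum()

        # Small row-stochastic P for illustration (real problems have n ~ 1e7-1e9,
        # stored sparsely; brute-force dense products are then impossible)
        P = np.array([[0.0, 1.0, 0.0],
                      [0.5, 0.0, 0.5],
                      [1.0, 0.0, 0.0]])
        print(mc_pagerank(P))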

  6. Entanglement spectrum of random-singlet quantum critical points

    NASA Astrophysics Data System (ADS)

    Fagotti, Maurizio; Calabrese, Pasquale; Moore, Joel E.

    2011-01-01

    The entanglement spectrum (i.e., the full distribution of Schmidt eigenvalues of the reduced density matrix) contains more information than the conventional entanglement entropy and has been studied recently in several many-particle systems. We compute the disorder-averaged entanglement spectrum in the form of the disorder-averaged moments Tr ρ_A^α of the reduced density matrix ρ_A for a contiguous block of many spins at the random-singlet quantum critical point in one dimension. The result compares well in the scaling limit with numerical studies on the random XX model and is also expected to describe the (interacting) random Heisenberg model. Our numerical studies on the XX case reveal that the dependence of the entanglement entropy and spectrum on the geometry of the Hilbert space partition is quite different than for conformally invariant critical points.

  7. Exploring multicollinearity using a random matrix theory approach.

    PubMed

    Feher, Kristen; Whelan, James; Müller, Samuel

    2012-01-01

    Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but rather characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with 'low' correlation may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
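
    A simulation in the spirit of this model is easy to set up: project a one-dimensional signal into p dimensions, add noise, and compare the eigenspectrum of the resulting correlation matrix with the Marchenko-Pastur bulk edge. All sizes and noise levels below are arbitrary assumptions, not parameters from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n, p, sigma = 200, 50, 0.5
        signal = rng.normal(size=n)                      # 1-D latent signal
        loadings = rng.normal(size=p)
        X = np.outer(signal, loadings) + sigma * rng.normal(size=(n, p))

        C = np.corrcoef(X, rowvar=False)                 # correlation matrix of the "genes"
        eigvals = np.linalg.eigvalsh(C)
        mp_edge = (1 + np.sqrt(p / n)) ** 2              # bulk edge for pure noise
        print(eigvals[-1], mp_edge, np.sum(eigvals > mp_edge))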

  8. Minocycline and matrix metalloproteinase inhibition in acute intracerebral hemorrhage: a pilot study.

    PubMed

    Chang, J J; Kim-Tenser, M; Emanuel, B A; Jones, G M; Chapple, K; Alikhani, A; Sanossian, N; Mack, W J; Tsivgoulis, G; Alexandrov, A V; Pourmotabbed, T

    2017-11-01

    Intracerebral hemorrhage (ICH) is a devastating cerebrovascular disorder with high morbidity and mortality. Minocycline is a matrix metalloproteinase-9 (MMP-9) inhibitor that may attenuate secondary mechanisms of injury in ICH. The feasibility and safety of minocycline in ICH patients were evaluated in a pilot, double-blinded, placebo-controlled randomized clinical trial. Patients with acute onset (<12 h from symptom onset) ICH and small initial hematoma volume (<30 ml) were randomized to high-dose (10 mg/kg) intravenous minocycline or placebo. The outcome events included adverse events, change in serial National Institutes of Health Stroke Scale score assessments, hematoma volume and MMP-9 measurements, 3-month functional outcome (modified Rankin score) and mortality. A total of 20 patients were randomized to minocycline (n = 10) or placebo (n = 10). The two groups did not differ in terms of baseline characteristics. No serious adverse events or complications were noted with minocycline infusion. The two groups did not differ in any of the clinical and radiological outcomes. Day 5 serum MMP-9 levels tended to be lower in the minocycline group (372 ± 216 ng/ml vs. 472 ± 235 ng/ml; P = 0.052). Multiple linear regression analysis showed that minocycline was associated with a 217.65 (95% confidence interval -425.21 to -10.10, P = 0.041) decrease in MMP-9 levels between days 1 and 5. High-dose intravenous minocycline can be safely administered to patients with ICH. Larger randomized clinical trials evaluating the efficacy of minocycline and MMP-9 inhibition in ICH patients are required. © 2017 EAN.

  9. Evaluation of different rotary devices on bone repair in rabbits.

    PubMed

    Ribeiro Junior, Paulo Domingos; Barleto, Christiane Vespasiano; Ribeiro, Daniel Araki; Matsumoto, Mariza Akemi

    2007-01-01

    In oral surgery, the quality of bone repair may be influenced by several factors that can increase the morbidity of the procedure. The type of equipment used for ostectomy can directly affect bone healing. The aim of this study was to evaluate bone repair of mandible bone defects prepared in rabbits using three different rotary devices. Fifteen New Zealand rabbits were randomly assigned to 3 groups (n=5) according to type of rotary device used to create bone defects: I--pneumatic low-speed rotation engine, II--pneumatic high-speed rotation engine, and III--electric low-speed rotation engine. The anatomic pieces were surgically obtained after 2, 7 and 30 days and submitted to histological and morphometric analysis. The morphometric results were expressed as the total area of bone remodeling matrix using an image analysis system. Increases in the bone remodeling matrix were noticed with time along the course of the experiment. No statistically significant differences (p>0.05) were observed among the groups at the three sacrificing time points considering the total area of bone mineralized matrix, although the histological analysis showed a slightly advanced bone repair in group III compared to the other two groups. The findings of the present study suggest that the type of rotary device used in oral and maxillofacial surgery does not interfere with the bone repair process.

  10. Communication Optimal Parallel Multiplication of Sparse Random Matrices

    DTIC Science & Technology

    2013-02-21

    Definition 2.1), and (2) the algorithm is sparsity-independent, where the computation is statically partitioned to processors independent of the sparsity...structure of the input matrices (see Definition 2.5). The second assumption applies to nearly all existing algorithms for general sparse matrix-matrix...where A and B are n × n ER(d) matrices: Definition 2.1 An ER(d) matrix is an adjacency matrix of an Erdős-Rényi graph with parameters n and d/n. That

  11. Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

    NASA Astrophysics Data System (ADS)

    Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

    2011-12-01

    Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problems undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential of mitigating the curse of dimensionality by reducing the total number of unknowns while describing complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noisy data and the covariance matrix for the PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for the PCA analysis is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.

  12. Generalized self-consistent method for predicting the effective elastic properties of composites with random hybrid structures

    NASA Astrophysics Data System (ADS)

    Pan'kov, A. A.

    1997-05-01

    The feasibility of using a generalized self-consistent method for predicting the effective elastic properties of composites with random hybrid structures has been examined. Using this method, the problem is reduced to the solution of simpler special averaged problems for composites with single inclusions and corresponding transition layers in the medium examined. The dimensions of the transition layers are defined by the correlation radii of the random structure of the composite, while the heterogeneous elastic properties of the transition layers take account of the probabilities for variation of the size and configuration of the inclusions using averaged special indicator functions. Results are given for a numerical calculation of the averaged indicator functions and an analysis of the effect of micropores in the matrix-fiber interface region on the effective elastic properties of unidirectional fiberglass-epoxy using the generalized self-consistent method, and are compared with experimental data and reported solutions.

  13. Multivariate meta-analysis with an increasing number of parameters

    PubMed Central

    Boca, Simina M.; Pfeiffer, Ruth M.; Sampson, Joshua N.

    2017-01-01

    Summary Meta-analysis can average estimates of multiple parameters, such as a treatment’s effect on multiple outcomes, across studies. Univariate meta-analysis (UVMA) considers each parameter individually, while multivariate meta-analysis (MVMA) considers the parameters jointly and accounts for the correlation between their estimates. The performance of MVMA and UVMA has been extensively compared in scenarios with two parameters. Our objective is to compare the performance of MVMA and UVMA as the number of parameters, p, increases. Specifically, we show that (i) for fixed-effect meta-analysis, the benefit from using MVMA can substantially increase as p increases; (ii) for random effects meta-analysis, the benefit from MVMA can increase as p increases, but the potential improvement is modest in the presence of high between-study variability and the actual improvement is further reduced by the need to estimate an increasingly large between study covariance matrix; and (iii) when there is little to no between study variability, the loss of efficiency due to choosing random effects MVMA over fixed-effect MVMA increases as p increases. We demonstrate these three features through theory, simulation, and a meta-analysis of risk factors for Non-Hodgkin Lymphoma. PMID:28195655

  14. Matching 4.7-Å XRD Spacing in Amelogenin Nanoribbons and Enamel Matrix

    PubMed Central

    Sanii, B.; Martinez-Avila, O.; Simpliciano, C.; Zuckermann, R.N.; Habelitz, S.

    2014-01-01

    The recent discovery of conditions that induce nanoribbon structures of amelogenin protein in vitro raises questions about their role in enamel formation. Nanoribbons of recombinant human full-length amelogenin (rH174) are about 17 nm wide and self-align into parallel bundles; thus, they could act as templates for crystallization of nanofibrous apatite comprising dental enamel. Here we analyzed the secondary structures of nanoribbon amelogenin by x-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR) and tested if the structural motif matches previous data on the organic matrix of enamel. XRD analysis showed that a peak corresponding to 4.7 Å is present in nanoribbons of amelogenin. In addition, FTIR analysis showed that amelogenin in the form of nanoribbons was comprised of β-sheets by up to 75%, while amelogenin nanospheres had predominantly random-coil structure. The observation of a 4.7-Å XRD spacing confirms the presence of β-sheets and illustrates structural parallels between the in vitro assemblies and structural motifs in developing enamel. PMID:25048248

  15. STS-1 operational flight profile. Volume 5: Descent cycle 3. Appendix D: GRTLS six degree of freedom Monte Carlo dispersion analysis

    NASA Technical Reports Server (NTRS)

    Montez, M. N.

    1980-01-01

    The results of a six degree of freedom (6-DOF) nonlinear Monte Carlo dispersion analysis for the latest glide return to landing site (GRTLS) abort trajectory for the Space Transportation System 1 Flight are presented. For this GRTLS, the number two main engine fails at 262.5 seconds ground elapsed time. Fifty randomly selected simulations, initialized at external tank separation, are analyzed. The initial covariance matrix is a 20 x 20 matrix and includes navigation errors and dispersions in position and velocity, time, accelerometer bias, and inertial platform misalignments. In all 50 samples, speedbrake, rudder, elevon, and body flap hinge moments are acceptable. Transitions to autoland begin before 9,000 feet and there are no tailscrapes. Navigation-derived dynamic pressure accuracies exceed the flight control system constraints above Mach 2.5. Three out of 50 landings exceeded the tire specification limit speed of 222 knots. Pilot manual landings are expected to reduce landing speed by landing farther downrange.
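
    The dispersion step itself can be sketched in a few lines. The snippet below is a generic illustration, not the STS-1 analysis: it draws randomly dispersed initial conditions from a specified covariance matrix (a hypothetical 4-state covariance stands in for the 20 x 20 matrix of navigation errors and platform misalignments).

```python
import numpy as np

def draw_dispersed_states(nominal_state, covariance, n_samples, seed=0):
    """Draw Monte Carlo initial-condition dispersions about a nominal state.

    A Cholesky factor of the covariance maps independent standard normal
    samples to correlated dispersions, which are added to the nominal state.
    """
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(covariance)
    z = rng.standard_normal((n_samples, len(nominal_state)))
    return nominal_state + z @ L.T

# Hypothetical 4-state example (e.g., two position and two velocity errors).
nominal = np.zeros(4)
cov = np.diag([100.0, 100.0, 0.25, 0.25])   # illustrative variances only
samples = draw_dispersed_states(nominal, cov, n_samples=50)
print(samples.mean(axis=0), samples.std(axis=0))
```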

  16. Tensor Decompositions for Learning Latent Variable Models

    DTIC Science & Technology

    2012-12-08

    Excerpt: …and eigenvectors of tensors is generally significantly more complicated than their matrix counterpart (both algebraically [Qi05, CS11, Lim05] and …). The reduction: first, let W ∈ R^(d×k) be a linear transformation such that M2(W, W) = W^T M2 W = I, where I is the k × k identity matrix (i.e., W whitens M2) … approximate the whitening matrix W ∈ R^(d×k) from the second-moment matrix M2 ∈ R^(d×d). To do this, one first multiplies M2 by a random matrix R ∈ R^(d×k′) for some k′ ≥ k.
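
    The whitening step quoted in the excerpt lends itself to a short sketch. The code below is an illustrative randomized whitening routine, assuming a symmetric positive semi-definite second-moment matrix M2 of rank at least k; the function and variable names are ours, not the report's.

```python
import numpy as np

def randomized_whitening(M2, k, oversample=5, seed=0):
    """Approximate W with W.T @ M2 @ W = I_k using a random range sketch."""
    rng = np.random.default_rng(seed)
    d = M2.shape[0]
    R = rng.standard_normal((d, k + oversample))   # random test matrix
    Q, _ = np.linalg.qr(M2 @ R)                    # orthonormal range basis
    B = Q.T @ M2 @ Q                               # small projected matrix
    evals, evecs = np.linalg.eigh(B)
    top = np.argsort(evals)[::-1][:k]              # keep the k largest modes
    W = Q @ evecs[:, top] / np.sqrt(evals[top])
    return W

# Check on a random rank-6 second-moment matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 6))
M2 = A @ A.T / 100.0
W = randomized_whitening(M2, k=6)
print(np.allclose(W.T @ M2 @ W, np.eye(6), atol=1e-8))
```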

  17. Structure of local interactions in complex financial dynamics

    PubMed Central

    Jiang, X. F.; Chen, T. T.; Zheng, B.

    2014-01-01

    With the network methods and random matrix theory, we investigate the interaction structure of communities in financial markets. In particular, based on the random matrix decomposition, we clarify that the local interactions between the business sectors (subsectors) are mainly contained in the sector mode. In the sector mode, the average correlation inside the sectors is positive, while that between the sectors is negative. Further, we explore the time evolution of the interaction structure of the business sectors, and observe that the local interaction structure changes dramatically during a financial bubble or crisis. PMID:24936906

  18. Random matrix theory filters and currency portfolio optimisation

    NASA Astrophysics Data System (ADS)

    Daly, J.; Crane, M.; Ruskin, H. J.

    2010-04-01

    Random matrix theory (RMT) filters have recently been shown to improve the optimisation of financial portfolios. This paper studies the effect of three RMT filters on realised portfolio risk, using bootstrap analysis and out-of-sample testing. We considered the case of a foreign exchange and commodity portfolio, weighted towards foreign exchange, and consisting of 39 assets. This was intended to test the limits of RMT filtering, which is more obviously applicable to portfolios with larger numbers of assets. We considered both equally and exponentially weighted covariance matrices, and observed that, despite the small number of assets involved, RMT filters reduced risk in a way that was consistent with a much larger S&P 500 portfolio. The exponential weightings indicated a decay factor that was consistent with the value suggested by Riskmetrics, in contrast to previous results involving stocks. This decay factor, along with the low number of past moves preferred in the filtered, equally weighted case, displayed a trend towards models which were reactive to recent market changes. On testing portfolios with fewer assets, RMT filtering provided less or no overall risk reduction. In particular, no long-term out-of-sample risk reduction was observed for a portfolio consisting of 15 major currencies and commodities.
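
    For orientation, the following sketch implements one standard RMT filter, eigenvalue clipping against the Marchenko-Pastur upper edge, applied to an equally weighted correlation matrix. It is a generic illustration and not necessarily one of the three filters compared in the paper.

```python
import numpy as np

def rmt_clip_filter(returns):
    """Filter a correlation matrix by clipping eigenvalues below the
    Marchenko-Pastur upper edge, a standard RMT cleaning recipe."""
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    lam_max = (1.0 + np.sqrt(N / T)) ** 2         # MP upper edge for q = N/T
    evals, evecs = np.linalg.eigh(corr)
    noise = evals < lam_max
    if noise.any():
        evals[noise] = evals[noise].mean()        # flatten the noise band
    cleaned = evecs @ np.diag(evals) @ evecs.T
    d = np.sqrt(np.diag(cleaned))
    cleaned = cleaned / np.outer(d, d)            # restore unit diagonal
    return cleaned

# Hypothetical example: 39 assets, 500 daily returns.
rng = np.random.default_rng(2)
returns = rng.standard_normal((500, 39))
print(np.linalg.eigvalsh(rmt_clip_filter(returns))[-3:])
```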

  19. Copolymers For Capillary Gel Electrophoresis

    DOEpatents

    Liu, Changsheng; Li, Qingbo

    2005-08-09

    This invention relates to an electrophoresis separation medium having a gel matrix of at least one random, linear copolymer comprising a primary comonomer and at least one secondary comonomer, wherein the comonomers are randomly distributed along the copolymer chain. The primary comonomer is an acrylamide or an acrylamide derivative that provides the primary physical, chemical, and sieving properties of the gel matrix. The at least one secondary comonomer imparts an inherent physical, chemical, or sieving property to the copolymer chain. The primary and secondary comonomers are present in a ratio sufficient to induce desired properties that optimize electrophoresis performance. The invention also relates to a method of separating a mixture of biological molecules using this gel matrix, a method of preparing the novel electrophoresis separation medium, and a capillary tube filled with the electrophoresis separation medium.

  20. Simple Emergent Power Spectra from Complex Inflationary Physics

    NASA Astrophysics Data System (ADS)

    Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David

    2016-09-01

    We construct ensembles of random scalar potentials for Nf-interacting scalar fields using nonequilibrium random matrix theory, and use these to study the generation of observables during small-field inflation. For Nf=O (few ), these heavily featured scalar potentials give rise to power spectra that are highly nonlinear, at odds with observations. For Nf≫1 , the superhorizon evolution of the perturbations is generically substantial, yet the power spectra simplify considerably and become more predictive, with most realizations being well approximated by a linear power spectrum. This provides proof of principle that complex inflationary physics can give rise to simple emergent power spectra. We explain how these results can be understood in terms of large Nf universality of random matrix theory.

  1. Simple Emergent Power Spectra from Complex Inflationary Physics.

    PubMed

    Dias, Mafalda; Frazer, Jonathan; Marsh, M C David

    2016-09-30

    We construct ensembles of random scalar potentials for N_{f}-interacting scalar fields using nonequilibrium random matrix theory, and use these to study the generation of observables during small-field inflation. For N_{f}=O(few), these heavily featured scalar potentials give rise to power spectra that are highly nonlinear, at odds with observations. For N_{f}≫1, the superhorizon evolution of the perturbations is generically substantial, yet the power spectra simplify considerably and become more predictive, with most realizations being well approximated by a linear power spectrum. This provides proof of principle that complex inflationary physics can give rise to simple emergent power spectra. We explain how these results can be understood in terms of large N_{f} universality of random matrix theory.

  2. Parametric number covariance in quantum chaotic spectra.

    PubMed

    Vinayak; Kumar, Sandeep; Pandey, Akhilesh

    2016-03-01

    We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.

  3. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR, P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.

  4. Partial Removal of Nail Matrix in the Treatment of Ingrown Nails: Prospective Randomized Control Study Between Curettage and Electrocauterization.

    PubMed

    Kim, Maru; Song, In-Guk; Kim, Hyung Jin

    2015-06-01

    The aim of this study was to compare the results of electrocauterization and curettage, both of which can be performed with basic instruments. Patients with ingrown nail were randomized to 2 groups. In the first group, the nail matrix was removed by curettage, and in the second group, the nail matrix was removed by electrocautery. A total of 61 patients were enrolled; 32 patients were operated on by curettage, and 29 patients were operated on by electrocautery. Wound infections, as an early complication, were found in 15.6% (5/32) of the curettage group and 10.3% (3/29) of the electrocautery group (P = .710). Nonrecurrence was observed in 93.8% (30/32) and 86.2% (25/29) of the curettage and electrocautery groups, respectively (lower limit of 1-sided 90% confidence interval = -2.3% > -15% [noninferiority margin]). For removal of the nail matrix, curettage is as effective as electrocauterization. Further study is required to determine the differences between the procedures. © The Author(s) 2014.

  5. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    Existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize, or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored, or memorized. The input image is divided into 4 blocks to compress and encrypt, and the pixels of two adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling the original row vectors of the circulant matrices with a logistic map. The random matrices used in random pixel exchanging are bound with the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm and its acceptable compression performance.
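
    The measurement-matrix construction described above can be sketched as follows: a logistic-map sequence, seeded by the key, fills the first row of a circulant matrix, and a subset of its rows is used as the compressive measurement matrix. Parameter values and normalizations here are illustrative assumptions, not those of the paper.

```python
import numpy as np

def logistic_sequence(x0, n, mu=3.99, burn_in=200):
    """Iterate the logistic map x <- mu * x * (1 - x) and return n values."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = mu * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        out[i] = x
    return out

def keyed_circulant_measurement_matrix(key, n_cols, n_rows):
    """Build an n_rows x n_cols partial circulant matrix keyed by `key`."""
    row = 2.0 * logistic_sequence(key, n_cols) - 1.0   # map to [-1, 1]
    C = np.array([np.roll(row, shift) for shift in range(n_cols)])
    return C[:n_rows] / np.sqrt(n_rows)

Phi = keyed_circulant_measurement_matrix(key=0.37, n_cols=256, n_rows=64)
x = np.zeros(256); x[[10, 50, 200]] = [1.0, -0.5, 2.0]   # sparse test signal
y = Phi @ x                                              # compressive samples
print(Phi.shape, y.shape)
```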

  6. Enhancement of cooperation in the spatial prisoner's dilemma with a coherence-resonance effect through annealed randomness at a cooperator-defector boundary; comparison of two variant models

    NASA Astrophysics Data System (ADS)

    Tanimoto, Jun

    2016-11-01

    Inspired by the commonly observed real-world fact that people tend to behave in a somewhat random manner when they face an interim equilibrium and try to break the stalemate in search of a higher output, we established two models of the spatial prisoner's dilemma. One presumes that an agent commits action errors, while the other assumes that an agent refers to a payoff matrix with added random noise instead of the original payoff matrix. A numerical simulation revealed that mechanisms based on the annealing of randomness due to either the action error or the payoff noise could significantly enhance the cooperation fraction. In this study, we explain the detailed enhancement mechanism behind the two models by referring to the concepts that we previously presented with respect to evolutionary dynamic processes under the names of enduring and expanding periods.

  7. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
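
    A minimal sketch of that procedure, under the assumption that "random re-ordering" means independently permuting the observations of k-1 of the variables while the criterion is the largest eigenvalue of the correlation matrix:

```python
import numpy as np

def eigenvalue_randomization_test(X, n_perm=2000, seed=0):
    """Randomization test of association among the columns of X.

    The statistic is the largest eigenvalue of the correlation matrix;
    the null distribution is built by independently re-ordering k-1 columns.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    observed = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[-1]
    exceed = 0
    for _ in range(n_perm):
        Xp = X.copy()
        for j in range(1, k):                     # leave column 0 fixed
            Xp[:, j] = Xp[rng.permutation(n), j]
        null_stat = np.linalg.eigvalsh(np.corrcoef(Xp, rowvar=False))[-1]
        exceed += null_stat >= observed
    return observed, (exceed + 1) / (n_perm + 1)

# Hypothetical correlated test scores for 40 examinees and 4 variables.
rng = np.random.default_rng(3)
base = rng.standard_normal(40)
scores = np.column_stack([base + 0.8 * rng.standard_normal(40) for _ in range(4)])
print(eigenvalue_randomization_test(scores, n_perm=500))
```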

  8. Distributed Matrix Completion: Application to Cooperative Positioning in Noisy Environments

    DTIC Science & Technology

    2013-12-11

    Excerpt: …positioning, and a gossip version of low-rank approximation were developed. A convex relaxation for positioning in the presence of noise was shown to … of a large data matrix through gossip algorithms. A new algorithm is proposed that amounts to iteratively multiplying a vector by independent random … sparsification of the original matrix and averaging the resulting normalized vectors. This can be viewed as a generalization of gossip algorithms for …

  9. Forecasting extinction risk with nonstationary matrix models.

    PubMed

    Gotelli, Nicholas J; Ellison, Aaron M

    2006-02-01

    Matrix population growth models are standard tools for forecasting population change and for managing rare species, but they are less useful for predicting extinction risk in the face of changing environmental conditions. Deterministic models provide point estimates of lambda, the finite rate of increase, as well as measures of matrix sensitivity and elasticity. Stationary matrix models can be used to estimate extinction risk in a variable environment, but they assume that the matrix elements are randomly sampled from a stationary (i.e., non-changing) distribution. Here we outline a method for using nonstationary matrix models to construct realistic forecasts of population fluctuation in changing environments. Our method requires three pieces of data: (1) field estimates of transition matrix elements, (2) experimental data on the demographic responses of populations to altered environmental conditions, and (3) forecasting data on environmental drivers. These three pieces of data are combined to generate a series of sequential transition matrices that emulate a pattern of long-term change in environmental drivers. Realistic estimates of population persistence and extinction risk can be derived from stochastic permutations of such a model. We illustrate the steps of this analysis with data from two populations of Sarracenia purpurea growing in northern New England. Sarracenia purpurea is a perennial carnivorous plant that is potentially at risk of local extinction because of increased nitrogen deposition. Long-term monitoring records or models of environmental change can be used to generate time series of driver variables under different scenarios of changing environments. Both manipulative and natural experiments can be used to construct a linking function that describes how matrix parameters change as a function of the environmental driver. This synthetic modeling approach provides quantitative estimates of extinction probability that have an explicit mechanistic basis.
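
    The three-ingredient recipe can be sketched compactly. The linking function, driver series, and matrix entries below are purely illustrative (they are not the Sarracenia purpurea estimates); the point is the structure: a driver time series is mapped to a sequence of transition matrices, and extinction risk is estimated by Monte Carlo projection.

```python
import numpy as np

def project_nonstationary(n0, driver_series, linking_fn, n_reps=1000, seed=0):
    """Monte Carlo extinction-risk forecast with time-varying transition matrices.

    For each year, linking_fn(driver) returns a stage-structured transition
    matrix; the population vector is projected through the resulting sequence.
    """
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(n_reps):
        n = np.array(n0, dtype=float)
        for d in driver_series:
            A = linking_fn(d + rng.normal(scale=0.05))   # driver uncertainty
            n = A @ n
        extinct += n.sum() < 1.0                         # quasi-extinction threshold
    return extinct / n_reps

def linking_fn(nitrogen):
    """Hypothetical link: higher nitrogen deposition lowers adult survival."""
    survival = np.clip(0.85 - 0.6 * nitrogen, 0.0, 1.0)
    return np.array([[0.0,      1.0],       # fecundity of adults
                     [0.3, survival]])      # juvenile-to-adult transition and survival

driver = np.linspace(0.1, 0.8, 50)          # 50-year increasing-driver scenario
print(project_nonstationary([20, 10], driver, linking_fn))
```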

  10. Guaifenesin stone matrix proteomics: a protocol for identifying proteins critical to stone formation.

    PubMed

    Kolbach-Mandel, A M; Mandel, N S; Cohen, S R; Kleinman, J G; Ahmed, F; Mandel, I C; Wesson, J A

    2017-04-01

    Drug-related kidney stones are a diagnostic problem, since they contain a large matrix (protein) fraction and are frequently incorrectly identified as matrix stones. A patient in a urine proteomics study produced a guaifenesin stone during her participation, allowing us both to correctly diagnose her disease and to identify proteins critical to this drug stone-forming process. The patient provided three random midday urine samples for proteomics studies, one of which contained stone-like sediment with two distinct fractions. These solids were characterized with optical microscopy and Fourier transform infrared spectroscopy. Immunoblotting and quantitative mass spectrometry were used to quantitatively identify the proteins in urine and stone matrix. Infrared spectroscopy showed that the sediment was 60% protein and 40% guaifenesin and its metabolite guaiacol. Of the 156 distinct proteins identified in the proteomic studies, 49 were identified in the two stone components, with approximately 50% of those proteins also found in this patient's urine. Many proteins observed in this drug-related stone have also been reported in proteomic matrix studies of uric acid and calcium-containing stones. More importantly, nine proteins were highly enriched and highly abundant in the stone matrix and eight were reciprocally depleted in urine, suggesting a critical role for these proteins in guaifenesin stone formation. Accurate stone analysis is critical to proper diagnosis and treatment of kidney stones. Many matrix proteins were common to all stone types, but likely not related to disease mechanism. This protocol defined a small set of proteins that were likely critical to guaifenesin stone formation based on their high enrichment and high abundance in stone matrix, and it should be applied to all stone types.

  11. A Large-scale Finite Element Model on Micromechanical Damage and Failure of Carbon Fiber/Epoxy Composites Including Thermal Residual Stress

    NASA Astrophysics Data System (ADS)

    Liu, P. F.; Li, X. K.

    2018-06-01

    The purpose of this paper is to study micromechanical progressive failure properties of carbon fiber/epoxy composites with thermal residual stress by finite element analysis (FEA). Composite microstructures with hexagonal fiber distribution are used for the representative volume element (RVE), where an initial fiber breakage is assumed. Fiber breakage with random fiber strength is predicted using Monte Carlo simulation, progressive matrix damage is predicted by proposing a continuum damage mechanics model and interface failure is simulated using Xu and Needleman's cohesive model. Temperature dependent thermal expansion coefficients for epoxy matrix are used. FEA by developing numerical codes using ANSYS finite element software is divided into two steps: 1. Thermal residual stresses due to mismatch between fiber and matrix are calculated; 2. Longitudinal tensile load is further exerted on the RVE to perform progressive failure analysis of carbon fiber/epoxy composites. Numerical convergence is solved by introducing the viscous damping effect properly. The extended Mori-Tanaka method that considers interface debonding is used to get homogenized mechanical responses of composites. Three main results by FEA are obtained: 1. the real-time matrix cracking, fiber breakage and interface debonding with increasing tensile strain is simulated. 2. the stress concentration coefficients on neighbouring fibers near the initial broken fiber and the axial fiber stress distribution along the broken fiber are predicted, compared with the results using the global and local load-sharing models based on the shear-lag theory. 3. the tensile strength of composite by FEA is compared with those by the shear-lag theory and experiments. Finally, the tensile stress-strain curve of composites by FEA is applied to the progressive failure analysis of composite pressure vessel.

  12. A Large-scale Finite Element Model on Micromechanical Damage and Failure of Carbon Fiber/Epoxy Composites Including Thermal Residual Stress

    NASA Astrophysics Data System (ADS)

    Liu, P. F.; Li, X. K.

    2017-09-01

    The purpose of this paper is to study micromechanical progressive failure properties of carbon fiber/epoxy composites with thermal residual stress by finite element analysis (FEA). Composite microstructures with hexagonal fiber distribution are used for the representative volume element (RVE), where an initial fiber breakage is assumed. Fiber breakage with random fiber strength is predicted using Monte Carlo simulation, progressive matrix damage is predicted by proposing a continuum damage mechanics model and interface failure is simulated using Xu and Needleman's cohesive model. Temperature dependent thermal expansion coefficients for epoxy matrix are used. FEA by developing numerical codes using ANSYS finite element software is divided into two steps: 1. Thermal residual stresses due to mismatch between fiber and matrix are calculated; 2. Longitudinal tensile load is further exerted on the RVE to perform progressive failure analysis of carbon fiber/epoxy composites. Numerical convergence is solved by introducing the viscous damping effect properly. The extended Mori-Tanaka method that considers interface debonding is used to get homogenized mechanical responses of composites. Three main results by FEA are obtained: 1. the real-time matrix cracking, fiber breakage and interface debonding with increasing tensile strain is simulated. 2. the stress concentration coefficients on neighbouring fibers near the initial broken fiber and the axial fiber stress distribution along the broken fiber are predicted, compared with the results using the global and local load-sharing models based on the shear-lag theory. 3. the tensile strength of composite by FEA is compared with those by the shear-lag theory and experiments. Finally, the tensile stress-strain curve of composites by FEA is applied to the progressive failure analysis of composite pressure vessel.

  13. Detection Performance of Horizontal Linear Hydrophone Arrays in Shallow Water.

    DTIC Science & Technology

    1980-12-15

    Excerpt (symbol glossary and equation fragments): … random phase; G, gain; … angle interval; … covariance matrix; h, processor vector; H, matrix matched filter (generalized beamformer); I, unity matrix … The gain of an omnidirectional sensor is G = (h*Ph)/(h*Qh) [Eq. 47]. The following two sections evaluate a few examples of application of the OLP. … At broadside the signal covariance matrix reduces to a dyadic, P ∝ s s*; therefore, the gain (e.g. Eq. 37) becomes a ratio involving tr(H*PH), Pn, and Q^-1 …

  14. The wasteland of random supergravities

    NASA Astrophysics Data System (ADS)

    Marsh, David; McAllister, Liam; Wrase, Timm

    2012-03-01

    We show that in a general 𝒩 = 1 supergravity with N ≫ 1 scalar fields, an exponentially small fraction of the de Sitter critical points are metastable vacua. Taking the superpotential and Kähler potential to be random functions, we construct a random matrix model for the Hessian matrix, which is well-approximated by the sum of a Wigner matrix and two Wishart matrices. We compute the eigenvalue spectrum analytically from the free convolution of the constituent spectra and find that in typical configurations, a significant fraction of the eigenvalues are negative. Building on the Tracy-Widom law governing fluctuations of extreme eigenvalues, we determine the probability P of a large fluctuation in which all the eigenvalues become positive. Strong eigenvalue repulsion makes this extremely unlikely: we find P ∝ exp(-cN^p), with c and p constants. For generic critical points we find p ≈ 1.5, while for approximately supersymmetric critical points, p ≈ 1.3. Our results have significant implications for the counting of de Sitter vacua in string theory, but the number of vacua remains vast.
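
    A hedged numerical sketch of the random matrix model described above: the Hessian is approximated as a Wigner matrix plus two Wishart matrices, and one inspects the fraction of negative eigenvalues in typical draws. The normalizations and relative weights below are illustrative and are not tuned to reproduce the paper's quantitative exponents.

```python
import numpy as np

def random_hessian(N, rng):
    """Wigner + two Wishart matrices, as a toy model of the supergravity Hessian."""
    G = rng.standard_normal((N, N))
    wigner = (G + G.T) / np.sqrt(2 * N)
    A = rng.standard_normal((N, N)) / np.sqrt(N)
    B = rng.standard_normal((N, N)) / np.sqrt(N)
    return wigner + A @ A.T + B @ B.T

rng = np.random.default_rng(4)
N = 200
negative_fractions = []
for _ in range(20):
    evals = np.linalg.eigvalsh(random_hessian(N, rng))
    negative_fractions.append(np.mean(evals < 0))

# Fraction of negative eigenvalues per draw under this illustrative normalization;
# fully positive spectra (metastable vacua) require a rare large fluctuation.
print(np.mean(negative_fractions))
```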

  15. Fast Geostatistical Inversion using Randomized Matrix Decompositions and Sketchings for Heterogeneous Aquifer Characterization

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Le, E. B.; Vesselinov, V. V.

    2015-12-01

    We present a fast, scalable, and highly implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast-Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iterates for a large number of unknown model parameters, and provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
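
    One building block of such randomized methods is a matrix-free low-rank decomposition. The sketch below is a generic randomized SVD written in Python (the actual implementation is in Julia within MADS); the operator is touched only through matrix-vector products, which is what makes the approach attractive for black-box forward models.

```python
import numpy as np

def randomized_svd(apply_A, apply_At, n_cols, rank, oversample=10, seed=0):
    """Matrix-free randomized SVD: A enters only through matvec closures."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n_cols, rank + oversample))
    Y = apply_A(Omega)                    # sample the range of A
    Q, _ = np.linalg.qr(Y)
    B = apply_At(Q).T                     # B = Q.T @ A, formed via A.T @ Q
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], s[:rank], Vt[:rank]

# Hypothetical dense test operator standing in for a sensitivity matrix.
rng = np.random.default_rng(5)
A = rng.standard_normal((300, 40)) @ rng.standard_normal((40, 500))
U, s, Vt = randomized_svd(lambda X: A @ X, lambda X: A.T @ X, A.shape[1], rank=40)
print(np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))
```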

  16. Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density

    DOE PAGES

    Smallwood, David O.

    1997-01-01

    The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple-input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and the kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency-domain description of the random process. The general case of matching a target probability density function using a zero-memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
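
    As a minimal illustration of the ZMNL idea for a single channel (not the paper's multi-input, cross-spectral algorithm), the sketch below first synthesizes a Gaussian time history with a prescribed power spectrum via random FFT phases and then pushes the samples through a rank-based zero-memory nonlinear map to a target marginal distribution. The PSD and target distribution are illustrative assumptions.

```python
import numpy as np

def gaussian_series_with_psd(psd, seed=0):
    """Synthesize a real Gaussian-like time history whose PSD follows `psd`
    (one-sided, length n//2 + 1) by assigning random phases in the FFT domain."""
    rng = np.random.default_rng(seed)
    n = 2 * (len(psd) - 1)
    amp = np.sqrt(psd)
    phase = rng.uniform(0, 2 * np.pi, len(psd))
    spectrum = amp * np.exp(1j * phase)
    spectrum[0] = spectrum[-1] = 0.0         # zero DC and Nyquist for simplicity
    return np.fft.irfft(spectrum, n)

def zmnl_transform(x, target_inverse_cdf):
    """Zero-memory nonlinear map: push each sample through the target
    inverse CDF at its empirical rank, preserving the sample ordering."""
    ranks = np.argsort(np.argsort(x))
    u = (ranks + 0.5) / len(x)
    return target_inverse_cdf(u)

psd = 1.0 / (1.0 + np.linspace(0, 10, 513) ** 2)     # illustrative low-pass PSD
x = gaussian_series_with_psd(psd)
y = zmnl_transform(x, lambda u: -np.log(1 - u))      # exponential marginal
print(x.std(), y.mean(), (y < 0).any())
```

    Note that the rank-based mapping distorts the spectrum somewhat; in practice this kind of transform is applied iteratively to balance spectral and distributional targets.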

  17. Analysis of network clustering behavior of the Chinese stock market

    NASA Astrophysics Data System (ADS)

    Chen, Huan; Mai, Yong; Li, Sai-Ping

    2014-11-01

    Random Matrix Theory (RMT) and the decomposition of correlation matrix method are employed to analyze spatial structure of stocks interactions and collective behavior in the Shanghai and Shenzhen stock markets in China. The result shows that there exists prominent sector structures, with subsectors including the Real Estate (RE), Commercial Banks (CB), Pharmaceuticals (PH), Distillers&Vintners (DV) and Steel (ST) industries. Furthermore, the RE and CB subsectors are mostly anti-correlated. We further study the temporal behavior of the dataset and find that while the sector structures are relatively stable from 2007 through 2013, the correlation between the real estate and commercial bank stocks shows large variations. By employing the ensemble empirical mode decomposition (EEMD) method, we show that this anti-correlation behavior is closely related to the monetary and austerity policies of the Chinese government during the period of study.

  18. Characterizations of matrix and operator-valued Φ-entropies, and operator Efron-Stein inequalities.

    PubMed

    Cheng, Hao-Chung; Hsieh, Min-Hsiu

    2016-03-01

    We derive new characterizations of the matrix Φ-entropy functionals introduced in Chen & Tropp (Chen, Tropp 2014 Electron. J. Prob. 19 , 1-30. (doi:10.1214/ejp.v19-2964)). These characterizations help us to better understand the properties of matrix Φ-entropies, and are a powerful tool for establishing matrix concentration inequalities for random matrices. Then, we propose an operator-valued generalization of matrix Φ-entropy functionals, and prove the subadditivity under Löwner partial ordering. Our results demonstrate that the subadditivity of operator-valued Φ-entropies is equivalent to the convexity. As an application, we derive the operator Efron-Stein inequality.

  19. PCEMCAN - Probabilistic Ceramic Matrix Composites Analyzer: User's Guide, Version 1.0

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Mital, Subodh K.; Murthy, Pappu L. N.

    1998-01-01

    PCEMCAN (Probabilistic CEramic Matrix Composites ANalyzer) is an integrated computer code developed at NASA Lewis Research Center that simulates uncertainties associated with the constituent properties, manufacturing process, and geometric parameters of fiber-reinforced ceramic matrix composites and quantifies their random thermomechanical behavior. The PCEMCAN code can perform deterministic as well as probabilistic analyses to predict thermomechanical properties. This user's guide details the step-by-step procedure to create the input file and update/modify the material properties database required to run the PCEMCAN computer code. An overview of the geometric conventions, micromechanical unit cell, nonlinear constitutive relationship, and probabilistic simulation methodology is also provided in the manual. Fast probability integration as well as Monte Carlo simulation methods are available for the uncertainty simulation. Various options available in the code to simulate probabilistic material properties and quantify the sensitivity of the primitive random variables are described. Deterministic as well as probabilistic results are illustrated using demonstration problems. For a detailed theoretical description of the deterministic and probabilistic analyses, the user is referred to the companion documents "Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composite Behavior," NASA TP-3602, 1996, and "Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites," NASA TM 4766, June 1997.

  20. Complex network analysis of conventional and Islamic stock market in Indonesia

    NASA Astrophysics Data System (ADS)

    Rahmadhani, Andri; Purqon, Acep; Kim, Sehyun; Kim, Soo Yong

    2015-09-01

    The rising popularity of Islamic financial products in Indonesia has become an interesting new topic for analysis. We introduce a complex network analysis to compare the conventional and Islamic stock markets in Indonesia, and add Random Matrix Theory (RMT) as a complementary reference to expand the analysis of the results. Both are based on the cross-correlation matrix of logarithmic price returns. Closing price data, taken from June 2011 to July 2012, are used to construct the logarithmic price returns. We also introduce a threshold value using a winner-take-all approach to obtain the scale-free property of the network: nodes whose cross-correlation coefficient falls below the threshold are not connected by an edge. As a result, we obtain 0.5 as the threshold value for all of the stock markets. From the RMT analysis, we find only a market-wide effect in both stock markets and no clustering effect. From the network analysis, both stock market networks are dominated by the mining sector. The length of the closing-price time series must be extended to obtain more valuable results, and possibly to reveal different behaviors of the system.

  1. Use of Matrix Sampling Procedures to Assess Achievement in Solving Open Addition and Subtraction Sentences.

    ERIC Educational Resources Information Center

    Montague, Margariete A.

    This study investigated the feasibility of concurrently and randomly sampling examinees and items in order to estimate group achievement. Seven 32-item tests reflecting a 640-item universe of simple open sentences were used such that item selection (random, systematic) and assignment (random, systematic) of items (four, eight, sixteen) to forms…

  2. Individual complex Dirac eigenvalue distributions from random matrix theory and comparison to quenched lattice QCD with a quark chemical potential.

    PubMed

    Akemann, G; Bloch, J; Shifrin, L; Wettig, T

    2008-01-25

    We analyze how individual eigenvalues of the QCD Dirac operator at nonzero quark chemical potential are distributed in the complex plane. Exact and approximate analytical results for both quenched and unquenched distributions are derived from non-Hermitian random matrix theory. When comparing these to quenched lattice QCD spectra close to the origin, excellent agreement is found for zero and nonzero topology at several values of the quark chemical potential. Our analytical results are also applicable to other physical systems in the same symmetry class.

  3. Fidelity under isospectral perturbations: a random matrix study

    NASA Astrophysics Data System (ADS)

    Leyvraz, F.; García, A.; Kohler, H.; Seligman, T. H.

    2013-07-01

    The set of Hamiltonians generated by all unitary transformations from a single Hamiltonian is the largest set of isospectral Hamiltonians we can form. Taking advantage of the fact that the unitary group can be generated from Hermitian matrices we can take the ones generated by the Gaussian unitary ensemble with a small parameter as small perturbations. Similarly, the transformations generated by Hermitian antisymmetric matrices from orthogonal matrices form isospectral transformations among symmetric matrices. Based on this concept we can obtain the fidelity decay of a system that decays under a random isospectral perturbation with well-defined properties regarding time-reversal invariance. If we choose the Hamiltonian itself also from a classical random matrix ensemble, then we obtain solutions in terms of form factors in the limit of large matrices.

  4. Sector Identification in a Set of Stock Return Time Series Traded at the London Stock Exchange

    NASA Astrophysics Data System (ADS)

    Coronnello, C.; Tumminello, M.; Lillo, F.; Micciche, S.; Mantegna, R. N.

    2005-09-01

    We compare some methods recently used in the literature to detect the existence of a certain degree of common behavior of stock returns belonging to the same economic sector. Specifically, we discuss methods based on random matrix theory and hierarchical clustering techniques. We apply these methods to a portfolio of stocks traded at the London Stock Exchange. The investigated time series are recorded both at a daily time horizon and at a 5-minute time horizon. The correlation coefficient matrix is very different at different time horizons confirming that more structured correlation coefficient matrices are observed for long time horizons. All the considered methods are able to detect economic information and the presence of clusters characterized by the economic sector of stocks. However, different methods present a different degree of sensitivity with respect to different sectors. Our comparative analysis suggests that the application of just a single method could not be able to extract all the economic information present in the correlation coefficient matrix of a stock portfolio.

  5. Matrix photochemical study and conformational analysis of CH3C(O)NCS and CF3C(O)NCS.

    PubMed

    Ramos, Luis A; Ulic, Sonia E; Romano, Rosana M; Beckers, Helmut; Willner, Helge; Della Védova, Carlos O

    2014-01-30

    The vapors of acetyl isothiocyanate, CH3C(O)NCS, and trifluoroacetyl isothiocyanate, CF3C(O)NCS, were isolated in solid Ar at 15 K. The existence of rotational isomerism was confirmed when the matrixes were irradiated with broad-band UV-vis light (200 ≤ λ ≤ 800 nm) and also by temperature-dependent Ar-matrix IR spectroscopy. The initial spectra showed that the vapors of CH3C(O)NCS and CF3C(O)NCS consist of two conformers, syn-syn and syn-anti (with the C═O bond syn with respect to the C-H or C-F bond and syn or anti with respect to the N═C double bond). When CH3C(O)NCS is irradiated, H2CCO and HSCN are produced simultaneously with the randomization process. In the case of the photolysis of CF3C(O)NCS, the main products are CF3NCS and CO. The assignment of the IR bands to the different photoproducts was made on the basis of the usual criteria, taking into account antecedents reported in the literature.

  6. Stochastic process approximation for recursive estimation with guaranteed bound on the error covariance

    NASA Technical Reports Server (NTRS)

    Menga, G.

    1975-01-01

    An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, within one of those classes, a measure of the approximation between the model and the process, evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.

  7. Pixel electronic noise as a function of position in an active matrix flat panel imaging array

    NASA Astrophysics Data System (ADS)

    Yazdandoost, Mohammad Y.; Wu, Dali; Karim, Karim S.

    2010-04-01

    We present an analysis of output referred pixel electronic noise as a function of position in the active matrix array for both active and passive pixel architectures. Three different noise sources for Active Pixel Sensor (APS) arrays are considered: readout period noise, reset period noise and leakage current noise of the reset TFT during readout. For the state-of-the-art Passive Pixel Sensor (PPS) array, the readout noise of the TFT switch is considered. Measured noise results are obtained by modeling the array connections with RC ladders on a small in-house fabricated prototype. The results indicate that the pixels in the rows located in the middle part of the array have less random electronic noise at the output of the off-panel charge amplifier compared to the ones in rows at the two edges of the array. These results can help optimize for clearer images as well as help define the region-of-interest with the best signal-to-noise ratio in an active matrix digital flat panel imaging array.

  8. The number of measurements needed to obtain high reliability for traits related to enzymatic activities and photosynthetic compounds in soybean plants infected with Phakopsora pachyrhizi.

    PubMed

    Oliveira, Tássia Boeno de; Azevedo Peixoto, Leonardo de; Teodoro, Paulo Eduardo; Alvarenga, Amauri Alves de; Bhering, Leonardo Lopes; Campo, Clara Beatriz Hoffmann

    2018-01-01

    Asian rust affects the physiology of soybean plants and causes losses in yield. Repeatability coefficients may help breeders to know how many measurements are needed to obtain a suitable reliability for a target trait. Therefore, the objectives of this study were to determine the repeatability coefficients of 14 traits in soybean plants inoculated with Phakopsora pachyrhizi and to establish the minimum number of measurements needed to predict the breeding value with high accuracy. Experiments were performed in a 3x2 factorial arrangement with three treatments and two inoculations in a random block design. Repeatability coefficients, coefficients of determination and number of measurements needed to obtain a certain reliability were estimated using ANOVA, principal component analysis based on the covariance matrix and the correlation matrix, structural analysis and mixed model. It was observed that the principal component analysis based on the covariance matrix out-performed other methods for almost all traits. Significant differences were observed for all traits except internal CO2 concentration for the treatment effects. For the measurement effects, all traits were significantly different. In addition, significant differences were found for all Treatment x Measurement interaction traits except coumestrol, chitinase and chlorophyll content. Six measurements were suitable to obtain a coefficient of determination higher than 0.7 for all traits based on principal component analysis. The information obtained from this research will help breeders and physiologists determine exactly how many measurements are needed to evaluate each trait in soybean plants infected by P. pachyrhizi with a desirable reliability.

  9. The number of measurements needed to obtain high reliability for traits related to enzymatic activities and photosynthetic compounds in soybean plants infected with Phakopsora pachyrhizi

    PubMed Central

    de Oliveira, Tássia Boeno; Teodoro, Paulo Eduardo; de Alvarenga, Amauri Alves; Bhering, Leonardo Lopes; Campo, Clara Beatriz Hoffmann

    2018-01-01

    Asian rust affects the physiology of soybean plants and causes losses in yield. Repeatability coefficients may help breeders to know how many measurements are needed to obtain a suitable reliability for a target trait. Therefore, the objectives of this study were to determine the repeatability coefficients of 14 traits in soybean plants inoculated with Phakopsora pachyrhizi and to establish the minimum number of measurements needed to predict the breeding value with high accuracy. Experiments were performed in a 3x2 factorial arrangement with three treatments and two inoculations in a random block design. Repeatability coefficients, coefficients of determination and number of measurements needed to obtain a certain reliability were estimated using ANOVA, principal component analysis based on the covariance matrix and the correlation matrix, structural analysis and mixed model. It was observed that the principal component analysis based on the covariance matrix out-performed other methods for almost all traits. Significant differences were observed for all traits except internal CO2 concentration for the treatment effects. For the measurement effects, all traits were significantly different. In addition, significant differences were found for all Treatment x Measurement interaction traits except coumestrol, chitinase and chlorophyll content. Six measurements were suitable to obtain a coefficient of determination higher than 0.7 for all traits based on principal component analysis. The information obtained from this research will help breeders and physiologists determine exactly how many measurements are needed to evaluate each trait in soybean plants infected by P. pachyrhizi with a desirable reliability. PMID:29438380

  10. Effects of imputation on correlation: implications for analysis of mass spectrometry data from multiple biological matrices.

    PubMed

    Taylor, Sandra L; Ruhaak, L Renee; Kelly, Karen; Weiss, Robert H; Kim, Kyoungmi

    2017-03-01

    With expanded access to, and decreased costs of, mass spectrometry, investigators are collecting and analyzing multiple biological matrices from the same subject such as serum, plasma, tissue and urine to enhance biomarker discoveries, understanding of disease processes and identification of therapeutic targets. Commonly, each biological matrix is analyzed separately, but multivariate methods such as MANOVAs that combine information from multiple biological matrices are potentially more powerful. However, mass spectrometric data typically contain large amounts of missing values, and imputation is often used to create complete data sets for analysis. The effects of imputation on multiple biological matrix analyses have not been studied. We investigated the effects of seven imputation methods (half minimum substitution, mean substitution, k-nearest neighbors, local least squares regression, Bayesian principal components analysis, singular value decomposition and random forest), on the within-subject correlation of compounds between biological matrices and its consequences on MANOVA results. Through analysis of three real omics data sets and simulation studies, we found the amount of missing data and imputation method to substantially change the between-matrix correlation structure. The magnitude of the correlations was generally reduced in imputed data sets, and this effect increased with the amount of missing data. Significant results from MANOVA testing also were substantially affected. In particular, the number of false positives increased with the level of missing data for all imputation methods. No one imputation method was universally the best, but the simple substitution methods (Half Minimum and Mean) consistently performed poorly. © The Author 2016. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  11. A semiparametric Bayesian proportional hazards model for interval censored data with frailty effects.

    PubMed

    Henschel, Volkmar; Engel, Jutta; Hölzel, Dieter; Mansmann, Ulrich

    2009-02-10

    Multivariate analysis of interval-censored event data based on classical likelihood methods is notoriously cumbersome, and likelihood inference for models which additionally include random effects is not available at all. Existing algorithms pose problems for practical users, such as matrix inversion, slow convergence, and no assessment of statistical uncertainty. MCMC procedures combined with imputation are used to implement hierarchical models for interval-censored data within a Bayesian framework. Two examples from clinical practice demonstrate the handling of clustered interval-censored event times as well as multilayer random effects for inter-institutional quality assessment. The software developed is called survBayes and is freely available at CRAN. The proposed software supports the solution of complex analyses in many fields of clinical epidemiology as well as health services research.

  12. Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks

    NASA Astrophysics Data System (ADS)

    Frahm, Klaus M.; Shepelyansky, Dima L.

    2014-04-01

    We use the methods of quantum chaos and Random Matrix Theory for analysis of statistical fluctuations of PageRank probabilities in directed networks. In this approach the effective energy levels are given by a logarithm of PageRank probability at a given node. After the standard energy level unfolding procedure we establish that the nearest spacing distribution of PageRank probabilities is described by the Poisson law typical for integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French and German. We argue that due to absence of level repulsion the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that the nearby PageRank probabilities fluctuate as random independent variables.
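
    A rough sketch of this kind of level-statistics analysis on a generic ranked probability vector: take minus the logarithm of the probabilities as energy levels, unfold by a running mean spacing, and compare the nearest-neighbor spacing histogram with the Poisson density exp(-s). The unfolding here is a crude local-mean version, for illustration only, and the input is a hypothetical power-law probability vector rather than an actual PageRank computation.

```python
import numpy as np

def unfolded_spacings(probabilities, window=20):
    """Treat -log(p) of ranked probabilities as energy levels, unfold by a
    running-mean spacing, and return the normalized nearest-neighbor spacings."""
    energies = np.sort(-np.log(np.sort(probabilities)[::-1]))
    spacings = np.diff(energies)
    local_mean = np.convolve(spacings, np.ones(window) / window, mode='same')
    return spacings / local_mean

# Hypothetical power-law-distributed probability vector standing in for PageRank.
rng = np.random.default_rng(6)
p = rng.pareto(1.5, size=5000) + 1.0
p /= p.sum()
s = unfolded_spacings(p)
hist, edges = np.histogram(s, bins=30, range=(0, 4), density=True)
print(hist[:5], np.exp(-edges[:5]))   # compare with the Poisson density exp(-s)
```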

  13. Modeling cometary photopolarimetric characteristics with Sh-matrix method

    NASA Astrophysics Data System (ADS)

    Kolokolova, L.; Petrov, D.

    2017-12-01

    Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very demanding in computer time and memory. We are presenting a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method is based on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it had been found that the shape-dependent factors could be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique to simulate light scattering by particles of complex shape and surface structure. In this paper, we present cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). By changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles, from spheres to particles of random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulting from light scattering by such particles, studying how they depend on the particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to the cometary observations.

  14. Fingerprint recognition of alien invasive weeds based on the texture character and machine learning

    NASA Astrophysics Data System (ADS)

    Yu, Jia-Jia; Li, Xiao-Li; He, Yong; Xu, Zheng-Hao

    2008-11-01

    A multi-spectral imaging technique based on texture analysis and machine learning is proposed to discriminate alien invasive weeds with similar outlines but different categories. The objectives of this study were to investigate the feasibility of using multi-spectral imaging, especially the near-infrared (NIR) channel (800 nm +/- 10 nm), to find the weeds' fingerprints, and to validate the performance with specific eigenvalues from the co-occurrence matrix. Veronica polita Fries, Veronica persica Poir., longtube ground ivy, and Lamium amplexicaule Linn. were selected in this study; they have different effects in the field and are alien invasive species in China. 307 weed-leaf images were randomly selected for the calibration set, and the remaining 207 samples formed the prediction set. All images were pretreated with a Wallis filter to correct for noise from uneven lighting. The gray-level co-occurrence matrix was applied to extract texture characteristics, which describe the density, randomness, correlation, contrast, and homogeneity of the texture with different algorithms. Three channels (green, 550 nm +/- 10 nm; red, 650 nm +/- 10 nm; and NIR, 800 nm +/- 10 nm) were calculated separately to obtain the eigenvalues. Least-squares support vector machines (LS-SVM) were applied to discriminate the categories of weeds from the co-occurrence-matrix eigenvalues. Finally, a recognition ratio of 83.35% was obtained for the NIR channel, better than the results for the green channel (76.67%) and the red channel (69.46%). The prediction results of 81.35% indicated that the selected eigenvalues reflect the main characteristics of the weeds' fingerprints based on multi-spectral imaging (especially the NIR channel) and the LS-SVM model.
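
    A compact numpy sketch of the gray-level co-occurrence features involved (contrast, homogeneity, energy, correlation); the per-channel weighting and the LS-SVM classifier used in the paper are outside the scope of this illustration, and all parameter choices (offset, gray levels) are assumptions.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=16):
    """Gray-level co-occurrence matrix for one pixel offset, normalized."""
    img = np.floor(image.astype(float) / image.max() * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dy):
        for c in range(cols - dx):
            P[img[r, c], img[r + dy, c + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """Contrast, homogeneity, energy, and correlation from a normalized GLCM."""
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    return {
        "contrast": ((i - j) ** 2 * P).sum(),
        "homogeneity": (P / (1.0 + np.abs(i - j))).sum(),
        "energy": (P ** 2).sum(),
        "correlation": (((i - mu_i) * (j - mu_j) * P).sum()) / (sd_i * sd_j),
    }

# Hypothetical 8-bit NIR-channel leaf patch.
rng = np.random.default_rng(7)
patch = rng.integers(0, 256, size=(64, 64))
print(glcm_features(glcm(patch)))
```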

  15. Eigenvalues of Random Matrices with Isotropic Gaussian Noise and the Design of Diffusion Tensor Imaging Experiments.

    PubMed

    Gasbarra, Dario; Pajevic, Sinisa; Basser, Peter J

    2017-01-01

    Tensor-valued and matrix-valued measurements of different physical properties are increasingly available in material sciences and medical imaging applications. The eigenvalues and eigenvectors of such multivariate data provide novel and unique information, but at the cost of requiring a more complex statistical analysis. In this work we derive the distributions of eigenvalues and eigenvectors in the special but important case of m×m symmetric random matrices, D , observed with isotropic matrix-variate Gaussian noise. The properties of these distributions depend strongly on the symmetries of the mean tensor/matrix, D̄ . When D̄ has repeated eigenvalues, the eigenvalues of D are not asymptotically Gaussian, and repulsion is observed between the eigenvalues corresponding to the same D̄ eigenspaces. We apply these results to diffusion tensor imaging (DTI), with m = 3, addressing an important problem of detecting the symmetries of the diffusion tensor, and seeking an experimental design that could potentially yield an isotropic Gaussian distribution. In the 3-dimensional case, when the mean tensor is spherically symmetric and the noise is Gaussian and isotropic, the asymptotic distribution of the first three eigenvalue central moment statistics is simple and can be used to test for isotropy. In order to apply such tests, we use quadrature rules of order t ≥ 4 with constant weights on the unit sphere to design a DTI-experiment with the property that isotropy of the underlying true tensor implies isotropy of the Fisher information. We also explain the potential implications of the methods using simulated DTI data with a Rician noise model.
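
    A small simulation makes the repulsion effect visible: add isotropic Gaussian noise to a symmetric mean matrix and compare the ordered sample eigenvalues for a spherically symmetric mean (repeated eigenvalues) against a mean with distinct eigenvalues. The noise normalization and magnitudes below are illustrative assumptions, not the paper's DTI parameters.

```python
import numpy as np

def noisy_symmetric_eigs(D_bar, sigma, n_draws=5000, seed=0):
    """Ordered eigenvalues of D = D_bar + E, with E symmetric Gaussian noise."""
    rng = np.random.default_rng(seed)
    m = D_bar.shape[0]
    eigs = np.empty((n_draws, m))
    for k in range(n_draws):
        G = rng.standard_normal((m, m)) * sigma
        E = (G + G.T) / np.sqrt(2.0)          # rotation-invariant symmetric noise
        eigs[k] = np.linalg.eigvalsh(D_bar + E)
    return eigs

# Spherically symmetric mean tensor: repeated eigenvalues induce repulsion,
# so the smallest ordered eigenvalue is biased low and the largest biased high.
iso = noisy_symmetric_eigs(np.eye(3), sigma=0.05)
print(iso.mean(axis=0))
# Distinct mean eigenvalues: the ordered eigenvalues stay close to the truth.
aniso = noisy_symmetric_eigs(np.diag([1.0, 2.0, 3.0]), sigma=0.05)
print(aniso.mean(axis=0))
```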

  16. Eigenvalues of Random Matrices with Isotropic Gaussian Noise and the Design of Diffusion Tensor Imaging Experiments*

    PubMed Central

    Gasbarra, Dario; Pajevic, Sinisa; Basser, Peter J.

    2017-01-01

    Tensor-valued and matrix-valued measurements of different physical properties are increasingly available in material sciences and medical imaging applications. The eigenvalues and eigenvectors of such multivariate data provide novel and unique information, but at the cost of requiring a more complex statistical analysis. In this work we derive the distributions of eigenvalues and eigenvectors in the special but important case of m×m symmetric random matrices, D, observed with isotropic matrix-variate Gaussian noise. The properties of these distributions depend strongly on the symmetries of the mean tensor/matrix, D̄. When D̄ has repeated eigenvalues, the eigenvalues of D are not asymptotically Gaussian, and repulsion is observed between the eigenvalues corresponding to the same D̄ eigenspaces. We apply these results to diffusion tensor imaging (DTI), with m = 3, addressing an important problem of detecting the symmetries of the diffusion tensor, and seeking an experimental design that could potentially yield an isotropic Gaussian distribution. In the 3-dimensional case, when the mean tensor is spherically symmetric and the noise is Gaussian and isotropic, the asymptotic distribution of the first three eigenvalue central moment statistics is simple and can be used to test for isotropy. In order to apply such tests, we use quadrature rules of order t ≥ 4 with constant weights on the unit sphere to design a DTI-experiment with the property that isotropy of the underlying true tensor implies isotropy of the Fisher information. We also explain the potential implications of the methods using simulated DTI data with a Rician noise model. PMID:28989561

  17. Genetic diversity study of Chromobacterium violaceum isolated from Kolli Hills by amplified ribosomal DNA restriction analysis (ARDRA) and random amplified polymorphic DNA (RAPD).

    PubMed

    Ponnusamy, K; Jose, S; Savarimuthu, I; Michael, G P; Redenbach, M

    2011-09-01

    Chromobacterium are saprophytes that cause highly fatal opportunistic infections. Identification and strain differentiation were performed to identify the strain variability among the environmental samples. We have evaluated the suitability of individual and combined methods to detect the strain variations of the samples collected in different seasons. Amplified ribosomal DNA restriction analysis (ARDRA) and random amplified polymorphic DNA (RAPD) profiles were obtained using four different restriction enzyme digestions (AluI, HaeIII, MspI and RsaI) and five random primers. A matrix of dice similarity coefficients was calculated and used to compare these restriction patterns. ARDRA showed rapid differentiation of strains based on 16S rDNA, but the combined RAPD and ARDRA gave a more reliable differentiation than when either of them was analysed individually. A high level of genetic diversity was observed, which indicates that the Kolli Hills' C. violaceum isolates would fall into at least three new clusters. Results showed a noteworthy bacterial variation and genetic diversity of C. violaceum in the unexplored, virgin forest area. © 2011 The Authors. Letters in Applied Microbiology © 2011 The Society for Applied Microbiology.

  18. A scattering database of marine particles and its application in optical analysis

    NASA Astrophysics Data System (ADS)

    Xu, G.; Yang, P.; Kattawar, G.; Zhang, X.

    2016-12-01

    In modeling the scattering properties of marine particles (e.g., phytoplankton), laboratory studies imply a need to properly account for the influence of particle morphology, in addition to size and composition. In this study, a marine-particle scattering database is constructed using a collection of distorted hexahedral shapes. Specifically, the scattering properties for each size bin and refractive index are obtained by an ensemble average over distorted hexahedra with randomly tilted facets and selected aspect ratios (from elongated to flattened). The degree of randomness in the shape-generation process defines the geometric irregularity of the particles in the group. The geometric irregularity and the particle aspect ratios constitute a set of "shape factors" to be accounted for (e.g., in best-fit analysis). To cover most of the marine-particle size range, we combine the Invariant Imbedding T-matrix (II-TM) method and the Physical-Geometric Optics Hybrid (PGOH) method in the calculations. The simulated optical properties are shown and compared with those obtained from Lorenz-Mie theory. Using the scattering database, we present a preliminary optical analysis of laboratory-measured optical properties of marine particles.

  19. A minimum drives automatic target definition procedure for multi-axis random control testing

    NASA Astrophysics Data System (ADS)

    Musella, Umberto; D'Elia, Giacomo; Carrella, Alex; Peeters, Bart; Mucchi, Emiliano; Marulo, Francesco; Guillaume, Patrick

    2018-07-01

    Multiple-Input Multiple-Output (MIMO) vibration control tests are able to closely replicate, via shaker excitation, the vibration environment that a structure needs to withstand during its operational life. This feature is fundamental for accurately verifying the experienced stress state, and ultimately the fatigue life, of the tested structure. In the case of MIMO random tests, the control target is a full reference Spectral Density Matrix in the frequency band of interest. The diagonal terms are the Power Spectral Densities (PSDs), representative of the acceleration operational levels, and the off-diagonal terms are the Cross Spectral Densities (CSDs). The specifications of random vibration tests are, however, often given in terms of PSDs only, a legacy of single-axis testing, and information about the CSDs is often missing. An accurate definition of the CSD profiles can further enhance MIMO random testing practice, as these terms influence both the responses and the shaker voltages (the so-called drives). The challenge lies in the algebraic constraint that the full reference matrix must be positive semi-definite in the entire bandwidth, with no flexibility in modifying the given PSDs. This paper proposes a newly developed method that automatically provides the full reference matrix without modifying the PSDs, which are treated as test specifications. The innovative feature is the capability of minimizing the drives required to match the reference PSDs while directly guaranteeing that the obtained full matrix is positive semi-definite. The drive minimization aims, on one hand, to reach the fixed test specifications without stressing the delicate excitation system; on the other hand, it potentially allows the test levels to be further increased. The detailed analytic derivation and implementation steps of the proposed method are followed by real-life testing considering different scenarios.
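
    The sketch below illustrates, at a single frequency line, how a full reference spectral density matrix can be assembled from specified PSDs plus an assumed coherence and phase for the CSDs, and how positive semi-definiteness can be checked; it does not reproduce the paper's drive-minimization algorithm, and the numbers are placeholders.

```python
import numpy as np

def reference_sdm(psd, coherence, phase):
    """Full reference spectral density matrix at one frequency line.

    psd       : (n,) specified Power Spectral Densities (the test specification)
    coherence : (n, n) ordinary coherence between channel pairs, values in [0, 1]
    phase     : (n, n) antisymmetric phase matrix in radians
    CSD_ij = sqrt(coh_ij * PSD_i * PSD_j) * exp(1j * phase_ij)
    """
    S = np.sqrt(coherence * np.outer(psd, psd)) * np.exp(1j * phase)
    np.fill_diagonal(S, psd)              # diagonal terms are the PSDs themselves
    return S

def is_positive_semidefinite(S, tol=1e-12):
    return np.linalg.eigvalsh((S + S.conj().T) / 2).min() >= -tol

psd = np.array([1.0, 0.5, 0.8])           # e.g. g^2/Hz, taken as fixed specifications
coh = np.full((3, 3), 0.5); np.fill_diagonal(coh, 1.0)
phase = np.array([[ 0.0,  0.3, -0.2],
                  [-0.3,  0.0,  0.1],
                  [ 0.2, -0.1,  0.0]])

S_ref = reference_sdm(psd, coh, phase)
print("PSDs preserved:", np.allclose(np.diag(S_ref).real, psd))
print("positive semi-definite:", is_positive_semidefinite(S_ref))
# With higher coherence and the same (mutually inconsistent) phases, the matrix stops being
# positive semi-definite -- exactly the algebraic constraint the paper has to respect.
```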

  20. Randomized comparison of operator radiation exposure comparing transradial and transfemoral approach for percutaneous coronary procedures: rationale and design of the minimizing adverse haemorrhagic events by TRansradial access site and systemic implementation of angioX - RAdiation Dose study (RAD-MATRIX).

    PubMed

    Sciahbasi, Alessandro; Calabrò, Paolo; Sarandrea, Alessandro; Rigattieri, Stefano; Tomassini, Francesco; Sardella, Gennaro; Zavalloni, Dennis; Cortese, Bernardo; Limbruno, Ugo; Tebaldi, Matteo; Gagnor, Andrea; Rubartelli, Paolo; Zingarelli, Antonio; Valgimigli, Marco

    2014-06-01

    Radiation absorbed by interventional cardiologists is an important and frequently under-evaluated issue. The aim is to compare the radiation dose absorbed by interventional cardiologists during percutaneous coronary procedures for acute coronary syndromes performed by transradial and transfemoral access. The randomized multicentre MATRIX (Minimizing Adverse Haemorrhagic Events by TRansradial Access Site and Systemic Implementation of angioX) trial has been designed to compare the clinical outcome of patients with acute coronary syndromes treated invasively according to the access site (transfemoral vs. transradial) and to the anticoagulant therapy (bivalirudin vs. heparin). Selected experienced interventional cardiologists involved in this study have been equipped with dedicated thermoluminescent dosimeters to evaluate the radiation dose absorbed during transfemoral, right transradial, or left transradial access. For each access we evaluate the radiation dose absorbed at wrist, thorax and eye level. Consequently, each operator is equipped with three sets (transfemoral, right transradial or left transradial access) of three different dosimeters (wrist, thorax and eye). The primary end-point of the study is the procedural radiation dose absorbed by operators at the thorax. An important secondary end-point is the procedural radiation dose absorbed by operators comparing the right and left radial approaches. Patient randomization is performed according to the MATRIX protocol for the femoral or radial approach. A further randomization for the radial approach is performed to compare right and left transradial access. The RAD-MATRIX study should help clarify the radiation issue for interventional cardiologists comparing transradial and transfemoral access in the setting of acute coronary syndromes. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Bayesian statistics and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived in which the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This avoids computing a considerable number of derivatives, and errors of linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known, and the Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
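
    A minimal sketch of Monte Carlo error propagation in this spirit: samples are drawn from the measurement distribution, pushed through a nonlinear transformation, and the expectation and covariance matrix of the result are estimated without computing any derivatives. The transformation and the numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Measurements: mean vector and covariance matrix (illustrative values).
mu = np.array([10.0, 0.5])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.02]])

def f(x):
    """Nonlinear transformation of the measurement vector (e.g. polar -> Cartesian)."""
    r, phi = x[..., 0], x[..., 1]
    return np.stack([r * np.cos(phi), r * np.sin(phi)], axis=-1)

# Monte Carlo estimate: no Jacobians needed, and no linearization error.
samples = rng.multivariate_normal(mu, Sigma, size=100_000)
y = f(samples)
print("E[f(x)] ~", y.mean(axis=0))
print("Cov[f(x)] ~\n", np.cov(y, rowvar=False))
```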

  2. Characterizations of matrix and operator-valued Φ-entropies, and operator Efron–Stein inequalities

    PubMed Central

    Cheng, Hao-Chung; Hsieh, Min-Hsiu

    2016-01-01

    We derive new characterizations of the matrix Φ-entropy functionals introduced in Chen & Tropp (Chen, Tropp 2014 Electron. J. Prob. 19, 1–30. (doi:10.1214/ejp.v19-2964)). These characterizations help us to better understand the properties of matrix Φ-entropies, and are a powerful tool for establishing matrix concentration inequalities for random matrices. Then, we propose an operator-valued generalization of matrix Φ-entropy functionals, and prove the subadditivity under Löwner partial ordering. Our results demonstrate that the subadditivity of operator-valued Φ-entropies is equivalent to the convexity. As an application, we derive the operator Efron–Stein inequality. PMID:27118909

  3. QCD Dirac operator at nonzero chemical potential: lattice data and matrix model.

    PubMed

    Akemann, Gernot; Wettig, Tilo

    2004-03-12

    Recently, a non-Hermitian chiral random matrix model was proposed to describe the eigenvalues of the QCD Dirac operator at nonzero chemical potential. This matrix model can be constructed from QCD by mapping it to an equivalent matrix model which has the same symmetries as QCD with chemical potential. Its microscopic spectral correlations are conjectured to be identical to those of the QCD Dirac operator. We investigate this conjecture by comparing large ensembles of Dirac eigenvalues in quenched SU(3) lattice QCD at a nonzero chemical potential to the analytical predictions of the matrix model. Excellent agreement is found in the two regimes of weak and strong non-Hermiticity, for several different lattice volumes.

  4. Random acoustic metamaterial with a subwavelength dipolar resonance.

    PubMed

    Duranteau, Mickaël; Valier-Brasier, Tony; Conoir, Jean-Marc; Wunenburger, Régis

    2016-06-01

    The effective velocity and attenuation of longitudinal waves through random dispersions of rigid, tungsten-carbide beads in an elastic matrix made of epoxy resin in the range of beads volume fraction 2%-10% are determined experimentally. The multiple scattering model proposed by Luppé, Conoir, and Norris [J. Acoust. Soc. Am. 131(2), 1113-1120 (2012)], which fully takes into account the elastic nature of the matrix and the associated mode conversions, accurately describes the measurements. Theoretical calculations show that the rigid particles display a local, dipolar resonance which shares several features with Minnaert resonance of bubbly liquids and with the dipolar resonance of core-shell particles. Moreover, for the samples under study, the main cause of smoothing of the dipolar resonance of the scatterers and the associated variations of the effective mass density of the dispersions is elastic relaxation, i.e., the finite time required for the shear stresses associated to the translational motion of the scatterers to propagate through the matrix. It is shown that its influence is governed solely by the value of the particle to matrix mass density contrast.

  5. 0 ν β β -decay nuclear matrix element for light and heavy neutrino mass mechanisms from deformed quasiparticle random-phase approximation calculations for 76Ge, 82Se, 130Te, 136Xe, and 150Nd with isospin restoration

    NASA Astrophysics Data System (ADS)

    Fang, Dong-Liang; Faessler, Amand; Šimkovic, Fedor

    2018-04-01

    In this paper, with restored isospin symmetry, we evaluated the neutrinoless double-β -decay nuclear matrix elements for 76Ge, 82Se, 130Te, 136Xe, and 150Nd for both the light and heavy neutrino mass mechanisms using the deformed quasiparticle random-phase approximation approach with realistic forces. We give detailed decompositions of the nuclear matrix elements over different intermediate states and nucleon pairs, and discuss how these decompositions are affected by the model space truncations. Compared to the spherical calculations, our results show reductions from 30 % to about 60 % of the nuclear matrix elements for the calculated isotopes mainly due to the presence of the BCS overlap factor between the initial and final ground states. The comparison between different nucleon-nucleon (NN) forces with corresponding short-range correlations shows that the choice of the NN force gives roughly 20 % deviations for the light exchange neutrino mechanism and much larger deviations for the heavy neutrino exchange mechanism.

  6. Acellular dermal matrix allograft versus free gingival graft: a histological evaluation and split-mouth randomized clinical trial.

    PubMed

    de Resende, Daniel Romeu Benchimol; Greghi, Sebastião Luiz Aguiar; Siqueira, Aline Franco; Benfatti, César Augusto Magalhães; Damante, Carla Andreotti; Ragghianti Zangrando, Mariana Schutzer

    2018-04-30

    This split-mouth controlled randomized clinical trial evaluated clinical and histological results of acellular dermal matrix allograft (ADM) compared to autogenous free gingival graft (FGG) for keratinized tissue augmentation. Twenty-five patients with the absence or deficiency of keratinized tissue (50 sites) were treated with FGG (control group) and ADM (test group). Clinical parameters included keratinized tissue width (KTW) (primary outcome), soft tissue thickness (TT), recession depth (RD), probing depth (PD), and clinical attachment level (CAL). Esthetic perception was evaluated by patients and by a calibrated periodontist using visual analog scale (VAS). Histological analysis included biopsies of five different patients from both test and control sites for each evaluation period (n = 25). The analysis included percentage of connective tissue components, epithelial luminal to basal surface ratio, tissue maturation, and presence of elastic fibers. Data were evaluated by ANOVA complemented by Tukey's tests (p < 0.05). After 6 months, PD and CAL demonstrated no differences between groups. ADM presented higher RD compared to FGG in all periods. Mean tissue shrinkage for control and test groups was 12.41 versus 55.7%. TT was inferior for ADM group compared to FGG. Esthetics perception by professional evaluation showed superior results for ADM. Histomorphometric analysis demonstrated higher percentage of cellularity, blood vessels, and epithelial luminal to basal surface ratio for FGG group. ADM group presented higher percentage of collagen fibers and inflammatory infiltrate. Both treatments resulted in improvement of clinical parameters, except for RD. ADM group presented more tissue shrinkage and delayed healing, confirmed histologically, but superior professional esthetic perception. This study added important clinical and histological data to contribute in the decision-making process between indication of FGG or ADM.

  7. Balancing strength and toughness of calcium-silicate-hydrate via random nanovoids and particle inclusions: Atomistic modeling and statistical analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Shahsavari, Rouzbeh

    2016-11-01

    As the most widely used manufactured material on Earth, concrete poses serious societal and environmental concerns, which call for innovative strategies to develop greener concrete with improved strength and toughness, properties that are typically mutually exclusive in man-made materials. Herein, we focus on calcium silicate hydrate (C-S-H), the major binding phase of all Portland cement concretes, and study how engineering its nanovoids and portlandite particle inclusions can impart a balance of strength, toughness and stiffness. By performing more than 600 molecular dynamics simulations coupled with statistical analysis tools, our results provide new evidence of ductile fracture mechanisms in C-S-H - reminiscent of crystalline alloys and ductile metals - decoding the interplay between the crack growth, nanovoid/particle inclusions, and stoichiometry, which dictates the crystalline versus amorphous nature of the underlying matrix. We found that the introduction of voids and portlandite particles can significantly increase toughness and ductility, especially in C-S-H with more amorphous matrices, mainly owing to competing mechanisms of crack deflection, void coalescence, internal necking, accommodation, and geometry alteration of individual voids/particles, which together regulate toughness versus strength. Furthermore, utilizing a comprehensive global sensitivity analysis on random configuration-property relations, we show that the mean diameter of voids/particles is the most critical statistical parameter influencing the mechanical properties of C-S-H, irrespective of stoichiometry or the crystalline or amorphous nature of the matrix. This study provides new fundamental insights, design guidelines, and de novo strategies to turn the brittle C-S-H into a ductile material, impacting modern engineering of strong and tough concrete infrastructures and potentially other complex brittle materials.

  8. Adjoints and Low-rank Covariance Representation

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.

    2000-01-01

    Quantitative measures of the uncertainty of Earth System estimates can be as important as the estimates themselves. Second moments of estimation errors are described by the covariance matrix, whose direct calculation is impractical when the number of degrees of freedom of the system state is large. Ensemble and reduced-state approaches to prediction and data assimilation replace full estimation error covariance matrices by low-rank approximations. The appropriateness of such approximations depends on the spectrum of the full error covariance matrix, whose calculation is also often impractical. Here we examine the situation where the error covariance is a linear transformation of a forcing error covariance. We use operator norms and adjoints to relate the appropriateness of low-rank representations to the conditioning of this transformation. The analysis is used to investigate low-rank representations of the steady-state response to random forcing of an idealized discrete-time dynamical system.
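
    A minimal sketch of this setting, with invented dimensions: the error covariance is formed as a linear transformation of a forcing error covariance, P = L Q L^T, and a rank-k representation is built from the leading eigenpairs. The decaying column scaling makes the transformation ill-conditioned, which is the regime in which low-rank representations capture most of the variance.

```python
import numpy as np

rng = np.random.default_rng(2)

n, m, k = 200, 50, 5                  # state dimension, forcing dimension, rank of approximation
# Response operator with decaying column weights, i.e. an ill-conditioned transformation.
L = rng.standard_normal((n, m)) * (0.7 ** np.arange(m))
Q = np.eye(m)                         # forcing error covariance (white forcing for simplicity)

P = L @ Q @ L.T                       # full error covariance; impractical to form when n is huge

w, V = np.linalg.eigh(P)              # spectrum of the full covariance
idx = np.argsort(w)[::-1][:k]
P_k = (V[:, idx] * w[idx]) @ V[:, idx].T   # rank-k representation from the leading eigenpairs

rel_err = np.linalg.norm(P - P_k) / np.linalg.norm(P)
captured = w[idx].sum() / w.sum()
print(f"rank-{k}: relative Frobenius error {rel_err:.3f}, variance captured {captured:.3f}")
```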

  9. Single and double beta decays in the A=100, A=116 and A=128 triplets of isobars

    NASA Astrophysics Data System (ADS)

    Suhonen, J.; Civitarese, O.

    2014-04-01

    In this paper we analyze the ground-state-to-ground-state two-neutrino double beta (2νββ) decays and single EC and β- decays for the A=100 (100Mo-100Tc-100Ru), A=116 (116Cd-116In-116Sn) and A=128 (128Te-128I-128Xe) triplets of isobars. We use the proton-neutron quasiparticle random-phase approximation (pnQRPA) with realistic G-matrix-derived effective interactions in very large single-particle bases. The purpose is to assess the effective value of the axial-vector coupling constant gA in the pnQRPA calculations. We show that the three triplets of isobars represent systems with different characteristics of orbital occupancies and cumulative 2νββ nuclear matrix elements. Our analysis points to a considerably quenched average effective value of gA ≈ 0.6 ± 0.2 in the pnQRPA calculations.

  10. Matching 4.7-Å XRD spacing in amelogenin nanoribbons and enamel matrix.

    PubMed

    Sanii, B; Martinez-Avila, O; Simpliciano, C; Zuckermann, R N; Habelitz, S

    2014-09-01

    The recent discovery of conditions that induce nanoribbon structures of amelogenin protein in vitro raises questions about their role in enamel formation. Nanoribbons of recombinant human full-length amelogenin (rH174) are about 17 nm wide and self-align into parallel bundles; thus, they could act as templates for crystallization of nanofibrous apatite comprising dental enamel. Here we analyzed the secondary structures of nanoribbon amelogenin by x-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR) and tested if the structural motif matches previous data on the organic matrix of enamel. XRD analysis showed that a peak corresponding to 4.7 Å is present in nanoribbons of amelogenin. In addition, FTIR analysis showed that amelogenin in the form of nanoribbons was comprised of β-sheets by up to 75%, while amelogenin nanospheres had predominantly random-coil structure. The observation of a 4.7-Å XRD spacing confirms the presence of β-sheets and illustrates structural parallels between the in vitro assemblies and structural motifs in developing enamel. © International & American Associations for Dental Research.

  11. Probabilistic Analysis of a SiC/SiC Ceramic Matrix Composite Turbine Vane

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Nemeth, Noel N.; Brewer, David N.; Mital, Subodh

    2004-01-01

    To demonstrate the advanced composite materials technology under development within the Ultra-Efficient Engine Technology (UEET) Program, it was planned to fabricate, test, and analyze a turbine vane made entirely of silicon carbide-fiber-reinforced silicon carbide matrix composite (SiC/SiC CMC) material. The objective was to utilize a five-harness satin weave melt-infiltrated (MI) SiC/SiC composite material developed under this program to design and fabricate a stator vane that can endure 1000 hours of engine service conditions. The vane was designed such that the expected maximum stresses were kept within the proportional limit strength of the material. Any violation of this design requirement was considered as the failure. This report presents results of a probabilistic analysis and reliability assessment of the vane. Probability of failure to meet the design requirements was computed. In the analysis, material properties, strength, and pressure loading were considered as random variables. The pressure loads were considered normally distributed with a nominal variation. A temperature profile on the vane was obtained by performing a computational fluid dynamics (CFD) analysis and was assumed to be deterministic. The results suggest that for the current vane design, the chance of not meeting design requirements is about 1.6 percent.

  12. Equilibrium structure of δ-Bi2O3 from first principles.

    PubMed

    Music, Denis; Konstantinidis, Stephanos; Schneider, Jochen M

    2009-04-29

    Using ab initio calculations, we have systematically studied the structure of δ-Bi2O3 (fluorite prototype, 25% oxygen vacancies), probing [Formula: see text] and combined [Formula: see text] and [Formula: see text] oxygen vacancy ordering, random distributions of oxygen vacancies with two different statistical descriptions, as well as local relaxations. We observe that the combined [Formula: see text] and [Formula: see text] oxygen vacancy ordering is the most stable configuration. Radial distribution functions for these configurations can be classified as discrete (ordered configurations) and continuous (random configurations). This classification can be understood on the basis of local structural relaxations. Up to 28.6% local relaxation of the oxygen sublattice is present in the random configurations, giving rise to continuous distribution functions. The phase stability obtained may be explained with the bonding analysis. Electron lone-pair charges in the predominantly ionic Bi-O matrix may stabilize the combined [Formula: see text] and [Formula: see text] oxygen vacancy ordering.

  13. The Kubo-Greenwood formula as a result of the random phase approximation for the electrons of the metal

    NASA Astrophysics Data System (ADS)

    Ivliev, S. V.

    2017-12-01

    For the calculation of short laser pulse absorption in a metal, the imaginary part of the permittivity, which is simply related to the conductivity, is required. Currently, the Kubo-Greenwood formula is most commonly used to find the static and dynamic conductivity; it describes electromagnetic energy absorption in the one-electron approach. In the present study, this formula is derived directly from the expression for the permittivity in the random phase approximation, which is in fact equivalent to the mean-field method. A detailed analysis of the role of electron-electron interaction in the calculation of the matrix elements of the velocity operator is given. It is shown that in the one-electron random phase approximation the single-particle conduction-electron wave functions in the field of fixed ions should be used. The possibility of taking exchange and correlation effects into account by means of a local-field correction is discussed.

  14. Depth profiling of high energy nitrogen ions implanted in the <1 0 0>, <1 1 0> and randomly oriented silicon crystals

    NASA Astrophysics Data System (ADS)

    Erić, M.; Petrović, S.; Kokkoris, M.; Lagoyannis, A.; Paneta, V.; Harissopulos, S.; Telečki, I.

    2012-03-01

    This work reports on the experimentally obtained depth profiles of 4 MeV 14N2+ ions implanted in the <1 0 0>, <1 1 0> and randomly oriented silicon crystals. The ion fluence was 1017 particles/cm2. The nitrogen depth profiling has been performed using the Nuclear Reaction Analysis (NRA) method, via the study of 14N(d,α0)12C and 14N(d,α1)12C nuclear reactions, and with the implementation of SRIM 2010 and SIMNRA computer simulation codes. For the randomly oriented silicon crystal, change of the density of silicon matrix and the nitrogen "bubble" formation have been proposed as the explanation for the difference between the experimental and simulated nitrogen depth profiles. During the implantation, the RBS/C spectra were measured on the nitrogen implanted and on the virgin crystal spots. These spectra provide information on the amorphization of the silicon crystals induced by the ion implantation.

  15. The effect of platelet-rich fibrin matrix on rotator cuff tendon healing: a prospective, randomized clinical study.

    PubMed

    Rodeo, Scott A; Delos, Demetris; Williams, Riley J; Adler, Ronald S; Pearle, Andrew; Warren, Russell F

    2012-06-01

    There is a strong need for methods to improve the biological potential of rotator cuff tendon healing. Platelet-rich fibrin matrix (PRFM) allows delivery of autologous cytokines to healing tissue, and limited evidence suggests a positive effect of platelet-rich plasma on tendon biology. To evaluate the effect of platelet-rich fibrin matrix on rotator cuff tendon healing. Randomized controlled trial; Level of evidence, 2. Seventy-nine patients undergoing arthroscopic rotator cuff tendon repair were randomized intraoperatively to either receive PRFM at the tendon-bone interface (n = 40) or standard repair with no PRFM (n = 39). Standardized repair techniques were used for all patients. The postoperative rehabilitation protocol was the same in both groups. The primary outcome was tendon healing evaluated by ultrasound (intact vs defect at repair site) at 6 and 12 weeks. Power Doppler ultrasound was also used to evaluate vascularity in the peribursal, peritendinous, and musculotendinous and insertion site areas of the tendon and bone anchor site. Secondary outcomes included standardized shoulder outcome scales (American Shoulder and Elbow Surgeons [ASES] and L'Insalata) and strength measurements using a handheld dynamometer. Patients and the evaluator were blinded to treatment group. All patients were evaluated at minimum 1-year follow-up. A logistic regression model was used to predict outcome (healed vs defect) based on tear severity, repair type, treatment type (PRFM or control), and platelet count. Overall, there were no differences in tendon-to-bone healing between the PRFM and control groups. Complete tendon-to-bone healing (intact repair) was found in 24 of 36 (67%) in the PRFM group and 25 of 31 (81%) in the control group (P = .20). There were no significant differences in healing by ultrasound between 6 and 12 weeks. There were gradual increases in ASES and L'Insalata scores over time in both groups, but there were no differences in scores between the groups. We also found no difference in vascularity in the peribursal, peritendinous, and musculotendinous areas of the tendon between groups. There were no differences in strength between groups. Platelet count had no effect on healing. Logistic regression analysis demonstrated that PRFM was a significant predictor (P = .037) for a tendon defect at 12 weeks, with an odds ratio of 5.8. Platelet-rich fibrin matrix applied to the tendon-bone interface at the time of rotator cuff repair had no demonstrable effect on tendon healing, tendon vascularity, manual muscle strength, or clinical rating scales. In fact, the regression analysis suggests that PRFM may have a negative effect on healing. Further study is required to evaluate the role of PRFM in rotator cuff repair.

  16. Spectral analysis of finite-time correlation matrices near equilibrium phase transitions

    NASA Astrophysics Data System (ADS)

    Vinayak; Prosen, T.; Buča, B.; Seligman, T. H.

    2014-10-01

    We study spectral densities for systems on lattices which, at a phase transition, display power-law spatial correlations. Constructing the spatial correlation matrix, we prove that its eigenvalue density shows a power law that can be derived from the spatial correlations. In practice, time series are short in the sense that they are either not stationary over long time intervals or not available over long time intervals. Also, we usually do not have time series available for all variables. We perform numerical simulations on a two-dimensional Ising model with the usual Metropolis algorithm as the time evolution. Using all spins on a grid with periodic boundary conditions, we find a power law that is, for large grids, compatible with the analytic result. We still find a power law even if we choose a fairly small subset of grid points at random, though the exponents of the power laws are smaller under such circumstances. For very short time series leading to singular correlation matrices, we use a recently developed technique to lift the degeneracy at zero in the spectrum and find a significant signature of critical behavior even in this case, as compared to high-temperature results, which tend to those of random matrix models.
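
    A scaffolding sketch of the correlation-matrix construction (uncorrelated surrogate data stand in for Metropolis-sampled Ising spins, so no critical power law is expected here): short time series at many lattice sites yield a singular equal-time correlation matrix whose nonzero spectrum can then be inspected.

```python
import numpy as np

rng = np.random.default_rng(3)

T, N = 64, 256                             # time-series length shorter than number of lattice sites
# Surrogate data: uncorrelated noise stands in for Metropolis-sampled Ising spins, so the
# spectrum below is Marchenko-Pastur-like rather than the critical power law of the paper.
X = rng.standard_normal((T, N))

Xc = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each site's series
C = Xc.T @ Xc / T                          # equal-time spatial correlation matrix estimate

eig = np.linalg.eigvalsh(C)
nonzero = eig[eig > 1e-10]
print("matrix size:", C.shape, " rank:", nonzero.size)   # rank <= T: singular for short series
# Plotting a histogram of `nonzero` on log-log axes would expose a power-law eigenvalue
# density if the underlying data carried critical (power-law) spatial correlations.
```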

  17. Investigation on active vibration isolation of a Stewart platform with piezoelectric actuators

    NASA Astrophysics Data System (ADS)

    Wang, Chaoxin; Xie, Xiling; Chen, Yanhao; Zhang, Zhiyi

    2016-11-01

    A Stewart platform with piezoelectric actuators is presented for micro-vibration isolation. The Jacobi matrix of the Stewart platform, which reveals the relationship between the position/pointing of the payload and the extensions of the six struts, is derived by kinematic analysis. The dynamic model of the Stewart platform is established by the FRF (frequency response function) synthesis method. In the active control loop, the direct feedback of integrated forces is combined with the FxLMS based adaptive feedback to dampen vibration of inherent modes and suppress transmission of periodic vibrations. Numerical simulations were conducted to prove vibration isolation performance of the Stewart platform under random and periodical disturbances, respectively. In the experiment, the output consistencies of the six piezoelectric actuators were measured at first and the theoretical Jacobi matrix as well as the feedback gain of each piezoelectric actuator was subsequently modified according to the measured consistencies. The direct feedback loop was adjusted to achieve sufficient active damping and the FxLMS based adaptive feedback control was adopted to suppress vibration transmission in the six struts. Experimental results have demonstrated that the Stewart platform can achieve 30 dB attenuation of periodical disturbances and 10-20 dB attenuation of random disturbances in the frequency range of 5-200 Hz.

  18. Tool for Generation of MAC/GMC Representative Unit Cell for CMC/PMC Analysis

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Pineda, Evan J.

    2016-01-01

    This document describes a recently developed analysis tool that enhances the resident capabilities of the Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) 4.0. This tool is especially useful in analyzing ceramic matrix composites (CMCs), where higher fidelity with improved accuracy of local response is needed. The tool, however, can be used for analyzing polymer matrix composites (PMCs) as well. MAC/GMC 4.0 is a composite material and laminate analysis software developed at NASA Glenn Research Center. The software package has been built around the concept of the generalized method of cells (GMC). The computer code is developed with a user friendly framework, along with a library of local inelastic, damage, and failure models. Further, application of simulated thermomechanical loading, generation of output results, and selection of architectures to represent the composite material have been automated to increase the user friendliness, as well as to make it more robust in terms of input preparation and code execution. Finally, classical lamination theory has been implemented within the software, wherein GMC is used to model the composite material response of each ply. Thus, the full range of GMC composite material capabilities is available for analysis of arbitrary laminate configurations as well. The primary focus of the current effort is to provide a graphical user interface (GUI) capability that generates a number of different user-defined repeating unit cells (RUCs). In addition, the code has provisions for generation of a MAC/GMC-compatible input text file that can be merged with any MAC/GMC input file tailored to analyze composite materials. Although the primary intention was to address the three different constituents and phases that are usually present in CMCs-namely, fibers, matrix, and interphase-it can be easily modified to address two-phase polymer matrix composite (PMC) materials where an interphase is absent. Currently, the tool capability includes generation of RUCs for square packing, hexagonal packing, and random fiber packing as well as RUCs based on actual composite micrographs. All these options have the fibers modeled as having a circular cross-sectional area. In addition, a simplified version of RUC is provided where the fibers are treated as having a square cross section and are distributed randomly. This RUC facilitates a speedy analysis using the higher fidelity version of GMC known as HFGMC. The first four mentioned options above support uniform subcell discretization. The last one has variable subcell sizes due to the primary intention of keeping the RUC size to a minimum to gain the speed ups using the higher fidelity version of MAC. The code is implemented within the MATLAB (The Mathworks, Inc., Natick, MA) developmental framework; however, a standalone application that does not need a priori MATLAB installation is also created with the aid of the MATLAB compiler.

  19. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states.

    PubMed

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.

  20. Coherent Backscattering by Polydisperse Discrete Random Media: Exact T-Matrix Results

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2011-01-01

    The numerically exact superposition T-matrix method is used to compute, for the first time to our knowledge, electromagnetic scattering by finite spherical volumes composed of polydisperse mixtures of spherical particles with different size parameters or different refractive indices. The backscattering patterns calculated in the far-field zone of the polydisperse multiparticle volumes reveal unequivocally the classical manifestations of the effect of weak localization of electromagnetic waves in discrete random media, thereby corroborating the universal interference nature of coherent backscattering. The polarization opposition effect is shown to be the least robust manifestation of weak localization fading away with increasing particle size parameter.

  1. Probabilistic low-rank factorization accelerates tensor network simulations of critical quantum many-body ground states

    NASA Astrophysics Data System (ADS)

    Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B.; Tamascelli, Dario; Montangero, Simone

    2018-01-01

    We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
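
    A minimal sketch of the kind of randomized low-rank factorization referred to here (a Halko-Martinsson-Tropp-style randomized range finder followed by a small exact SVD); it is not the authors' TEBD/DMRG implementation, and all sizes are illustrative.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_iter=2, rng=None):
    """Approximate truncated SVD via a randomized range finder."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Sample the range of A with a Gaussian test matrix, then orthonormalize.
    Q = np.linalg.qr(A @ rng.standard_normal((n, rank + n_oversample)))[0]
    for _ in range(n_iter):                      # power iterations sharpen the subspace
        Q = np.linalg.qr(A @ np.linalg.qr(A.T @ Q)[0])[0]
    # Project onto the small subspace and do an exact SVD there.
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank, :]

# Usage: truncating a numerically low-rank matrix, as in a bond truncation.
rng = np.random.default_rng(4)
A = rng.standard_normal((1000, 60)) @ rng.standard_normal((60, 1000))
U, s, Vt = randomized_svd(A, rank=40)
print("relative truncation error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```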

  2. Grafton and local bone have comparable outcomes to iliac crest bone in instrumented single-level lumbar fusions.

    PubMed

    Kang, James; An, Howard; Hilibrand, Alan; Yoon, S Tim; Kavanagh, Eoin; Boden, Scott

    2012-05-20

    Prospective multicenter randomized clinical trial. The goal of our 2-year prospective study was to perform a randomized clinical trial comparing the outcomes of Grafton demineralized bone matrix (DBM) Matrix with local bone against those of iliac crest bone graft (ICBG) in single-level instrumented posterior lumbar fusion. There has been extensive research and development aimed at identifying a suitable substitute for autologous ICBG, which is associated with known morbidities. DBMs are a class of commercially available grafting agents prepared from allograft bone. Many such products have been commercially available for clinical use; however, their efficacy for spine fusion has been based mostly on anecdotal evidence rather than randomized controlled clinical trials. Forty-six patients were randomly assigned (2:1) to receive Grafton DBM Matrix with local bone (30 patients) or autologous ICBG (16 patients). The mean age was 64 (females [F] = 21, males [M] = 9) in the DBM group and 65 (F = 9, M = 5) in the ICBG group. An independent radiologist evaluated plain radiographs and computed tomographic scans at 6-month, 1-year, and 2-year time points. Clinical outcomes were measured using the Oswestry Disability Index (ODI) and the Medical Outcomes Study 36-Item Short Form Health Survey. Forty-one patients (DBM = 28 and ICBG = 13) completed the 2-year follow-up. Final fusion rates were 86% (Grafton Matrix) versus 92% (ICBG) (P = 1.0, not significant). The Grafton group showed slightly better improvement in ODI score than the ICBG group at the final 2-year follow-up (Grafton [16.2] and ICBG [22.7]); however, the difference was not statistically significant (P = 0.2346 at 24 mo). Grafton showed consistently higher physical function scores at 24 months; however, the differences were not statistically significant (P = 0.0823). Similar improvements in the physical component summary scores were seen in both the Grafton and ICBG groups. There was a significantly greater mean intraoperative blood loss in the ICBG group than in the Grafton group (P < 0.0031). At 2-year follow-up, subjects who were randomized to Grafton Matrix and local bone achieved an 86% overall fusion rate and improvements in clinical outcomes that were comparable with those in the ICBG group.

  3. THz spectral data analysis and components unmixing based on non-negative matrix factorization methods

    NASA Astrophysics Data System (ADS)

    Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin

    2017-04-01

    In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping and there is a necessity to investigate various approaches for model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smooth constraint NMF (CNMF) algorithms for feature components extraction and identification in the fields of terahertz time domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, its cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated by several THz-TDS data decomposition experiments including a binary system and a ternary system simulating some applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust for random initialization in contrast to NMF. The investigated method is promising for THz data resolution contributing to unknown mixture identification.

  4. THz spectral data analysis and components unmixing based on non-negative matrix factorization methods.

    PubMed

    Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin

    2017-04-15

    In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping and there is a necessity to investigate various approaches for model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smooth constraint NMF (CNMF) algorithms for feature components extraction and identification in the fields of terahertz time domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, its cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated by several THz-TDS data decomposition experiments including a binary system and a ternary system simulating some applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust for random initialization in contrast to NMF. The investigated method is promising for THz data resolution contributing to unknown mixture identification. Copyright © 2017 Elsevier B.V. All rights reserved.
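
    A minimal sketch of plain multiplicative-update NMF applied to simulated nonnegative spectra; the smoothness-constrained CNMF variant discussed in the paper adds a regularization term that is not reproduced here, and the synthetic spectra are assumptions.

```python
import numpy as np

def nmf(V, k, n_iter=500, eps=1e-9, rng=None):
    """Plain NMF via Lee-Seung multiplicative updates: V (m x n) ~ W (m x k) @ H (k x n)."""
    rng = np.random.default_rng(rng)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Simulated two-component THz-like absorbance spectra mixed with nonnegative weights.
rng = np.random.default_rng(5)
x = np.linspace(0, 3, 300)                                  # frequency axis (THz), illustrative
components = np.vstack([np.exp(-(x - 1.0) ** 2 / 0.02),     # feature component 1
                        np.exp(-(x - 2.1) ** 2 / 0.05)])    # feature component 2
weights = rng.random((40, 2))
V = weights @ components + 0.01 * rng.random((40, 300))     # mixtures plus small nonnegative noise

W, H = nmf(V, k=2, rng=0)
print("reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```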

  5. Studies on Relaxation Behavior of Corona Poled Aromatic Dipolar Molecules in a Polymer Matrix

    DTIC Science & Technology

    1990-08-03

    concentration up to 30 weight percent. Orientation: as expected, the optically responsive molecules are randomly oriented in the polymer matrix, although a small amount... The retention of SH (second-harmonic) intensity of a small molecule such as MNA was found to be very poor in the PMMA matrix, while the larger rodlike...

  6. Comprehensive T-matrix Reference Database: A 2009-2011 Update

    NASA Technical Reports Server (NTRS)

    Zakharova, Nadezhda T.; Videen, G.; Khlebtsov, Nikolai G.

    2012-01-01

    The T-matrix method is one of the most versatile and efficient theoretical techniques widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of peer-reviewed T-matrix publications compiled by us previously and includes the publications that appeared since 2009. It also lists several earlier publications not included in the original database.

  7. Random pure states: Quantifying bipartite entanglement beyond the linear statistics.

    PubMed

    Vivo, Pierpaolo; Pato, Mauricio P; Oshanin, Gleb

    2016-05-01

    We analyze the properties of entangled random pure states of a quantum system partitioned into two smaller subsystems of dimensions N and M. Framing the problem in terms of random matrices with a fixed-trace constraint, we establish, for arbitrary N≤M, a general relation between the n-point densities and the cross moments of the eigenvalues of the reduced density matrix, i.e., the so-called Schmidt eigenvalues, and the analogous functionals of the eigenvalues of the Wishart-Laguerre ensemble of the random matrix theory. This allows us to derive explicit expressions for two-level densities, and also an exact expression for the variance of von Neumann entropy at finite N,M. Then, we focus on the moments E{K^{a}} of the Schmidt number K, the reciprocal of the purity. This is a random variable supported on [1,N], which quantifies the number of degrees of freedom effectively contributing to the entanglement. We derive a wealth of analytical results for E{K^{a}} for N=2 and 3 and arbitrary M, and also for square N=M systems by spotting for the latter a connection with the probability P(x_{min}^{GUE}≥sqrt[2N]ξ) that the smallest eigenvalue x_{min}^{GUE} of an N×N matrix belonging to the Gaussian unitary ensemble is larger than sqrt[2N]ξ. As a by-product, we present an exact asymptotic expansion for P(x_{min}^{GUE}≥sqrt[2N]ξ) for finite N as ξ→∞. Our results are corroborated by numerical simulations whenever possible, with excellent agreement.
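
    The sketch below samples random bipartite pure states, forms the reduced density matrix, and computes the Schmidt eigenvalues, the Schmidt number K (reciprocal purity) and the von Neumann entropy; it is a numerical illustration of the quantities studied, not of the paper's analytical derivations.

```python
import numpy as np

rng = np.random.default_rng(6)
N, M = 3, 8                                   # subsystem dimensions, N <= M

def schmidt_eigenvalues(N, M, rng):
    """Schmidt eigenvalues of a random pure state on an N x M bipartition (fixed-trace Wishart)."""
    psi = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
    psi /= np.linalg.norm(psi)                # normalization imposes the fixed-trace constraint
    rho_A = psi @ psi.conj().T                # reduced density matrix of the smaller subsystem
    return np.linalg.eigvalsh(rho_A)          # eigenvalues sum to 1

lam = np.array([schmidt_eigenvalues(N, M, rng) for _ in range(20_000)])
purity = np.sum(lam ** 2, axis=1)
K = 1.0 / purity                              # Schmidt number: effective number of terms, in [1, N]
S_vn = -np.sum(lam * np.log(np.clip(lam, 1e-300, None)), axis=1)   # von Neumann entropy
print(f"E[K] ~ {K.mean():.3f}   Var[S_vN] ~ {S_vn.var():.5f}")
```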

  8. A Note on Parameters of Random Substitutions by γ-Diagonal Matrices

    NASA Astrophysics Data System (ADS)

    Kang, Ju-Sung

    Random substitutions are a very useful and practical method for privacy-preserving schemes. In this paper we obtain the exact relationship between the estimation errors and the three parameters used in the random substitutions, namely the privacy assurance metric γ, the total number n of data records, and the size N of the transition matrix. We also present simulations illustrating the theoretical result.

  9. Noise in two-color electronic distance meter measurements revisited

    USGS Publications Warehouse

    Langbein, J.

    2004-01-01

    Frequent, high-precision geodetic data have temporally correlated errors. Temporal correlations directly affect both the estimate of rate and its standard error; the rate of deformation is a key product from geodetic measurements made in tectonically active areas. Various models of temporally correlated errors are developed and these provide relations between the power spectral density and the data covariance matrix. These relations are applied to two-color electronic distance meter (EDM) measurements made frequently in California over the past 15-20 years. Previous analysis indicated that these data have significant random walk error. Analysis using the noise models developed here indicates that the random walk model is valid for about 30% of the data. A second 30% of the data can be better modeled with power law noise with a spectral index between 1 and 2, while another 30% of the data can be modeled with a combination of band-pass-filtered plus random walk noise. The remaining 10% of the data can be best modeled as a combination of band-pass-filtered plus power law noise. This band-pass-filtered noise is a product of an annual cycle that leaks into adjacent frequency bands. For time spans of more than 1 year these more complex noise models indicate that the precision in rate estimates is better than that inferred by just the simpler, random walk model of noise.

  10. Optimized Projection Matrix for Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Xu, Jianping; Pi, Yiming; Cao, Zongjie

    2010-12-01

    Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between projection matrix and sparsifying matrix. Until now, papers on CS always assume the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. This method is based on equiangular tight frame (ETF) design because an ETF has minimum coherence. It is impossible to solve the problem exactly because of the complexity. Therefore, an alternating minimization type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the necessary number of samples for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
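
    For illustration, the sketch below computes the mutual coherence of the effective dictionary formed by a projection matrix and a DCT sparsifying basis, the quantity the optimization seeks to minimize; the ETF-based alternating minimization itself is not reproduced, and the dimensions are arbitrary.

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """Largest |inner product| between distinct normalized columns of D = Phi @ Psi."""
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0)
    G = np.abs(D.T @ D)                       # Gram matrix of the normalized columns
    np.fill_diagonal(G, 0.0)
    return G.max()

def dct_basis(n):
    """Orthonormal DCT-II sparsifying basis."""
    k = np.arange(n)
    Psi = np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n)
    return Psi / np.linalg.norm(Psi, axis=0)

rng = np.random.default_rng(7)
m, n = 32, 128
Phi_random = rng.standard_normal((m, n))      # the usual i.i.d. Gaussian projection matrix
print("coherence of random projection:", mutual_coherence(Phi_random, dct_basis(n)))
# An optimized projection would push this value toward the Welch lower bound
# sqrt((n - m) / (m * (n - 1))), which an equiangular tight frame attains.
```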

  11. On the Wigner law in dilute random matrices

    NASA Astrophysics Data System (ADS)

    Khorunzhy, A.; Rodgers, G. J.

    1998-12-01

    We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
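
    A quick numerical illustration (not from the paper): the eigenvalue histogram of a randomly diluted symmetric matrix, rescaled by the dilution probability, is compared against the semicircle density.

```python
import numpy as np

rng = np.random.default_rng(8)
N, p = 2000, 0.1                              # matrix size, probability that an entry survives dilution

A = rng.standard_normal((N, N))
A = (A + A.T) / np.sqrt(2)                    # symmetric random matrix
keep = rng.random((N, N)) < p
keep = np.triu(keep) | np.triu(keep, 1).T     # symmetric dilution pattern
A = A * keep                                  # random dilution of the matrix entries

# After dilution the entry variance is ~p, so rescaling by sqrt(N p) should give
# the semicircle density rho(x) = sqrt(4 - x^2) / (2 pi) supported on [-2, 2].
eig = np.linalg.eigvalsh(A) / np.sqrt(N * p)
hist, edges = np.histogram(eig, bins=60, range=(-2.5, 2.5), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.maximum(4 - centers ** 2, 0)) / (2 * np.pi)
print("max deviation from the semicircle:", np.abs(hist - semicircle).max())  # shrinks with N
```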

  12. The Matrix Analogies Test: A Validity Study with the K-ABC.

    ERIC Educational Resources Information Center

    Smith, Douglas K.

    The Matrix Analogies Test-Expanded Form (MAT-EF) and Kaufman Assessment Battery for Children (K-ABC) were administered in counterbalanced order to two randomly selected samples of students in grades 2 through 5. The MAT-EF was recently developed to measure non-verbal reasoning. The samples included 26 non-handicapped second graders in a rural…

  13. Microstructure of the IMF turbulences at 2.5 AU

    NASA Technical Reports Server (NTRS)

    Mavromichalaki, H.; Vassilaki, A.; Marmatsouri, L.; Moussas, X.; Quenby, J. J.; Smith, E. J.

    1995-01-01

    A detailed analysis of small-period (15-900 sec) magnetohydrodynamic (MHD) turbulence of the interplanetary magnetic field (IMF) has been made using Pioneer-11 high time resolution data (0.75 sec) inside a Corotating Interaction Region (CIR) at a heliocentric distance of 2.5 AU in 1973. The methods used are hodogram analysis, minimum variance matrix analysis and coherence analysis. The minimum variance analysis gives evidence of linearly polarized wave modes. Coherence analysis has shown that the field fluctuations are dominated by fast magnetosonic modes with periods of 15 sec to 15 min. However, it is also shown that some small-amplitude Alfven waves are present in the trailing edge of this region, with characteristic periods of 15-200 sec. The observed wave modes are locally generated and possibly attributable to the scattering of Alfven wave energy into random magnetosonic waves.

  14. Platelet-rich fibrin matrix in the management of arthroscopic repair of the rotator cuff: a prospective, randomized, double-blinded study.

    PubMed

    Weber, Stephen C; Kauffman, Jeffrey I; Parise, Carol; Weber, Sophia J; Katz, Stephen D

    2013-02-01

    Arthroscopic rotator cuff repair has a high rate of patient satisfaction. However, multiple studies have shown significant rates of anatomic failure. Biological augmentation would seem to be a reasonable technique to improve clinical outcomes and healing rates. To represent a prospective, double-blinded, randomized study to assess the use of platelet-rich fibrin matrix (PRFM) in rotator cuff surgery. Randomized controlled trial; level of evidence, 1. Prestudy power analysis demonstrated that a sample size of 30 patients in each group (PRFM vs control) would allow recognition of a 20% difference in perioperative pain scores. Sixty consecutive patients were randomized to either receive a commercially available PRFM product or not. Preoperative and postoperative range of motion (ROM), University of California-Los Angeles (UCLA), and simple shoulder test (SST) scores were recorded. Surgery was performed using an arthroscopic single-row technique. Visual analog scale (VAS) pain scores were obtained upon arrival to the recovery room and 1 hour postoperatively, and narcotic consumption was recorded and converted to standard narcotic equivalents. The SST and ROM measurements were taken at 3, 6, 9, and 12 weeks postoperatively, and final (1 year) American shoulder and elbow surgeons (ASES) shoulder and UCLA shoulder scores were assessed. There were no complications. Randomization created comparable groups except that the PRFM group was younger than the control group (mean ± SD, 59.67 ± 8.16 y vs 64.50 ± 8.59 y, respectively; P < .05). Mean surgery time was longer for the PRFM group than for the control group (83.28 ± 17.13 min vs 73.28 ± 17.18 min, respectively; P < .02). There was no significant difference in VAS scores or narcotic use between groups and no statistically significant differences in recovery of motion, SST, or ASES scores. Mean ASES scores were 82.48 ± 8.77 (PRFM group) and 82.52 ± 12.45 (controls) (F(1,56) = 0.00, P > .98). Mean UCLA shoulder scores were 27.94 ± 4.98 for the PRFM group versus 29.59 ± 1.68 for the controls (P < .046). Structural results correlated with age and size of the tear and did not differ between the groups. Platelet-rich fibrin matrix was not shown to significantly improve perioperative morbidity, clinical outcomes, or structural integrity. While longer term follow-up or different platelet-rich plasma formulations may show differences, early follow-up does not show significant improvement in perioperative morbidity, structural integrity, or clinical outcome.

  15. An analysis of waves in stochastic layered media using a transition matrix method

    NASA Astrophysics Data System (ADS)

    Kotulski, Zbigniew

    This thesis is the result of several years of work by the author. The research was also the basis for several publications from 1989 to 1992 on wave propagation in randomly structured layered media. At the time the author was employed at the Institute of Basic Problems of Technology of the Polish Academy of Sciences in Warsaw, where he worked as a member of a team led by Professor Kazimierz Sobczyk. He also spent a year at the Institute of Applied Mathematics at Heidelburg University on a research stipend from the Humboldt Foundation and worked with Professor Herman Rost. In writing the last publication used in the thesis, the author also received financial support from the Scientific Research Committee under Individual Grant No 3 0941 91 01 entitled 'Wave Impulses in Structural Members with Random Properties'.

  16. CMV matrices in random matrix theory and integrable systems: a survey

    NASA Astrophysics Data System (ADS)

    Nenciu, Irina

    2006-07-01

    We present a survey of recent results concerning a remarkable class of unitary matrices, the CMV matrices. We are particularly interested in the role they play in the theory of random matrices and integrable systems. Throughout the paper we also emphasize the analogies and connections to Jacobi matrices.

  17. The cosmic microwave background radiation power spectrum as a random bit generator for symmetric- and asymmetric-key cryptography.

    PubMed

    Lee, Jeffrey S; Cleaver, Gerald B

    2017-10-01

    In this note, the Cosmic Microwave Background (CMB) Radiation is shown to be capable of functioning as a Random Bit Generator, and constitutes an effectively infinite supply of truly random one-time pad values of arbitrary length. It is further argued that the CMB power spectrum potentially conforms to the FIPS 140-2 standard. Additionally, its applicability to the generation of an (n × n) random key matrix for a Vernam cipher is established.
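
    The Vernam step itself is easy to illustrate. The Python sketch below shows a one-time pad XOR using a 16-byte random key block standing in for an (n × n) key matrix; the key bits here come from a generic secure source, whereas the paper proposes deriving them from measured CMB power-spectrum values, a step not reproduced here. All names and parameters are hypothetical.

      import secrets  # stand-in source of random bits; the paper proposes deriving them from CMB data

      def vernam_encrypt(message: bytes, key: bytes) -> bytes:
          """One-time pad: XOR each message byte with a key byte of equal length."""
          if len(key) != len(message):
              raise ValueError("one-time pad key must match message length")
          return bytes(m ^ k for m, k in zip(message, key))

      # Hypothetical usage: a 4 x 4 random key "matrix" (16 bytes) for a 16-byte block.
      key_matrix = secrets.token_bytes(16)
      ciphertext = vernam_encrypt(b"sixteen byte msg", key_matrix)
      recovered = vernam_encrypt(ciphertext, key_matrix)  # XOR is its own inverse
      assert recovered == b"sixteen byte msg"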

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forrester, Peter J., E-mail: p.forrester@ms.unimelb.edu.au; Thompson, Colin J.

    The Golden-Thompson inequality, Tr(e^{A+B}) ⩽ Tr(e^A e^B) for A, B Hermitian matrices, appeared in independent works by Golden and Thompson published in 1965. Both of these were motivated by considerations in statistical mechanics. In recent years the Golden-Thompson inequality has found applications to random matrix theory. In this article, we detail some historical aspects relating to Thompson's work, giving in particular a hitherto unpublished proof due to Dyson, and correspondence with Pólya. We show too how the 2 × 2 case relates to hyperbolic geometry, and how the original inequality holds true with the trace operation replaced by any unitarily invariant norm. In relation to the random matrix applications, we review its use in the derivation of concentration-type lemmas for sums of random matrices due to Ahlswede-Winter, and Oliveira, generalizing various classical results.

  19. Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, David O.

    2007-01-01

    A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transients, and deterministic waveforms.

  20. Money creation process in a random redistribution model

    NASA Astrophysics Data System (ADS)

    Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan

    2014-01-01

    In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
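
    As a minimal illustration of the mechanism described above, the following Python sketch simulates a random pairwise-exchange economy with a debt limit; the agent count, debt limit, and step count are illustrative choices, not the authors' parameters. Money is created whenever an agent with no money pays by going into debt.

      import random

      def simulate(n_agents=1000, steps=200_000, debt_limit=5, seed=0):
          """Random pairwise exchange of one unit of money, with borrowing allowed
          down to -debt_limit. The total positive money grows when a zero-balance
          agent pays by going into debt, i.e. money is created."""
          rng = random.Random(seed)
          money = [1] * n_agents          # everyone starts with one unit
          for _ in range(steps):
              payer, payee = rng.randrange(n_agents), rng.randrange(n_agents)
              if payer == payee:
                  continue
              if money[payer] > -debt_limit:   # can still pay, possibly by borrowing
                  money[payer] -= 1
                  money[payee] += 1
          total_positive = sum(m for m in money if m > 0)
          total_debt = -sum(m for m in money if m < 0)
          return total_positive, total_debt

      if __name__ == "__main__":
          pos, debt = simulate()
          print(f"positive money {pos}, outstanding debt {debt}")  # pos = initial money + debt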

  1. A short-term and long-term comparison of root coverage with an acellular dermal matrix and a subepithelial graft.

    PubMed

    Harris, Randall J

    2004-05-01

    Obtaining predictable and esthetic root coverage has become important. Unfortunately, there is only a limited amount of information available on the long-term results of root coverage procedures. The goal of this study was to evaluate the short-term and long-term root coverage results obtained with an acellular dermal matrix and a subepithelial graft. An a priori power analysis was done to determine that 25 was an adequate sample size for each group in this study. Twenty-five patients treated with either an acellular dermal matrix or a subepithelial graft for root coverage were included in this study. The short-term (mean 12.3 to 13.2 weeks) and long-term (mean 48.1 to 49.2 months) results were compared. Additionally, various factors were evaluated to determine whether they could affect the results. This study was a retrospective study of patients in a fee-for-service private periodontal practice. The patients were not randomly assigned to treatment groups. The mean root coverages for the short-term acellular dermal matrix (93.4%), short-term subepithelial graft (96.6%), and long-term subepithelial graft (97.0%) were statistically similar. All three were statistically greater than the long-term acellular dermal matrix mean root coverage (65.8%). Similar results were noted in the change in recession. There were smaller probing reductions and less of an increase in keratinized tissue with the acellular dermal matrix than the subepithelial graft. None of the factors evaluated resulted in the acellular dermal graft having a statistically significant better result than the subepithelial graft. However, in long-term cases where multiple defects were treated with an acellular dermal matrix, the mean root coverage (70.8%) was greater than the mean root coverage in long-term cases where a single defect was treated with an acellular dermal matrix (50.0%). The mean results with the subepithelial graft held up with time better than the mean results with an acellular dermal matrix. However, the results were not universal. In 32.0% of the cases treated with an acellular dermal matrix, the results improved or remained stable with time.

  2. Random density matrices versus random evolution of open system

    NASA Astrophysics Data System (ADS)

    Pineda, Carlos; Seligman, Thomas H.

    2015-10-01

    We present and compare two families of ensembles of random density matrices. The first, the static ensemble, is obtained by foliating an unbiased ensemble of density matrices. As the criterion we use fixed purity, the simplest example of a useful convex function. The second, the dynamic ensemble, is inspired by random matrix models for decoherence, where one evolves a separable pure state with a random Hamiltonian until a given value of purity in the central system is achieved. Several families of Hamiltonians, adequate for different physical situations, are studied. We focus on a two-qubit central system and obtain exact expressions for the static case. The ensemble displays a peak around Werner-like states, modulated by nodes at the degeneracies of the density matrices. For moderate and strong interactions good agreement between the static and the dynamic ensembles is found. Even in a model where one qubit does not interact with the environment, excellent agreement is found, but only if there is maximal entanglement with the interacting one. The discussion is started by recalling similar considerations for scattering theory. At the end, we comment on the reach of the results for other convex functions of the density matrix, and exemplify the situation with the von Neumann entropy.
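
    A crude numerical stand-in for the static construction can be sketched as follows in Python: draw two-qubit density matrices from the unbiased Hilbert-Schmidt ensemble and keep those whose purity falls in a narrow window around a target value. This rejection step only approximates the fixed-purity foliation described above and is not the authors' exact construction; all parameters are illustrative.

      import numpy as np

      def hs_random_density_matrix(dim, rng):
          """Draw rho = G G^dagger / Tr(G G^dagger) with G a complex Ginibre matrix
          (the unbiased Hilbert-Schmidt ensemble)."""
          g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
          rho = g @ g.conj().T
          return rho / np.trace(rho).real

      def sample_near_fixed_purity(dim=4, target=0.5, tol=0.01, n=200, seed=1):
          """Crude foliation: keep draws whose purity Tr(rho^2) lies within tol of target."""
          rng = np.random.default_rng(seed)
          kept = []
          while len(kept) < n:
              rho = hs_random_density_matrix(dim, rng)
              purity = np.trace(rho @ rho).real
              if abs(purity - target) < tol:
                  kept.append(rho)
          return kept

      states = sample_near_fixed_purity()
      print(len(states), "two-qubit states with purity close to 0.5")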

  3. Free Vibration of Uncertain Unsymmetrically Laminated Beams

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Goyal, Vijay K.

    2001-01-01

    Monte Carlo Simulation and Stochastic FEA are used to predict randomness in the free vibration response of thin unsymmetrically laminated beams. For the present study, it is assumed that randomness in the response is caused only by uncertainties in the ply orientations. The ply orientations may become random or uncertain during the manufacturing process. A new 16-dof beam element, based on the first-order shear deformation beam theory, is used to study the stochastic nature of the natural frequencies. Using variational principles, the element stiffness matrix and mass matrix are obtained through analytical integration. Using a random sequence, a large data set is generated containing possible random ply orientations. These data are assumed to be symmetric. The stochastic-based finite element model for free vibrations predicts the relation between the randomness in fundamental natural frequencies and the randomness in ply orientation. The sensitivity derivatives are calculated numerically through an exact formulation. The squared fundamental natural frequencies are expressed in terms of deterministic and probabilistic quantities, allowing one to determine how sensitive they are to variations in ply angles. The predicted mean-valued fundamental natural frequency squared and the variance of the present model are in good agreement with Monte Carlo Simulation. Results also show that variations of plus or minus 5 degrees in ply angles can affect the free vibration response of unsymmetrically and symmetrically laminated beams.

  4. Comparison of the compressive strengths for stitched and toughened composite systems

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    1994-01-01

    The compression strength of a stitched and a toughened-matrix graphite/epoxy composite was determined and compared to a baseline unstitched, untoughened composite. Two different layups with a variety of test lengths were tested under both ambient and hot/wet conditions. No significant difference in strength was seen for the different materials when the gage lengths of the specimens were long enough to lead to a buckling failure. For shorter specimens, a 30 percent reduction in strength from the baseline was seen due to stitching for both a 48-ply quasi-isotropic and a (0/45/0/-45/90/-45/0/45/0)s laminate. Analysis of the results suggested that the decrease in strength was due to increased fiber misalignment caused by the stitches. An observed increase in strength with decreasing gage length, which was seen for all materials, was explained with a size effect model. The model assumed a random distribution of flaws (misaligned fibers). The toughened materials showed a small increase in strength over the baseline material for both laminates, presumably due to the compensating effects of a more compliant matrix and straighter fibers in the toughened material. The hot/wet strength of the stitched and baseline materials fell 30 percent below their ambient strengths for shorter, nonbuckling specimens, while the strength of the toughened-matrix material only fell 20 percent. Video images of the failing specimens were recorded and showed local failures prior to global collapse of the specimen. These images support the theory of a random distribution of flaws controlling composite failure. Failed specimen appearance, however, seems to be a misleading indication of the cause of failure.

  5. Embedded random matrix ensembles from nuclear structure and their recent applications

    NASA Astrophysics Data System (ADS)

    Kota, V. K. B.; Chavda, N. D.

    Embedded random matrix ensembles generated by random interactions (of low body rank, usually two-body) in the presence of a one-body mean field, introduced in nuclear structure physics, are now established to be indispensable in describing statistical properties of a large number of isolated finite quantum many-particle systems. Lie algebra symmetries of the interactions, as identified from the nuclear shell model and the interacting boson model, led to the introduction of a variety of embedded ensembles (EEs). These ensembles, with a mean field and a chaos-generating two-body interaction, produce delocalization of wave functions in the Fock space of the mean-field basis states in three different stages. The last stage corresponds to what one may call thermalization, and complex nuclei, as seen from many shell model calculations, lie in this region. Besides briefly describing these ensembles, we present their recent applications to nuclear structure: (i) nuclear level densities with interactions; (ii) orbit occupancies; (iii) neutrinoless double beta decay nuclear transition matrix elements as transition strengths. In addition, applications that go beyond nuclear structure are presented briefly: (i) fidelity, decoherence, entanglement and thermalization in isolated finite quantum systems with interactions; (ii) quantum transport in disordered networks connected by many-body interactions with centrosymmetry; (iii) semicircle to Gaussian transition in eigenvalue densities with k-body random interactions and its relation to the Sachdev-Ye-Kitaev (SYK) model for Majorana fermions.

  6. JaSTA-2: Second version of the Java Superposition T-matrix Application

    NASA Astrophysics Data System (ADS)

    Halder, Prithish; Das, Himadri Sekhar

    2017-12-01

    In this article, we announce the development of a new version of the Java Superposition T-matrix App (JaSTA-2), to study the light scattering properties of porous aggregate particles. It has been developed using Netbeans 7.1.2, a Java integrated development environment (IDE). JaSTA uses the double-precision superposition T-matrix codes for multi-sphere clusters in random orientation developed by Mackowski and Mishchenko (1996). The new version offers two options for the input parameters: (i) single wavelength and (ii) multiple wavelengths. The first option (which retains the applicability of the older version of JaSTA) calculates the light scattering properties of aggregates of spheres for a single wavelength at a given instant of time, whereas the second option can execute the code for multiple wavelengths in a single run. JaSTA-2 provides convenient and quicker data analysis, which can be used in diverse fields like planetary science, atmospheric physics, nanoscience, etc. This version of the software is developed for the Linux platform only, and it can be operated over all the cores of a processor using the multi-threading option.

  7. Fluctuation-dissipation theory of input-output interindustrial relations

    NASA Astrophysics Data System (ADS)

    Iyetomi, Hiroshi; Nakayama, Yasuhiro; Aoyama, Hideaki; Fujiwara, Yoshi; Ikeda, Yuichi; Souma, Wataru

    2011-01-01

    In this study, the fluctuation-dissipation theory is invoked to shed light on input-output interindustrial relations at a macroscopic level by its application to indices of industrial production (IIP) data for Japan. Statistical noise arising from finiteness of the time series data is carefully removed by making use of the random matrix theory in an eigenvalue analysis of the correlation matrix; as a result, two dominant eigenmodes are detected. Our previous study successfully used these two modes to demonstrate the existence of intrinsic business cycles. Here a correlation matrix constructed from the two modes describes genuine interindustrial correlations in a statistically meaningful way. Furthermore, it enables us to quantitatively discuss the relationship between shipments of final demand goods and production of intermediate goods in a linear response framework. We also investigate distinctive external stimuli for the Japanese economy exerted by the current global economic crisis. These stimuli are derived from residuals of moving-average fluctuations of the IIP remaining after subtracting the long-period components arising from inherent business cycles. The observation reveals that the fluctuation-dissipation theory is applicable to an economic system that is supposed to be far from physical equilibrium.

  8. Study of free edge effect on sub-laminar scale for thermoplastic composite laminates

    NASA Astrophysics Data System (ADS)

    Shen, Min; Lu, Huanbao; Tong, Jingwei; Su, Yishi; Li, Hongqi; Lv, Yongmin

    2008-11-01

    The interlaminar deformation on the free-edge surface of thermoplastic composite AS4/PEEK laminates under bending loading is studied by means of the digital image correlation method (DICM) using a white-light industrial microscope. During the test, no artificial stochastic spray is applied to the specimen surface. On the laminar scale, the interlaminar displacements of the [0/90]3s laminate are measured. On the sub-laminar scale, the tested area includes a limited number of fibers; the fibers are elastic with an actual diameter of about 7 μm, and the PEEK matrix has elastic-plastic behavior. The local mesoscopic fields of interlaminar displacement near the fiber-matrix interface are obtained by DICM. The distributions of in-plane elastic-plastic stresses near the interlaminar interface between different layers are indirectly obtained by coupling the DICM results with the finite element method. Based on the above DICM experiments, the influences of the random fiber distribution and the PEEK matrix ductility on the sub-laminar scale on the interlaminar mesomechanical behavior are investigated. The experimental results of the present work are important for multi-scale theory and numerical analysis of interlaminar deformation and stresses in these composite laminates.

  9. Evaluating adhesion reduction efficacy of type I/III collagen membrane and collagen-GAG resorbable matrix in primary flexor tendon repair in a chicken model.

    PubMed

    Turner, John B; Corazzini, Rubina L; Butler, Timothy J; Garlick, David S; Rinker, Brian D

    2015-09-01

    Reduction of peritendinous adhesions after injury and repair has been the subject of extensive prior investigation. The application of a circumferential barrier at the repair site may limit the quantity of peritendinous adhesions while preserving the tendon's innate ability to heal. The authors compare the effectiveness of a type I/III collagen membrane and a collagen-glycosaminoglycan (GAG) resorbable matrix in reducing tendon adhesions in an experimental chicken model of a "zone II" tendon laceration and repair. In Leghorn chickens, flexor tendons were sharply divided using a scalpel and underwent repair in a standard fashion (54 total repairs). The sites were treated with a type I/III collagen membrane, collagen-GAG resorbable matrix, or saline in a randomized fashion. After 3 weeks, qualitative and semiquantitative histological analysis was performed to evaluate the "extent of peritendinous adhesions" and "nature of tendon healing." The data was evaluated with chi-square analysis and unpaired Student's t test. For both collagen materials, there was a statistically significant improvement in the degree of both extent of peritendinous adhesions and nature of tendon healing relative to the control group. There was no significant difference seen between the two materials. There was one tendon rupture observed in each treatment group. Surgical handling characteristics were subjectively favored for type I/III collagen membrane over the collagen-GAG resorbable matrix. The ideal method of reducing clinically significant tendon adhesions after injury remains elusive. Both materials in this study demonstrate promise in reducing tendon adhesions after flexor tendon repair without impeding tendon healing in this model.

  10. Quantum-inspired algorithm for estimating the permanent of positive semidefinite matrices

    NASA Astrophysics Data System (ADS)

    Chakhmakhchyan, L.; Cerf, N. J.; Garcia-Patron, R.

    2017-08-01

    We construct a quantum-inspired classical algorithm for computing the permanent of Hermitian positive semidefinite matrices by exploiting a connection between these mathematical structures and the boson sampling model. Specifically, the permanent of a Hermitian positive semidefinite matrix can be expressed in terms of the expected value of a random variable, which stands for a specific photon-counting probability when measuring a linear-optically evolved random multimode coherent state. Our algorithm then approximates the matrix permanent from the corresponding sample mean and is shown to run in polynomial time for various sets of Hermitian positive semidefinite matrices, achieving a precision that improves over known techniques. This work illustrates how quantum optics may benefit algorithm development.
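
    The estimator can be illustrated classically. For a Hermitian positive semidefinite A, Wick's theorem gives perm(A) = E[prod_i |x_i|^2] for x ~ CN(0, A); the Python sketch below samples that expectation directly. This is a simplified stand-in for the authors' linear-optics formulation in terms of photon-counting probabilities on evolved coherent states, not a reproduction of it, and the sample size is an arbitrary choice.

      import numpy as np

      def permanent_mc(A, n_samples=200_000, seed=0):
          """Monte Carlo estimate of perm(A) for a Hermitian PSD matrix A, using the
          complex-Gaussian identity perm(A) = E[ prod_i |x_i|^2 ] with x ~ CN(0, A)."""
          rng = np.random.default_rng(seed)
          n = A.shape[0]
          # factor A = L L^dagger (eigendecomposition handles semidefinite A)
          w, V = np.linalg.eigh(A)
          L = V * np.sqrt(np.clip(w, 0.0, None))
          z = (rng.normal(size=(n_samples, n)) + 1j * rng.normal(size=(n_samples, n))) / np.sqrt(2)
          x = z @ L.T                              # rows are samples from CN(0, A)
          return np.mean(np.prod(np.abs(x) ** 2, axis=1)).real

      # Sanity check on a small example: perm of the 3x3 all-ones matrix is 3! = 6.
      A = np.ones((3, 3))
      print(permanent_mc(A))   # should be close to 6 (estimator variance can be large)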

  11. Condition for invariant spectrum of an electromagnetic wave scattered from an anisotropic random media.

    PubMed

    Li, Jia; Wu, Pinghui; Chang, Liping

    2015-08-24

    Within the accuracy of the first-order Born approximation, sufficient conditions are derived for the invariance of the spectrum of an electromagnetic wave generated by the scattering of an electromagnetic plane wave from an anisotropic random medium. We show that the following restrictions on the properties of the incident field and the anisotropic medium must be simultaneously satisfied: 1) the elements of the dielectric susceptibility matrix of the medium must obey the scaling law; 2) the spectral components of the incident field are proportional to each other; 3) the second moments of the elements of the dielectric susceptibility matrix of the medium are inversely proportional to the frequency.

  12. Effect of Computer-Presented Organizational/Memory Aids on Problem Solving Behavior.

    ERIC Educational Resources Information Center

    Steinberg, Esther R.; And Others

    This research studied the effects of computer-presented organizational/memory aids on problem solving behavior. The aids were either matrix or verbal charts shown on the display screen next to the problem. The 104 college student subjects were randomly assigned to one of the four conditions: type of chart (matrix or verbal chart) and use of charts…

  13. Thermal modelling of normal distributed nanoparticles through thickness in an inorganic material matrix

    NASA Astrophysics Data System (ADS)

    Latré, S.; Desplentere, F.; De Pooter, S.; Seveno, D.

    2017-10-01

    Nanoscale materials showing superior thermal properties have raised the interest of the building industry. By adding these materials to conventional construction materials, it is possible to decrease the total thermal conductivity by almost one order of magnitude. This conductivity is mainly influenced by the dispersion quality within the matrix material. At the industrial scale, the main challenge is to control this dispersion to reduce or even eliminate thermal bridges, so as to reach an industrially relevant process that balances the high material cost against the superior thermal insulation properties. Therefore, a methodology is required to measure and describe these nanoscale distributions within the inorganic matrix material. These distributions are either random or normally distributed through the thickness of the matrix material. We show that the influence of these distributions is significant and modifies the thermal conductivity of the building material. Hence, this strategy will generate a thermal model allowing prediction of the thermal behavior of the nanoscale particles and their distributions. This thermal model will be validated by the hot-wire technique. For the moment, a good correlation is found between the numerical results and experimental data for a randomly distributed form of nanoparticles in all directions.

  14. Matrix Analysis of the Digital Divide in eHealth Services Using Awareness, Want, and Adoption Gap

    PubMed Central

    2012-01-01

    Background The digital divide usually refers to access or usage, but some studies have identified two other divides: awareness and demand (want). Given that the hierarchical stages of the innovation adoption process of a customer are interrelated, it is necessary and meaningful to analyze the digital divide in eHealth services through three main stages, namely, awareness, want, and adoption. Objective By following the three main integrated stages of the innovation diffusion theory, from the customer segment viewpoint, this study aimed to propose a new matrix analysis of the digital divide using the awareness, want, and adoption gap ratio (AWAG). I compared the digital divide among different groups. Furthermore, I conducted an empirical study on eHealth services to present the practicability of the proposed methodology. Methods Through a review and discussion of the literature, I proposed hypotheses and a new matrix analysis. To test the proposed method, 3074 Taiwanese respondents, aged 15 years and older, were surveyed by telephone. I used the stratified simple random sampling method, with sample size allocation proportioned by the population distribution of 23 cities and counties (strata). Results This study proposed the AWAG segment matrix to analyze the digital divide in eHealth services. First, awareness and want rates were divided into two levels at the middle point of 50%, and then the 2-dimensional cross of the awareness and want segment matrix was divided into four categories: opened group, desire-deficiency group, perception-deficiency group, and closed group. Second, according to the degrees of awareness and want, each category was further divided into four subcategories. I also defined four possible strategies, namely, hold, improve, evaluate, and leave, for different regions in the proposed matrix. An empirical test on two recently promoted eHealth services, the digital medical service (DMS) and the digital home care service (DHCS), was conducted. Results showed that for both eHealth services, the digital divides of awareness, want, and adoption existed across demographic variables, as well as between computer owners and nonowners, and between Internet users and nonusers. With respect to the analysis of the AWAG segment matrix for DMS, most of the segments, except for people with marriage status of Other or without computers, were positioned in the opened group. With respect to DHCS, segments were separately positioned in the opened, perception-deficiency, and closed groups. Conclusions Adoption does not closely follow people’s awareness or want, and a huge digital divide in adoption exists in DMS and DHCS. Thus, a strategy to promote adoption should be used for most demographic segments. PMID:22329958

  15. Design of a factorial experiment with randomization restrictions to assess medical device performance on vascular tissue.

    PubMed

    Diestelkamp, Wiebke S; Krane, Carissa M; Pinnell, Margaret F

    2011-05-20

    Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance.
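
    For readers without SAS, an analogous analysis can be sketched in Python with statsmodels: a linear mixed model with fixed effects for the three factors and a random intercept per whole plot approximates the split-plot error structure handled by PROC MIXED. The data frame below is synthetic and the factor names and effect sizes are hypothetical, chosen only to make the sketch runnable.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(42)

      # Hypothetical split-plot data: temperature is the hard-to-change (whole-plot) factor,
      # solution and artery type are randomized within each whole plot.
      rows = []
      for plot in range(12):                       # 12 whole plots
          temp = ["low", "high"][plot % 2]
          plot_effect = rng.normal(scale=5.0)      # whole-plot (restriction) error
          for solution in ["saline", "heparin", "albumin"]:
              for artery in ["carotid", "renal"]:
                  burst = 300 + (20 if temp == "high" else 0) + plot_effect + rng.normal(scale=10.0)
                  rows.append(dict(plot=plot, temp=temp, solution=solution, artery=artery, burst=burst))
      df = pd.DataFrame(rows)

      # Mixed model: fixed effects for the three factors and their interactions,
      # random intercept per whole plot, loosely analogous to PROC MIXED with a whole-plot error term.
      model = smf.mixedlm("burst ~ temp * solution * artery", df, groups=df["plot"])
      print(model.fit().summary())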

  16. Comprehensive T-Matrix Reference Database: A 2007-2009 Update

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Zakharova, Nadia T.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas

    2010-01-01

    The T-matrix method is among the most versatile, efficient, and widely used theoretical techniques for the numerically exact computation of electromagnetic scattering by homogeneous and composite particles, clusters of particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of T-matrix publications compiled by us previously and includes the publications that appeared since 2007. It also lists several earlier publications not included in the original database.

  17. Significance Testing in Confirmatory Factor Analytic Models.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; Hocevar, Dennis

    Traditionally, confirmatory factor analytic models are tested against a null model of total independence. Using randomly generated factors in a matrix of 46 aptitude tests, this approach is shown to be unlikely to reject even random factors. An alternative null model, based on a single general factor, is suggested. In addition, an index of model…

  18. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices

    PubMed Central

    Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen

    2013-01-01

    In compressed sensing, one takes n samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram (with undersampling fraction δ = n/N and sparsity fraction ρ = k/n), convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set X, for four different sets X, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
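
    A single point of such a phase-transition experiment is easy to sketch. The Python fragment below (using cvxpy for the l1 minimization) draws a Gaussian matrix and a k-sparse vector, solves basis pursuit, and checks exact recovery; sweeping (δ, ρ) over a grid and recording the empirical success probability reproduces the phase-diagram picture. This is a generic illustration under assumed sizes and tolerances, not the authors' experimental protocol.

      import numpy as np
      import cvxpy as cp

      def l1_recovers(n, N, k, seed, tol=1e-4):
          """One trial: draw a Gaussian n x N matrix and a k-sparse x0,
          solve basis pursuit min ||x||_1 s.t. Ax = Ax0, and test exact recovery."""
          rng = np.random.default_rng(seed)
          A = rng.normal(size=(n, N)) / np.sqrt(n)
          x0 = np.zeros(N)
          support = rng.choice(N, size=k, replace=False)
          x0[support] = rng.normal(size=k)
          y = A @ x0
          x = cp.Variable(N)
          cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
          return np.linalg.norm(x.value - x0) < tol * np.linalg.norm(x0)

      # One point in the (delta, rho) diagram: delta = n/N, rho = k/n.
      N, n, k = 200, 100, 20         # delta = 0.5, rho = 0.2 -- well inside the recovery region
      successes = sum(l1_recovers(n, N, k, seed=s) for s in range(10))
      print(f"recovered {successes}/10 trials")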

  19. Sequential time interleaved random equivalent sampling for repetitive signal.

    PubMed

    Zhao, Yijiu; Liu, Jingjing

    2016-12-01

    Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they are also incorporated into non-uniform sampling signal reconstruction to improve the efficiency, such as random equivalent sampling (RES). However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, and it will result in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC), whose ADC cores are time interleaved. A prototype realization of this proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while sampled at 1 GHz physically. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.

  20. Fatigue loading history reconstruction based on the rain-flow technique

    NASA Technical Reports Server (NTRS)

    Khosrovaneh, A. K.; Dowling, N. E.

    1989-01-01

    Methods are considered for reducing a non-random fatigue loading history to a concise description and then for reconstructing a time history similar to the original. In particular, three methods of reconstruction based on a rain-flow cycle counting matrix are presented. A rain-flow matrix consists of the numbers of cycles at various peak and valley combinations. Two methods are based on a two dimensional rain-flow matrix, and the third on a three dimensional rain-flow matrix. Histories reconstructed by any of these methods produce a rain-flow matrix identical to that of the original history, and as a result the resulting time history is expected to produce a fatigue life similar to that for the original. The procedures described allow lengthy loading histories to be stored in compact form.

  1. Analytical quality assurance in veterinary drug residue analysis methods: matrix effects determination and monitoring for sulfonamides analysis.

    PubMed

    Hoff, Rodrigo Barcellos; Rübensam, Gabriel; Jank, Louise; Barreto, Fabiano; Peralba, Maria do Carmo Ruaro; Pizzolato, Tânia Mara; Silvia Díaz-Cruz, M; Barceló, Damià

    2015-01-01

    In residue analysis of veterinary drugs in foodstuffs, matrix effects are one of the most critical points. This work presents a discussion of approaches used to estimate, minimize, and monitor matrix effects in bioanalytical methods. Qualitative and quantitative methods for the estimation of matrix effects, such as post-column infusion, slope-ratio analysis, calibration curves (mathematical and statistical analysis) and control chart monitoring, are discussed using real data. Matrix effects varied over a wide range depending on the analyte and the sample preparation method: pressurized liquid extraction of liver samples showed matrix effects from 15.5 to 59.2%, while ultrasound-assisted extraction gave values from 21.7 to 64.3%. The influence of the matrix itself was also evaluated: for sulfamethazine analysis, signal losses varied from -37% to -96% for fish and eggs, respectively. Advantages and drawbacks are also discussed in the context of a proposed workflow for matrix effect assessment, applied to real data from sulfonamide residue analysis. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    PubMed

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices have been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error as genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
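
    The centering and scaling step can be sketched for the null Wishart case in Python: the largest eigenvalue of X X^T, with X a p × n matrix of unit-variance entries, is shifted and scaled with Johnstone's (2001) constants so that it is approximately Tracy-Widom distributed; a scaled statistic exceeding the tabulated critical value indicates variance beyond sampling error. The 95th-percentile cutoff used below is an approximate tabulated value, and this toy case stands in for, rather than reproduces, the REML genetic-eigenvalue setting discussed above.

      import numpy as np

      def tw_statistic(X):
          """Center and scale the largest eigenvalue of the Wishart matrix X X^T
          (X is p x n with unit-variance entries) toward the Tracy-Widom law,
          following Johnstone's (2001) constants."""
          p, n = X.shape
          lam1 = np.linalg.eigvalsh(X @ X.T)[-1]
          mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
          sigma = (np.sqrt(n - 1) + np.sqrt(p)) * (1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3)
          return (lam1 - mu) / sigma

      rng = np.random.default_rng(0)
      p, n = 50, 500
      stat = tw_statistic(rng.normal(size=(p, n)))
      TW1_95 = 0.98          # approximate 95th percentile of the Tracy-Widom (beta = 1) law
      print(stat, "exceeds null cutoff" if stat > TW1_95 else "consistent with sampling error")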

  3. Random matrix approach to group correlations in development country financial market

    NASA Astrophysics Data System (ADS)

    Qohar, Ulin Nuha Abdul; Lim, Kyuseong; Kim, Soo Yong; Liong, The Houw; Purqon, Acep

    2015-12-01

    Financial markets are a borderless economic activity; everyone in this world has the right to participate in stock transactions. The movement of stocks is interesting to discuss in various sciences: economists and mathematicians alike try to explain and predict stock movements. Econophysics is a discipline that studies economic behavior using methods from physics, including particle physics, to explain stock movements. Stocks, which tend to be unpredictable, are treated probabilistically, like probabilistic particles. Random Matrix Theory, one method used to analyze such probabilistic particles, is applied here to the correlation matrix of stocks in developing-country markets to characterize their collective movement. The characteristics of the developing-country stock markets are obtained, with the characteristics of developed-country stock markets used as a benchmark for comparison. The results show that the market-wide effect is absent in the Philippine market and weak in the Indonesian market. In contrast, the developed-country (US) market has a strong market-wide effect.
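
    The basic diagnostic used in such studies can be sketched in a few lines of Python: compare the eigenvalues of the return correlation matrix with the Marchenko-Pastur band expected for uncorrelated returns, and read the strength of the market-wide effect from how far the largest eigenvalue sits above the band (together with a near-uniform leading eigenvector). The returns below are synthetic and only stand in for the developing-market data analyzed in the paper.

      import numpy as np

      def market_mode_strength(returns):
          """returns: T x N matrix of stock returns.
          Compares eigenvalues of the correlation matrix with the Marchenko-Pastur
          band [(1 - sqrt(q))^2, (1 + sqrt(q))^2], q = N/T, and reports the largest
          ('market-wide') eigenvalue relative to the upper edge of that band."""
          T, N = returns.shape
          z = (returns - returns.mean(0)) / returns.std(0)
          C = (z.T @ z) / T
          eigvals, eigvecs = np.linalg.eigh(C)
          q = N / T
          lam_plus = (1 + np.sqrt(q)) ** 2
          # A strong market-wide effect shows up as eigvals[-1] >> lam_plus with a
          # near-uniform leading eigenvector.
          return eigvals[-1], lam_plus, eigvecs[:, -1]

      # Synthetic example: one common factor plus idiosyncratic noise.
      rng = np.random.default_rng(3)
      T, N = 1000, 100
      common = rng.normal(size=(T, 1))
      returns = 0.4 * common + rng.normal(size=(T, N))
      lam_max, lam_plus, v = market_mode_strength(returns)
      print(f"largest eigenvalue {lam_max:.2f} vs MP upper edge {lam_plus:.2f}")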

  4. Direct Demonstration of the Concept of Unrestricted Effective-Medium Approximation

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Zhanna M.; Zakharova, Nadezhda T.

    2014-01-01

    The modified unrestricted effective-medium refractive index is defined as one that yields accurate values of a representative set of far-field scattering characteristics (including the scattering matrix) for an object made of randomly heterogeneous materials. We validate the concept of the modified unrestricted effective-medium refractive index by comparing numerically exact superposition T-matrix results for a spherical host randomly filled with a large number of identical small inclusions and Lorenz-Mie results for a homogeneous spherical counterpart. A remarkable quantitative agreement between the superposition T-matrix and Lorenz-Mie scattering matrices over the entire range of scattering angles demonstrates unequivocally that the modified unrestricted effective-medium refractive index is a sound (albeit still phenomenological) concept provided that the size parameter of the inclusions is sufficiently small and their number is sufficiently large. Furthermore, it appears that in cases when the concept of the modified unrestricted effective-medium refractive index works, its actual value is close to that predicted by the Maxwell-Garnett mixing rule.

  5. Coherent Patterns in Nuclei and in Financial Markets

    NASA Astrophysics Data System (ADS)

    DroŻdŻ, S.; Kwapień, J.; Speth, J.

    2010-07-01

    In the area of traditional physics, the atomic nucleus belongs to the most complex systems. It involves essentially all elements that characterize complexity, including the most distinctive one, whose essence is a permanent coexistence of coherent patterns and of randomness. From a more interdisciplinary perspective, it is the financial markets that represent an extreme complexity. Here, based on the matrix formalism, we set out some parallels between several characteristics of complexity in the above two systems. We refer, in particular, to the concept—historically originating from nuclear physics considerations—of random matrix theory and demonstrate its utility in quantifying characteristics of the coexistence of chaos and collectivity, also for the financial markets. In this latter case we show examples that illustrate mapping of the matrix formulation onto concepts originating from graph theory. Finally, attention is drawn to some novel aspects of financial coherence, which leave room for speculation as to whether analogous effects can be detected in atomic nuclei or in other strongly interacting Fermi systems.

  6. Data-Driven Sampling Matrix Boolean Optimization for Energy-Efficient Biomedical Signal Acquisition by Compressive Sensing.

    PubMed

    Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao

    2017-04-01

    Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
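
    The hardware argument is easy to see in code: with a 0/1 sampling matrix the projection y = Φx is a set of selective accumulations (additions only), whereas a real-valued matrix needs full multiplications for every entry. The Python sketch below builds a random Boolean matrix for comparison only; the data-driven optimization of Φ proposed in the paper is not reproduced, and all sizes are illustrative.

      import numpy as np

      rng = np.random.default_rng(7)
      n, m = 256, 64                      # signal length, number of measurements

      # Random Boolean sampling matrix: each entry is 0 or 1, so y = phi_bool @ x
      # reduces to selective accumulation (additions only) in hardware.
      phi_bool = rng.integers(0, 2, size=(m, n)).astype(float)
      phi_gauss = rng.normal(size=(m, n)) / np.sqrt(m)

      x = np.zeros(n)
      x[rng.choice(n, size=8, replace=False)] = rng.normal(size=8)   # sparse test signal

      y_bool = phi_bool @ x               # realizable with adders
      y_gauss = phi_gauss @ x             # needs full multipliers / real-valued weights
      print(y_bool[:4], y_gauss[:4])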

  7. Randomized placebo controlled blinded study to assess valsartan efficacy in preventing left ventricle remodeling in patients with dual chamber pacemaker--Rationale and design of the trial.

    PubMed

    Tomasik, Andrzej; Jacheć, Wojciech; Wojciechowska, Celina; Kawecki, Damian; Białkowska, Beata; Romuk, Ewa; Gabrysiak, Artur; Birkner, Ewa; Kalarus, Zbigniew; Nowalany-Kozielska, Ewa

    2015-05-01

    Dual chamber pacing is known to have a detrimental effect on cardiac performance, and heart failure occurring eventually is associated with increased mortality. Experimental studies of pacing in dogs have shown contractile dyssynchrony leading to diffuse alterations in the extracellular matrix. In parallel, studies on experimental ischemia/reperfusion injury have shown the efficacy of valsartan in inhibiting the activity of matrix metalloproteinase-9, increasing the activity of tissue inhibitor of matrix metalloproteinase-3, and preserving global contractility and left ventricular ejection fraction. The purpose is to present the rationale and design of a randomized blinded trial assessing whether 12-month administration of valsartan will prevent left ventricular remodeling in patients with preserved left ventricular ejection fraction (LVEF ≥ 40%) and a first implantation of a dual chamber pacemaker. A total of 100 eligible patients will be randomized into three parallel arms: placebo, valsartan 80 mg/daily, and valsartan 160 mg/daily, added to previously used drugs. The primary endpoint will be assessment of valsartan efficacy in preventing left ventricular remodeling during 12-month follow-up. We assess patients' functional capacity, blood plasma activity of matrix metalloproteinases and their tissue inhibitors, NT-proBNP, tumor necrosis factor alpha, and Troponin T. Left ventricular function and remodeling are assessed echocardiographically: M-mode, B-mode, and tissue Doppler imaging. If valsartan proves effective, it will be an attractive measure to improve long-term prognosis in an aging population and an increasing number of pacemaker recipients. ClinicalTrials.gov (NCT01805804). Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Xenogenous Collagen Matrix and/or Enamel Matrix Derivative for Treatment of Localized Gingival Recessions: A Randomized Clinical Trial. Part I: Clinical Outcomes.

    PubMed

    Sangiorgio, João Paulo Menck; Neves, Felipe Lucas da Silva; Rocha Dos Santos, Manuela; França-Grohmann, Isabela Lima; Casarin, Renato Corrêa Viana; Casati, Márcio Zaffalon; Santamaria, Mauro Pedrine; Sallum, Enilson Antonio

    2017-12-01

    Considering xenogeneic collagen matrix (CM) and enamel matrix derivative (EMD) characteristics, it is suggested that their combination could promote superior clinical outcomes in root coverage procedures. Thus, the aim of this parallel, double-masked, dual-center, randomized clinical trial is to evaluate clinical outcomes after treatment of localized gingival recession (GR) by a coronally advanced flap (CAF) combined with CM and/or EMD. Sixty-eight patients presenting one Miller Class I or II GR were randomly assigned to receive either CAF (n = 17); CAF + CM (n = 17); CAF + EMD (n = 17), or CAF + CM + EMD (n = 17). Recession height, probing depth, clinical attachment level, and keratinized tissue width and thickness were measured at baseline and 90 days and 6 months after surgery. The obtained root coverage was 68.04% ± 24.11% for CAF; 87.20% ± 15.01% for CAF + CM; 88.77% ± 20.66% for CAF + EMD; and 91.59% ± 11.08% for CAF + CM + EMD after 6 months. Groups that received biomaterials showed greater values (P <0.05). Complete root coverage (CRC) for CAF + EMD was 70.59%, significantly superior to CAF alone (23.53%); CAF + CM (52.94%), and CAF + CM + EMD (51.47%) (P <0.05). Keratinized tissue thickness gain was significant only in CM-treated groups (P <0.05). The three approaches are superior to CAF alone for root coverage. EMD provides the highest levels of CRC; however, the addition of CM increases gingival thickness. The combination approach does not seem justified.

  9. 3D polarisation speckle as a demonstration of tensor version of the van Cittert-Zernike theorem for stochastic electromagnetic beams

    NASA Astrophysics Data System (ADS)

    Ma, Ning; Zhao, Juan; Hanson, Steen G.; Takeda, Mitsuo; Wang, Wei

    2016-10-01

    Laser speckle has been the subject of extensive studies of its basic properties and associated applications. In the majority of research on speckle phenomena, the random optical field has been treated as a scalar optical field, and the main interest has been concentrated on the statistical properties and applications of its intensity distribution. Recently, the statistical properties of random electric vector fields, referred to as Polarization Speckle, have come to attract new interest because of their importance in a variety of areas with practical applications, such as biomedical optics and optical metrology. Statistical phenomena of random electric vector fields have close relevance to the theories of speckle, polarization and coherence. In this paper, we investigate the correlation tensor for stochastic electromagnetic fields modulated by a depolarizer consisting of a rough-surfaced retardation plate. Under the assumption that the microstructure of the scattering surface on the depolarizer is so fine as to be unresolvable in our observation region, we have derived a relationship between the polarization matrix/coherency matrix for the modulated electric fields behind the rough-surfaced retardation plate and the coherence matrix under the free-space geometry. This relation is regarded as entirely analogous to the van Cittert-Zernike theorem of classical coherence theory. Within the paraxial approximation as represented by the ABCD-matrix formalism, the three-dimensional structure of the generated polarization speckle is investigated based on the correlation tensor, indicating a typical carrot structure with a much longer axial dimension than its transverse extent.

  10. Volume of the steady-state space of financial flows in a monetary stock-flow-consistent model

    NASA Astrophysics Data System (ADS)

    Hazan, Aurélien

    2017-05-01

    We show that a steady-state stock-flow consistent macro-economic model can be represented as a Constraint Satisfaction Problem (CSP). The set of solutions is a polytope, whose volume depends on the constraints applied and reveals the potential fragility of the economic circuit, with no need to study the dynamics. Several methods to compute the volume, both exact and approximate, are compared, inspired by operations research methods and the analysis of metabolic networks. We also introduce a random transaction matrix, and study the particular case of linear flows with respect to money stocks.

  11. The Masked Sample Covariance Estimator: An Analysis via the Matrix Laplace Transform

    DTIC Science & Technology

    2012-02-01

    Variables: Suppose that we divide the stock market into disjoint sectors, and we would like to study the interactions among the monthly returns for... vector to conform with the market sectors, and we estimate only the entries in the diagonal blocks. Spatial or Temporal Localization: A simple random model...

  12. A universal denoising and peak picking algorithm for LC-MS based on matched filtration in the chromatographic time domain.

    PubMed

    Andreev, Victor P; Rejtar, Tomas; Chen, Hsuan-Shen; Moskovets, Eugene V; Ivanov, Alexander R; Karger, Barry L

    2003-11-15

    A new denoising and peak picking algorithm (MEND, matched filtration with experimental noise determination) for analysis of LC-MS data is described. The algorithm minimizes both random and chemical noise in order to determine MS peaks corresponding to sample components. Noise characteristics in the data set are experimentally determined and used for efficient denoising. MEND is shown to enable low-intensity peaks to be detected, thus providing additional useful information for sample analysis. The process of denoising, performed in the chromatographic time domain, does not distort peak shapes in the m/z domain, allowing accurate determination of MS peak centroids, including low-intensity peaks. MEND has been applied to denoising of LC-MALDI-TOF-MS and LC-ESI-TOF-MS data for tryptic digests of protein mixtures. MEND is shown to suppress chemical and random noise and baseline fluctuations, as well as filter out false peaks originating from the matrix (MALDI) or mobile phase (ESI). In addition, MEND is shown to be effective for protein expression analysis by allowing selection of a large number of differentially expressed ICAT pairs, due to increased signal-to-noise ratio and mass accuracy.
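
    The core filtering step can be sketched as follows in Python, with illustrative parameters: correlate a single extracted-ion chromatogram with a Gaussian template whose width matches the expected chromatographic peak, estimate the noise level from the filtered trace, and flag points above a noise-derived threshold. This is a generic matched-filtration sketch under assumed peak widths and thresholds, not the MEND implementation itself.

      import numpy as np
      from scipy.signal import fftconvolve

      def matched_filter_denoise(chromatogram, peak_sigma_pts=5.0, k_sigma=3.0):
          """Correlate a single extracted-ion chromatogram with a Gaussian template
          (matched filtration in the time domain) and flag points exceeding a
          noise-derived threshold. Width and threshold are illustrative choices."""
          t = np.arange(-4 * peak_sigma_pts, 4 * peak_sigma_pts + 1)
          template = np.exp(-0.5 * (t / peak_sigma_pts) ** 2)
          template /= template.sum()
          filtered = fftconvolve(chromatogram, template, mode="same")
          # crude experimental noise estimate: robust spread of the filtered trace
          noise = 1.4826 * np.median(np.abs(filtered - np.median(filtered)))
          peaks = filtered > k_sigma * noise
          return filtered, peaks

      # Synthetic chromatogram: two Gaussian peaks on a noisy baseline.
      rng = np.random.default_rng(1)
      x = np.arange(1000, dtype=float)
      signal = 3 * np.exp(-0.5 * ((x - 300) / 5) ** 2) + 1.2 * np.exp(-0.5 * ((x - 700) / 5) ** 2)
      noisy = signal + rng.normal(scale=0.5, size=x.size)
      filtered, peaks = matched_filter_denoise(noisy)
      print("detected peak region indices:", np.where(peaks)[0][[0, -1]] if peaks.any() else "none")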

  13. Covariance Matrix Estimation for Massive MIMO

    NASA Astrophysics Data System (ADS)

    Upadhya, Karthik; Vorobyov, Sergiy A.

    2018-04-01

    We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
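
    The estimator can be illustrated with a single-user toy simulation in Python (illustrative parameters, no pilot contamination): the channel is estimated twice per coherence block from two pilots with independent noise, the known random phase on the second pilot is assumed removed, and averaging the outer product of the two estimates cancels the noise covariance, leaving an estimate of the channel covariance R. This is a sketch of the cross-correlation idea, not the full staggered-pilot scheme analyzed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      M = 64            # base-station antennas
      K = 2000          # coherence blocks averaged over
      snr = 10.0        # pilot SNR (linear), illustrative

      # "True" spatial covariance: exponential correlation model (assumed, for the toy example).
      r = 0.7
      R = np.fromfunction(lambda i, j: r ** np.abs(i - j), (M, M))
      L = np.linalg.cholesky(R)

      def cn(scale=1.0):
          """Circularly symmetric complex Gaussian vector with per-entry variance `scale`."""
          return np.sqrt(scale / 2) * (rng.normal(size=M) + 1j * rng.normal(size=M))

      R_hat = np.zeros((M, M), dtype=complex)
      for _ in range(K):
          h = L @ cn()                      # channel realization h ~ CN(0, R)
          h1 = h + cn(1.0 / snr)            # estimate from the first pilot
          h2 = h + cn(1.0 / snr)            # estimate from the phase-rotated pilot (phase removed)
          R_hat += np.outer(h1, h2.conj())  # sample cross-correlation; noise terms average out
      R_hat /= K

      print("relative estimation error:", np.linalg.norm(R_hat - R) / np.linalg.norm(R))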

  14. Strategies for vectorizing the sparse matrix vector product on the CRAY XMP, CRAY 2, and CYBER 205

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Partridge, Harry

    1987-01-01

    Large, randomly sparse matrix vector products are important in a number of applications in computational chemistry, such as matrix diagonalization and the solution of simultaneous equations. Vectorization of this process is considered for the CRAY XMP, CRAY 2, and CYBER 205, using a matrix of dimension of 20,000 with from 1 percent to 6 percent nonzeros. Efficient scatter/gather capabilities add coding flexibility and yield significant improvements in performance. For the CYBER 205, it is shown that minor changes in the IO can reduce the CPU time by a factor of 50. Similar changes in the CRAY codes make a far smaller improvement.
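
    For reference, a compressed sparse row (CSR) matrix-vector product makes the gather explicit: for each row, the entries of x at that row's column indices are gathered and combined with the stored values. The Python sketch below is a plain reference formulation checked against SciPy, not the CRAY/CYBER kernels discussed above; the matrix size and density are arbitrary test values.

      import numpy as np
      from scipy.sparse import random as sprandom

      def csr_matvec(indptr, indices, data, x):
          """y = A @ x for A stored in CSR form. The inner step gathers x[indices]
          for each row, which is exactly the gather that vector hardware accelerates."""
          n_rows = len(indptr) - 1
          y = np.zeros(n_rows)
          for i in range(n_rows):
              lo, hi = indptr[i], indptr[i + 1]
              y[i] = np.dot(data[lo:hi], x[indices[lo:hi]])   # gather + dot
          return y

      # Small random sparse example (about 2% nonzeros) checked against scipy.
      A = sprandom(500, 500, density=0.02, format="csr", random_state=0)
      x = np.random.default_rng(0).normal(size=500)
      y = csr_matvec(A.indptr, A.indices, A.data, x)
      assert np.allclose(y, A @ x)
      print("CSR matvec matches scipy")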

  15. Robust portfolio selection based on asymmetric measures of variability of stock returns

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Tan, Shaohua

    2009-10-01

    This paper addresses a new uncertainty set--interval random uncertainty set for robust optimization. The form of interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.

  16. Reverse remodeling is associated with changes in extracellular matrix proteases and tissue inhibitors after mesenchymal stem cell (MSC) treatment of pressure overload hypertrophy.

    PubMed

    Molina, Ezequiel J; Palma, Jon; Gupta, Dipin; Torres, Denise; Gaughan, John P; Houser, Steven; Macha, Mahender

    2009-02-01

    Changes in ventricular extracellular matrix (ECM) composition of pressure overload hypertrophy determine clinical outcomes. The effects of mesenchymal stem cell (MSC) transplantation upon determinants of ECM composition in pressure overload hypertrophy have not been studied. Sprague-Dawley rats underwent aortic banding and were followed by echocardiography. After an absolute decrease in fractional shortening of 25% from baseline, 1 x 10(6) MSC (n = 28) or PBS (n = 20) was randomly injected intracoronarily. LV protein analysis, including matrix metalloproteinases (MMP-2, MMP-3, MMP-6, MMP-9) and tissue inhibitors of metalloproteinases (TIMP-1, TIMP-2, TIMP-3), was performed after sacrifice on postoperative day 7, 14, 21 or 28. Left ventricular levels of MMP-3, MMP-6, MMP-9, TIMP-1 and TIMP-3 were demonstrated to be decreased in the MSC group compared with controls after 28 days. Expression of MMP-2 and TIMP-2 remained relatively stable in both groups. Successful MSCs delivery was confirmed by histological analysis and visualization of labelled MSCs. In this model of pressure overload hypertrophy, intracoronary delivery of MSCs during heart failure was associated with specific changes in determinants of ECM composition. LV reverse remodeling was associated with decreased ventricular levels of MMP-3, MMP-6, MMP-9, TIMP-1 and TIMP-3, which were upregulated in the control group as heart failure progressed. These effects were most significant at 28 days following injection. (c) 2008 John Wiley & Sons, Ltd.

  17. Learning Circulant Sensing Kernels

    DTIC Science & Technology

    2014-03-01

    Furthermore, we test learning the circulant sensing matrix/operator and the nonparametric dictionary altogether and obtain even better performance. ...scale. ...matrices, Tropp et al. [28] describe a random filter for acquiring a signal x̄; Haupt et al. [12] describe a channel estimation problem to identify a

  18. Improving performances of suboptimal greedy iterative biclustering heuristics via localization.

    PubMed

    Erten, Cesim; Sözdinler, Melih

    2010-10-15

    Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how the localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization. We show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests. We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations in the form of functional enrichment and PPI verification constitute the main performance criteria. The fact that REAL, the random extraction method based on localization, performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and the results, is available at http://code.google.com/p/biclustering/. Contact: cesim@khas.edu.tr; melihsozdinler@boun.edu.tr. Supplementary data are available at Bioinformatics online.

  19. Random Walk Quantum Clustering Algorithm Based on Space

    NASA Astrophysics Data System (ADS)

    Xiao, Shufen; Dong, Yumin; Ma, Hongyang

    2018-01-01

    In the random quantum walk, which is a quantum simulation of the classical walk, data points interact when selecting the appropriate walk strategy by taking advantage of quantum-entanglement features; thus, the results obtained with the quantum walk differ from those obtained with the classical walk. A new quantum walk clustering algorithm based on space is proposed by applying the quantum walk to clustering analysis. In this algorithm, data points are viewed as walking participants, and similar data points are clustered using the walk function in the pay-off matrix according to a certain rule. The walk process is simplified by implementing a space-combining rule. The proposed algorithm is validated by a simulation test and shown to be superior to existing clustering algorithms, namely, Kmeans, PCA + Kmeans, and LDA-Km. The effects of some of the parameters in the proposed algorithm on its performance are also analyzed and discussed. Specific suggestions are provided.

  20. Analysis on Vertical Scattering Signatures in Forestry with PolInSAR

    NASA Astrophysics Data System (ADS)

    Guo, Shenglong; Li, Yang; Zhang, Jingjing; Hong, Wen

    2014-11-01

    We apply an accurate topographic phase to the Freeman-Durden decomposition for polarimetric SAR interferometry (PolInSAR) data. The cross correlation matrix obtained from PolInSAR observations can be decomposed into three scattering mechanism matrices accounting for the odd-bounce, double-bounce and volume scattering. We estimate the phase based on the Random Volume over Ground (RVoG) model and use it as the initial input parameter of the numerical method that solves for the decomposition parameters. In addition, the modified volume scattering model introduced by Y. Yamaguchi is applied to the PolInSAR target decomposition in forest areas, rather than the pure random volume scattering proposed by Freeman-Durden, to better fit the actual measured data. This method can accurately retrieve the magnitude associated with each mechanism and its location along the vertical dimension. We test the algorithms with L- and P-band simulated data.

  1. Effectiveness of enamel matrix derivative on the clinical and microbiological outcomes following surgical regenerative treatment of peri-implantitis. A randomized controlled trial.

    PubMed

    Isehed, Catrine; Holmlund, Anders; Renvert, Stefan; Svenson, Björn; Johansson, Ingegerd; Lundberg, Pernilla

    2016-10-01

    This randomized clinical trial aimed at comparing radiological, clinical and microbial effects of surgical treatment of peri-implantitis alone or in combination with enamel matrix derivative (EMD). Twenty-six subjects were treated with open flap debridement and decontamination of the implant surfaces with gauze and saline preceding adjunctive EMD or no EMD. Bone level (BL) change was primary outcome and secondary outcomes were changes in pocket depth (PD), plaque, pus, bleeding and the microbiota of the peri-implant biofilm analyzed by the Human Oral Microbe Identification Microarray over a time period of 12 months. In multivariate modelling, increased marginal BL at implant site was significantly associated with EMD, the number of osseous walls in the peri-implant bone defect and a Gram+/aerobic microbial flora, whereas reduced BL was associated with a Gram-/anaerobic microbial flora and presence of bleeding and pus, with a cross-validated predictive capacity (Q²) of 36.4%. Similar, but statistically non-significant, trends were seen for BL, PD, plaque, pus and bleeding in univariate analysis. Adjunctive EMD to surgical treatment of peri-implantitis was associated with prevalence of Gram+/aerobic bacteria during the follow-up period and increased marginal BL 12 months after treatment. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Multi-disease analysis of maternal antibody decay using non-linear mixed models accounting for censoring.

    PubMed

    Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel

    2015-09-10

    Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Molecular analysis of genetic diversity among vine accessions using DNA markers.

    PubMed

    da Costa, A F; Teodoro, P E; Bhering, L L; Tardin, F D; Daher, R F; Campos, W F; Viana, A P; Pereira, M G

    2017-04-13

    Viticulture presents a number of economic and social advantages, such as increasing employment levels and fixing the labor force in rural areas. With the aim of initiating a program of genetic improvement in grapevine at the Darcy Ribeiro State University of Northern Rio de Janeiro, genetic diversity among 40 genotypes (varieties, rootstocks, and species of different subgenera) was evaluated using random amplified polymorphic DNA (RAPD) molecular markers. We built a matrix of binary data, whereby the presence of a band was assigned "1" and the absence of a band was assigned "0." The genetic distance between pairs of genotypes was calculated as the arithmetic complement of the Jaccard index. The results revealed the presence of considerable variability in the collection. Analysis of the genetic dissimilarity matrix revealed that the most dissimilar genotypes were Rupestris du Lot and Vitis rotundifolia, which were the most genetically distant (0.5972). The most similar were genotype 31 (unidentified) and Rupestris du Lot, which showed zero distance, confirming the results of field observations. A duplicate was confirmed, consistent with field observations, and a short distance was found between the variety 'Italy' and its mutation, 'Ruby'. The grouping methods used were somewhat concordant.
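
    A small illustration of the distance computation described above: the arithmetic complement of the Jaccard index on a 0/1 band matrix. The band data below are made up for demonstration.

```python
import numpy as np

def jaccard_distance_matrix(bands):
    """bands: (n_genotypes, n_markers) 0/1 matrix; returns 1 - Jaccard similarity for every pair."""
    n = bands.shape[0]
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            shared = np.sum((bands[i] == 1) & (bands[j] == 1))
            only_i = np.sum((bands[i] == 1) & (bands[j] == 0))
            only_j = np.sum((bands[i] == 0) & (bands[j] == 1))
            total = shared + only_i + only_j
            d = 1.0 - shared / total if total > 0 else 0.0
            dist[i, j] = dist[j, i] = d
    return dist

# toy example: 4 genotypes scored for 6 RAPD bands
bands = np.array([[1, 0, 1, 1, 0, 1],
                  [1, 1, 1, 0, 0, 1],
                  [0, 1, 0, 1, 1, 0],
                  [1, 0, 1, 1, 0, 1]])
print(jaccard_distance_matrix(bands).round(3))
```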

  4. Coupling detrended fluctuation analysis for analyzing coupled nonstationary signals.

    PubMed

    Hedayatifar, L; Vahabi, M; Jafari, G R

    2011-08-01

    When many variables are coupled to each other, a single case study cannot give us thorough and precise information. When these time series are stationary, different methods of random matrix analysis and complex networks can be used; in nonstationary cases, however, the multifractal detrended cross-correlation analysis (MF-DXA) method was introduced for just two coupled time series. In this article, we extend MF-DXA to the method of coupling detrended fluctuation analysis (CDFA) for the case in which more than two series are correlated with each other. Here, we calculate the multifractal properties of the coupled time series, and by comparing the CDFA results of the original series with those of the shuffled and surrogate series, we can estimate the source of multifractality and the extent to which our series are coupled to each other. We illustrate the method with selected examples from air pollution and foreign exchange rates.

  5. Coupling detrended fluctuation analysis for analyzing coupled nonstationary signals

    NASA Astrophysics Data System (ADS)

    Hedayatifar, L.; Vahabi, M.; Jafari, G. R.

    2011-08-01

    When many variables are coupled to each other, a single case study cannot give us thorough and precise information. When these time series are stationary, different methods of random matrix analysis and complex networks can be used; in nonstationary cases, however, the multifractal detrended cross-correlation analysis (MF-DXA) method was introduced for just two coupled time series. In this article, we extend MF-DXA to the method of coupling detrended fluctuation analysis (CDFA) for the case in which more than two series are correlated with each other. Here, we calculate the multifractal properties of the coupled time series, and by comparing the CDFA results of the original series with those of the shuffled and surrogate series, we can estimate the source of multifractality and the extent to which our series are coupled to each other. We illustrate the method with selected examples from air pollution and foreign exchange rates.

  6. Singular Behavior of the Leading Lyapunov Exponent of a Product of Random 2 × 2 Matrices

    NASA Astrophysics Data System (ADS)

    Genovese, Giuseppe; Giacomin, Giambattista; Greenblatt, Rafael Leon

    2017-05-01

    We consider a certain infinite product of random 2 × 2 matrices appearing in the solution of some 1 and 1 + 1 dimensional disordered models in statistical mechanics, which depends on a parameter ε > 0 and on a real random variable with distribution μ. For a large class of μ, we prove the prediction by Derrida and Hilhorst (J Phys A 16:2641, 1983) that the Lyapunov exponent behaves like C ε^{2α} in the limit ε ↘ 0, where α ∈ (0,1) and C > 0 are determined by μ. Derrida and Hilhorst performed a two-scale analysis of the integral equation for the invariant distribution of the Markov chain associated to the matrix product and obtained a probability measure that is expected to be close to the invariant one for small ε. We introduce suitable norms and exploit contractivity properties to show that such a probability measure is indeed close to the invariant one in a sense that implies a suitable control of the Lyapunov exponent.

  7. User-Friendly Tools for Random Matrices: An Introduction

    DTIC Science & Technology

    2012-12-03

    Excerpts from the tutorial slides (Joel A. Tropp, "User-Friendly Tools for Random Matrices," NIPS, 3 December 2012): references to Tropp 2011, Oliveira 2010, and Mackey et al. 2012; the randomized range finder of Halko, Martinsson and Tropp (SIAM Rev. 2011), which forms the matrix product Y = AΩ and then constructs an orthonormal basis Q for the range of Y; and the companion works "Matrix concentration inequalities..." with L. Mackey et al. (submitted 2012) and "User-Friendly Tools for Random Matrices: An Introduction" (2012).
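
    The range-finder steps quoted above can be sketched in a few lines; this is a generic illustration of the Halko-Martinsson-Tropp recipe, with the oversampling amount and the toy matrix chosen arbitrarily.

```python
import numpy as np

def randomized_range_finder(A, k, oversample=10, seed=0):
    """Draw a Gaussian test matrix Omega, form Y = A @ Omega, and orthonormalize Y
    to obtain a basis Q that approximately captures the range of A."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((A.shape[1], k + oversample))
    Y = A @ omega
    Q, _ = np.linalg.qr(Y)
    return Q

# usage: rank-k approximation A ~ Q (Q^T A)
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 300))  # rank-20 matrix
Q = randomized_range_finder(A, k=20)
rel_err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
print(rel_err)  # close to zero for this low-rank example
```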

  8. Molecular selection in a unified evolutionary sequence

    NASA Technical Reports Server (NTRS)

    Fox, S. W.

    1986-01-01

    With guidance from experiments and observations that indicate internally limited phenomena, an outline of a unified evolutionary sequence is inferred. Such unification is not visible in a context of random matrix and random mutation. The sequence proceeds from the Big Bang through prebiotic matter and protocells, through the evolving cell via molecular and natural selection, to mind, behavior, and society.

  9. Horizon in random matrix theory, the Hawking radiation, and flow of cold atoms.

    PubMed

    Franchini, Fabio; Kravtsov, Vladimir E

    2009-10-16

    We propose a Gaussian scalar field theory in a curved 2D metric with an event horizon as the low-energy effective theory for a weakly confined, invariant random matrix ensemble (RME). The presence of an event horizon naturally generates a bath of Hawking radiation, which introduces a finite temperature in the model in a nontrivial way. A similar mapping with a gravitational analogue model has been constructed for a Bose-Einstein condensate (BEC) pushed to flow at a velocity higher than its speed of sound, with Hawking radiation as sound waves propagating over the cold atoms. Our work suggests a threefold connection between a moving BEC system, black-hole physics and unconventional RMEs with possible experimental applications.

  10. Vertices cannot be hidden from quantum spatial search for almost all random graphs

    NASA Astrophysics Data System (ADS)

    Glos, Adam; Krawiec, Aleksandra; Kukulski, Ryszard; Puchała, Zbigniew

    2018-04-01

    In this paper, we show that all nodes can be found optimally for almost all random Erdős-Rényi G(n,p) graphs using the continuous-time quantum spatial search procedure. This works for both adjacency and Laplacian matrices, though under different conditions. The first requires p = ω(log^8(n)/n), while the second requires p ≥ (1+ε)log(n)/n, where ε > 0. The proof is based on analyzing the convergence of eigenvectors corresponding to outlying eigenvalues in the ‖·‖_∞ norm. At the same time, for p < (1−ε)log(n)/n, the property does not hold for any matrix, due to connectivity issues. Hence, our derivation concerning the Laplacian matrix is tight.

  11. Convergence of moment expansions for expectation values with embedded random matrix ensembles and quantum chaos

    NASA Astrophysics Data System (ADS)

    Kota, V. K. B.

    2003-07-01

    Smoothed forms for expectation values ⟨K⟩_E of positive definite operators K follow from the K-density moments either directly or in many other ways, each giving a series expansion (involving polynomials in E). In large spectroscopic spaces one has to partition the many-particle spaces into subspaces. Partitioning leads to new expansions for expectation values. It is shown that all the expansions converge to compact forms depending on the nature of the operator K and the operation of embedded random matrix ensembles and quantum chaos in many-particle spaces. Explicit results are given for occupancies ⟨n_i⟩_E, spin-cutoff factors ⟨J_z²⟩_E and strength sums ⟨O†O⟩_E, where O is a one-body transition operator.

  12. Global sensitivity analysis of multiscale properties of porous materials

    NASA Astrophysics Data System (ADS)

    Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.

    2018-02-01

    Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of a solute diffusing through a fluid-filled pore space while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.

  13. Cluster structure in the correlation coefficient matrix can be characterized by abnormal eigenvalues

    NASA Astrophysics Data System (ADS)

    Nie, Chun-Xiao

    2018-02-01

    In a large number of previous studies, researchers found that some of the eigenvalues of the financial correlation matrix were greater than the values predicted by random matrix theory (RMT). Here, we call these eigenvalues abnormal eigenvalues. In order to reveal the hidden meaning of these abnormal eigenvalues, we study a toy model with cluster structure and find that these eigenvalues are related to the cluster structure of the correlation coefficient matrix. In this paper, model-based experiments show that in most cases the number of abnormal eigenvalues of the correlation matrix is equal to the number of clusters. In addition, empirical studies show that the sum of the abnormal eigenvalues is related to the clarity of the cluster structure and is negatively correlated with the correlation dimension.
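
    A hedged sketch of the comparison against the RMT prediction: for T observations of N standardized series, eigenvalues of the correlation matrix above the Marchenko-Pastur edge (1 + sqrt(N/T))² are flagged as abnormal. The clustered toy data below are illustrative only and do not reproduce the paper's model.

```python
import numpy as np

def abnormal_eigenvalues(data):
    """Eigenvalues of the correlation matrix that exceed the Marchenko-Pastur
    upper edge expected for purely random, uncorrelated series."""
    T, N = data.shape                    # T observations, N series
    eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
    lambda_plus = (1.0 + np.sqrt(N / T)) ** 2
    return eigvals[eigvals > lambda_plus], lambda_plus

# toy data: three clusters of correlated series -> roughly three abnormal eigenvalues
rng = np.random.default_rng(0)
common = rng.standard_normal((2000, 3))
data = np.hstack([common[:, [k]] + 0.7 * rng.standard_normal((2000, 10)) for k in range(3)])
outliers, edge = abnormal_eigenvalues(data)
print(len(outliers), edge)
```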

  14. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    PubMed

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is probably due to the fact that RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method such that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficients entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. In this regard, peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms the RNC which uses the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.

  15. Integrated Structural Analysis and Test Program

    NASA Technical Reports Server (NTRS)

    Kaufman, Daniel

    2005-01-01

    An integrated structural-analysis and structure-testing computer program is being developed in order to: Automate repetitive processes in testing and analysis; Accelerate pre-test analysis; Accelerate reporting of tests; Facilitate planning of tests; Improve execution of tests; Create a vibration, acoustics, and shock test database; and Integrate analysis and test data. The software package includes modules pertaining to sinusoidal and random vibration, shock and time replication, acoustics, base-driven modal survey, and mass properties and static/dynamic balance. The program is commanded by use of ActiveX controls. There is minimal need to generate command lines. Analysis or test files are selected by opening a Windows Explorer display. After selecting the desired input file, the program goes to a so-called analysis data process or test data process, depending on the type of input data. The status of the process is given by a Windows status bar, and when processing is complete, the data are reported in graphical, tabular, and matrix form.

  16. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.

  17. Enhanced decision making through neuroscience

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Jung, TP; Makeig, Scott

    2012-06-01

    We propose to enhance the decision making of pilot, co-pilot teams, over a range of vehicle platforms, with the aid of neuroscience. The goal is to optimize this collaborative decision making interplay in time-critical, stressful situations. We will research and measure human facial expressions, personality typing, and brainwave measurements to help answer questions related to optimum decision-making in group situations. Further, we propose to examine the nature of intuition in this decision making process. The brainwave measurements will be facilitated by a University of California, San Diego (UCSD) developed wireless Electroencephalography (EEG) sensing cap. We propose to measure brainwaves covering the whole head area with an electrode density of N=256, and yet keep within the limiting wireless bandwidth capability of m=32 readouts. This is possible because solving Independent Component Analysis (ICA) and finding the hidden brainwave sources allow us to concentrate selective measurements with an organized sparse source sensing matrix [Φ_s], rather than the traditional purely random compressive sensing (CS) matrix [Φ].
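
    A minimal, purely illustrative sketch of the contrast drawn above: compressing N = 256 channels into m = 32 readouts with a dense random Gaussian matrix Φ versus a sparse, source-organized matrix Φ_s whose nonzero pattern would in practice come from ICA-identified sources (here it is chosen at random). The construction is an assumption for demonstration, not the system described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_readouts = 256, 32            # N = 256 electrodes, m = 32 wireless readouts

# traditional compressive sensing: dense random Gaussian sensing matrix Phi
phi_dense = rng.standard_normal((n_readouts, n_channels)) / np.sqrt(n_readouts)

# organized sparse sensing matrix Phi_s: each readout mixes only a few channels
# (the support would be chosen from ICA source locations; random here for illustration)
phi_sparse = np.zeros((n_readouts, n_channels))
for row in range(n_readouts):
    support = rng.choice(n_channels, size=8, replace=False)
    phi_sparse[row, support] = rng.standard_normal(8)

eeg_frame = rng.standard_normal(n_channels)  # one time sample across all electrodes
y_dense = phi_dense @ eeg_frame              # m compressed measurements
y_sparse = phi_sparse @ eeg_frame
```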

  18. Comparative analysis of genetic diversity among Indian populations of Scirpophaga incertulas by ISSR-PCR and RAPD-PCR.

    PubMed

    Kumar, L S; Sawant, A S; Gupta, V S; Ranjekar, P K

    2001-10-01

    Genetic variation among 28 Indian populations of the rice pest Scirpophaga incertulas was evaluated using an inter-simple sequence repeat (ISSR)-PCR assay. Nine SSR primers gave rise to 79 amplification products of which 67 were polymorphic. A dendrogram constructed from this data indicates that there is no geographical bias to the clustering and that gene flow between populations appears to be relatively unrestricted, substantiating our earlier conclusion based on the RAPD (random amplified polymorphic DNA) data. The dendrograms obtained using each of these marker systems were poorly correlated with each other as determined by Mantel's test for matrix correlation. Estimates of expected heterozygosity and marker index for each of these marker systems suggest that both marker systems are equally efficient in determining polymorphisms. Matrix correlation analyses suggest that reliable estimates of genetic variation among the S. incertulas pest populations can be obtained by using RAPDs alone or in combination with ISSRs, but ISSRs alone cannot be used for this purpose.

  19. Ultrasensitive Detection of Shigella Species in Blood and Stool.

    PubMed

    Luo, Jieling; Wang, Jiapeng; Mathew, Anup S; Yau, Siu-Tung

    2016-02-16

    A modified immunosensing system with voltage-controlled signal amplification was used to detect Shigella in stool and blood matrixes at the single-digit CFU level. Inactivated Shigella was spiked into these matrixes and detected directly. The detection was completed in 78 min. Detection limits of 21 CFU/mL and 18 CFU/mL were achieved in stool and blood, respectively, corresponding to 2-7 CFUs immobilized on the detecting electrode. The outcome of detecting extremely low bacterium concentrations, i.e., below 100 CFU/mL, in blood samples shows a random nature. An analysis of the detection probabilities indicates a correlation between the sample volume and the success of detection and suggests that sample volume is critical for ultrasensitive detection of bacteria. The calculated detection limit is qualitatively in agreement with the empirically determined detection limit. The demonstrated ultrasensitive detection of Shigella at the single-digit CFU level suggests the feasibility of direct detection of the bacterium in samples without performing a culture.

  20. Analysis of protein circular dichroism spectra for secondary structure using a simple matrix multiplication.

    PubMed

    Compton, L A; Johnson, W C

    1986-05-15

    Inverse circular dichroism (CD) spectra are presented for each of the five major secondary structures of proteins: alpha-helix, antiparallel and parallel beta-sheet, beta-turn, and other (random) structures. The fraction of each secondary structure in a protein is predicted by forming the dot product of the corresponding inverse CD spectrum, expressed as a vector, with the CD spectrum of the protein digitized in the same way. We show how this method is based on the construction of the generalized inverse from the singular value decomposition of a set of CD spectra corresponding to proteins whose secondary structures are known from X-ray crystallography. These inverse spectra compute secondary structure directly from protein CD spectra without resorting to least-squares fitting and standard matrix inversion techniques. In addition, spectra corresponding to the individual secondary structures, analogous to the CD spectra of synthetic polypeptides, are generated from the five most significant CD eigenvectors.
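
    The dot-product prediction can be written compactly; the sketch below builds the generalized inverse from the SVD-based pseudoinverse of a reference set, with matrix names and shapes assumed for illustration rather than taken from the paper.

```python
import numpy as np

def inverse_cd_spectra(C, F):
    """C: (n_wavelengths, n_proteins) reference CD spectra; F: (5, n_proteins) known
    secondary-structure fractions. Returns the 'inverse CD spectra' X such that
    structure fractions are recovered as X @ spectrum."""
    return F @ np.linalg.pinv(C)   # pinv is built from the singular value decomposition

def predict_fractions(X, spectrum):
    """Fraction of each secondary structure: dot product of each inverse spectrum
    with the digitized CD spectrum of the query protein."""
    return X @ spectrum
```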

  1. Power law tails in phylogenetic systems.

    PubMed

    Qin, Chongli; Colwell, Lucy J

    2018-01-23

    Covariance analysis of protein sequence alignments uses coevolving pairs of sequence positions to predict features of protein structure and function. However, current methods ignore the phylogenetic relationships between sequences, potentially corrupting the identification of covarying positions. Here, we use random matrix theory to demonstrate the existence of a power law tail that distinguishes the spectrum of covariance caused by phylogeny from that caused by structural interactions. The power law is essentially independent of the phylogenetic tree topology, depending on just two parameters: the sequence length and the average branch length. We demonstrate that these power law tails are ubiquitous in the large protein sequence alignments used to predict contacts in 3D structure, as predicted by our theory. This suggests that to decouple phylogenetic effects from the interactions between sequence distal sites that control biological function, it is necessary to remove or down-weight the eigenvectors of the covariance matrix with largest eigenvalues. We confirm that truncating these eigenvectors improves contact prediction.
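
    A short sketch of the suggested remedy, removing the leading eigenvectors of the covariance matrix before further analysis; the number of modes k to remove is an assumption left to the user and is not prescribed by the abstract.

```python
import numpy as np

def remove_leading_modes(cov, k):
    """Project out the k largest-eigenvalue modes of a covariance matrix,
    e.g. to suppress phylogeny-dominated directions before contact prediction."""
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    top = eigvecs[:, -k:]                       # k leading eigenvectors
    return cov - top @ np.diag(eigvals[-k:]) @ top.T
```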

  2. Atom Probe Tomography Analysis of the Distribution of Rhenium in Nickel Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mottura, A.; Warnken, N; Miller, Michael K

    2010-01-01

    Atom probe tomography (APT) is used to characterise the distributions of rhenium in a binary Ni-Re alloy and the nickel-based single-crystal CMSX-4 superalloy. A purpose-built algorithm is developed to quantify the size distribution of solute clusters, and applied to the APT datasets to critique the hypothesis that rhenium is prone to the formation of clusters in these systems. No evidence is found to indicate that rhenium forms solute clusters above the level expected from random fluctuations. In CMSX-4, enrichment of Re is detected in the matrix phase close to the matrix/precipitate (γ/γ′) phase boundaries. Phase field modelling indicates that this is due to the migration of the γ/γ′ interface during cooling from the temperature of operation. Thus, neither clustering of rhenium nor interface enrichments can be the cause of the enhancement in high temperature mechanical properties conferred by rhenium alloying.

  3. Comprehensive Thematic T-Matrix Reference Database: A 2014-2015 Update

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Zakharova, Nadezhda; Khlebtsov, Nikolai G.; Videen, Gorden; Wriedt, Thomas

    2015-01-01

    The T-matrix method is one of the most versatile and efficient direct computer solvers of the macroscopic Maxwell equations and is widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper is the seventh update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2013. It also lists a number of earlier publications overlooked previously.

  4. Intelligent Decisions Need Intelligent Choice of Models and Data - a Bayesian Justifiability Analysis for Models with Vastly Different Complexity

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.

    2016-12-01

    Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.

  5. Matrix-assisted laser desorption/ionization time-of-flight vs. fast-atom bombardment and electrospray ionization mass spectrometry in the structural characterization of bacterial poly(3-hydroxyalkanoates).

    PubMed

    Impallomeni, Giuseppe; Ballistreri, Alberto; Carnemolla, Giovanni Marco; Franco, Domenico; Guglielmino, Salvatore P P

    2015-05-15

    Bacterial poly(3-hydroxyalkanoates) (PHAs) are an emergent class of plastic materials available from renewable resources. Their properties are strictly correlated with the comonomeric composition and sequence, which may be determined by various mass spectrometry approaches. In this paper we compare fast-atom bombardment (FAB) and electrospray ionization (ESI) to matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) of partially pyrolyzed samples. We determined the compositions and sequences of the medium-chain-length PHAs (mcl-PHAs) prepared by bacterial fermentation of Pseudomonas aeruginosa ATCC 27853 cultured in media containing fatty acids with 8, 12, 14, 18, and 20 carbon atoms as carbon sources by means of MALDI-TOFMS of pyrolyzates, and compared the results with those obtained by FAB- and ESI-MS in previous studies. MALDI matrices used were 9-aminoacridine (9-AA) and indoleacrylic acid (IAA). MALDI-TOFMS was carried out in negative ion mode when using 9-AA as a matrix, giving a semi-quantitative estimation of the 3-hydroxyacids constituting the PHAs, and in positive mode when using IAA, allowing us, through statistical analysis of the relative intensity of the oligomers generated by pyrolysis, to establish that the polymers obtained are true random copolyesters and not a mixture of homopolymers or copolymers. MALDI-TOFMS in 9-AA and IAA of partial pyrolyzates of mcl-PHAs represents a powerful method for the structural analysis of these materials. In comparison with FAB and ESI, MALDI provided an extended mass range with better sensitivity at higher mass and a faster method of analysis. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Matrix Analysis of Traditional Chinese Medicine Differential Diagnoses in Gulf War Illness.

    PubMed

    Taylor-Swanson, Lisa; Chang, Joe; Schnyer, Rosa; Hsu, Kai-Yin; Schmitt, Beth Ann; Conboy, Lisa A

    2018-03-08

    To qualitatively categorize Traditional Chinese Medicine (TCM) differential diagnoses in a sample of veterans with Gulf War Illness (GWI) pre- and postacupuncture treatment. The authors randomized 104 veterans diagnosed with GWI to a 6-month acupuncture intervention that consisted of either weekly or biweekly individualized acupuncture treatments. TCM differential diagnoses were recorded at baseline and at 6 months. These TCM diagnoses were evaluated using Matrix Analysis to determine co-occurring patterns of excess, deficiency, and channel imbalances. These diagnoses were examined within and between participants to determine patterns of change and to assess stability of TCM diagnoses over time. Frequencies of diagnoses of excess, deficiency, and channel patterns were tabulated. Diagnoses of excess combined with deficiency decreased from 43% at baseline to 39% of the sample at 6 months. Excess+deficiency+channel imbalances decreased from 26% to 17%, while deficiency+channel imbalances decreased from 11% to 4% over the study duration. The authors observed a trend over time of decreased numbers of individuals presenting with all three types of differential diagnosis combinations. This may suggest that fewer people were diagnosed with concurrent excess, deficiency, and channel imbalances and perhaps a lessening in the complexity of their presentation. This is the first published article that organizes and defines TCM differential diagnoses using Matrix Analysis; currently, there are no TCM frameworks for GWI. These findings are preliminary given the sample size and the amount of missing data at 6 months. Characterization of the TCM clinical presentation of veterans suffering from GWI may help us better understand the potential role that East Asian medicine may play in managing veterans with GWI and the design of effective acupuncture treatments based on TCM. The development of a TCM manual for treating GWI is merited.

  7. Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2016-01-01

    In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.

  8. Intraday seasonalities and nonstationarity of trading volume in financial markets: Collective features

    PubMed Central

    Graczyk, Michelle B.; Duarte Queirós, Sílvio M.

    2017-01-01

    Employing Random Matrix Theory and Principal Component Analysis techniques, we enlarge our work on the individual and cross-sectional intraday statistical properties of trading volume in financial markets to the study of collective intraday features of that financial observable. Our data consist of the trading volume of the Dow Jones Industrial Average Index components spanning the years between 2003 and 2014. Computing the intraday time dependent correlation matrices and their spectrum of eigenvalues, we show there is a mode ruling the collective behaviour of the trading volume of these stocks whereas the remaining eigenvalues are within the bounds established by random matrix theory, except the second largest eigenvalue which is robustly above the upper bound limit at the opening and slightly above it during the morning-afternoon transition. Taking into account that for price fluctuations it was reported the existence of at least seven significant eigenvalues—and that its autocorrelation function is close to white noise for highly liquid stocks whereas for the trading volume it lasts significantly for more than 2 hours —, our finding goes against any expectation based on those features, even when we take into account the Epps effect. In addition, the weight of the trading volume collective mode is intraday dependent; its value increases as the trading session advances with its eigenversor approaching the uniform vector as well, which corresponds to a soar in the behavioural homogeneity. With respect to the nonstationarity of the collective features of the trading volume we observe that after the financial crisis of 2008 the coherence function shows the emergence of an upset profile with large fluctuations from that year on, a property that concurs with the modification of the average trading volume profile we noted in our previous individual analysis. PMID:28753676

  9. Computational analysis of electrical conduction in hybrid nanomaterials with embedded non-penetrating conductive particles

    NASA Astrophysics Data System (ADS)

    Cai, Jizhe; Naraghi, Mohammad

    2016-08-01

    In this work, a comprehensive multi-resolution two-dimensional (2D) resistor network model is proposed to analyze the electrical conductivity of hybrid nanomaterials made of insulating matrix with conductive particles such as CNT reinforced nanocomposites and thick film resistors. Unlike existing approaches, our model takes into account the impenetrability of the particles and their random placement within the matrix. Moreover, our model presents a detailed description of intra-particle conductivity via finite element analysis, which to the authors’ best knowledge has not been addressed before. The inter-particle conductivity is assumed to be primarily due to electron tunneling. The model is then used to predict the electrical conductivity of electrospun carbon nanofibers as a function of microstructural parameters such as turbostratic domain alignment and aspect ratio. To simulate the microstructure of single CNF, randomly positioned nucleation sites were seeded and grown as turbostratic particles with anisotropic growth rates. Particle growth was in steps and growth of each particle in each direction was stopped upon contact with other particles. The study points to the significant contribution of both intra-particle and inter-particle conductivity to the overall conductivity of hybrid composites. Influence of particle alignment and anisotropic growth rate ratio on electrical conductivity is also discussed. The results show that partial alignment in contrast to complete alignment can result in maximum electrical conductivity of whole CNF. High degrees of alignment can adversely affect conductivity by lowering the probability of the formation of a conductive path. The results demonstrate approaches to enhance electrical conductivity of hybrid materials through controlling their microstructure which is applicable not only to carbon nanofibers, but also many other types of hybrid composites such as thick film resistors.

  10. Intraday seasonalities and nonstationarity of trading volume in financial markets: Collective features.

    PubMed

    Graczyk, Michelle B; Duarte Queirós, Sílvio M

    2017-01-01

    Employing Random Matrix Theory and Principal Component Analysis techniques, we enlarge our work on the individual and cross-sectional intraday statistical properties of trading volume in financial markets to the study of collective intraday features of that financial observable. Our data consist of the trading volume of the Dow Jones Industrial Average Index components spanning the years between 2003 and 2014. Computing the intraday time dependent correlation matrices and their spectrum of eigenvalues, we show there is a mode ruling the collective behaviour of the trading volume of these stocks whereas the remaining eigenvalues are within the bounds established by random matrix theory, except the second largest eigenvalue which is robustly above the upper bound limit at the opening and slightly above it during the morning-afternoon transition. Taking into account that for price fluctuations it was reported the existence of at least seven significant eigenvalues-and that its autocorrelation function is close to white noise for highly liquid stocks whereas for the trading volume it lasts significantly for more than 2 hours -, our finding goes against any expectation based on those features, even when we take into account the Epps effect. In addition, the weight of the trading volume collective mode is intraday dependent; its value increases as the trading session advances with its eigenversor approaching the uniform vector as well, which corresponds to a soar in the behavioural homogeneity. With respect to the nonstationarity of the collective features of the trading volume we observe that after the financial crisis of 2008 the coherence function shows the emergence of an upset profile with large fluctuations from that year on, a property that concurs with the modification of the average trading volume profile we noted in our previous individual analysis.

  11. Comprehensive T-Matrix Reference Database: A 2012-2013 Update

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas

    2013-01-01

    The T-matrix method is one of the most versatile, efficient, and accurate theoretical techniques widely used for numerically exact computer calculations of electromagnetic scattering by single and composite particles, discrete random media, and particles imbedded in complex environments. This paper presents the fifth update to the comprehensive database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2012. It also lists several earlier publications not incorporated in the original database, including Peter Waterman's reports from the 1960s illustrating the history of the T-matrix approach and demonstrating that John Fikioris and Peter Waterman were the true pioneers of the multi-sphere method otherwise known as the generalized Lorenz-Mie theory.

  12. How Fast Can Networks Synchronize? A Random Matrix Theory Approach

    NASA Astrophysics Data System (ADS)

    Timme, Marc; Wolf, Fred; Geisel, Theo

    2004-03-01

    Pulse-coupled oscillators constitute a paradigmatic class of dynamical systems interacting on networks because they model a variety of biological systems including flashing fireflies and chirping crickets as well as pacemaker cells of the heart and neural networks. Synchronization is one of the most simple and most prevailing kinds of collective dynamics on such networks. Here we study collective synchronization [1] of pulse-coupled oscillators interacting on asymmetric random networks. Using random matrix theory we analytically determine the speed of synchronization in such networks in dependence on the dynamical and network parameters [2]. The speed of synchronization increases with increasing coupling strengths. Surprisingly, however, it stays finite even for infinitely strong interactions. The results indicate that the speed of synchronization is limited by the connectivity of the network. We discuss the relevance of our findings to general equilibration processes on complex networks. [1] M. Timme, F. Wolf, T. Geisel, Phys. Rev. Lett. 89:258701 (2002). [2] M. Timme, F. Wolf, T. Geisel, cond-mat/0306512 (2003).

  13. Generic dynamical features of quenched interacting quantum systems: Survival probability, density imbalance, and out-of-time-ordered correlator

    NASA Astrophysics Data System (ADS)

    Torres-Herrera, E. J.; García-García, Antonio M.; Santos, Lea F.

    2018-02-01

    We study numerically and analytically the quench dynamics of isolated many-body quantum systems. Using full random matrices from the Gaussian orthogonal ensemble, we obtain analytical expressions for the evolution of the survival probability, density imbalance, and out-of-time-ordered correlator. They are compared with numerical results for a one-dimensional-disordered model with two-body interactions and shown to bound the decay rate of this realistic system. Power-law decays are seen at intermediate times, and dips below the infinite time averages (correlation holes) occur at long times for all three quantities when the system exhibits level repulsion. The fact that these features are shared by both the random matrix and the realistic disordered model indicates that they are generic to nonintegrable interacting quantum systems out of equilibrium. Assisted by the random matrix analytical results, we propose expressions that describe extremely well the dynamics of the realistic chaotic system at different time scales.

  14. Spiked Models of Large Dimensional Random Matrices Applied to Wireless Communications and Array Signal Processing

    DTIC Science & Technology

    2013-12-14

    Report topics include the population covariance matrix with application to array signal processing, and a sample covariance matrix for which a CLT is studied for linear statistics. Cited related publications include Walid Hachem, Malika Kharouf, Jamal Najim and Jack W. Silverstein, "A CLT for information-theoretic statistics ..." (2012, 1150004), and "... for Multi-source Power Estimation" (2010) by the same authors.

  15. Probabilistic homogenization of random composite with ellipsoidal particle reinforcement by the iterative stochastic finite element method

    NASA Astrophysics Data System (ADS)

    Sokołowski, Damian; Kamiński, Marcin

    2018-01-01

    This study proposes a framework for determining the basic probabilistic characteristics of the orthotropic homogenized elastic properties of a periodic composite reinforced with ellipsoidal particles and having a high stiffness contrast between the reinforcement and the matrix. The homogenization problem, solved by the Iterative Stochastic Finite Element Method (ISFEM), is implemented according to the stochastic perturbation, Monte Carlo simulation and semi-analytical techniques with the use of a cubic Representative Volume Element (RVE) of this composite containing a single particle. The given input Gaussian random variable is the Young's modulus of the matrix, while the 3D homogenization scheme is based on numerical determination of the strain energy of the RVE under uniform unit stretches carried out in the FEM system ABAQUS. An entire series of deterministic solutions with varying matrix Young's modulus serves for the Weighted Least Squares Method (WLSM) recovery of polynomial response functions, which are finally used in the stochastic Taylor expansions inherent to the ISFEM. A numerical example consists of High Density Polyurethane (HDPU) reinforced with a Carbon Black particle. It is numerically investigated (1) whether the resulting homogenized characteristics are also Gaussian and (2) how the uncertainty in the matrix Young's modulus affects the effective stiffness tensor components and their PDF (Probability Density Function).
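
    A toy version of the response-function step, assuming a stand-in for the ABAQUS RVE solve and an arbitrary Gaussian input; it fits a polynomial response by weighted least squares and then propagates the uncertainty by Monte Carlo rather than by the stochastic Taylor expansion used in the ISFEM.

```python
import numpy as np

def rve_solve(E_matrix):
    """Placeholder for the deterministic FEM homogenization of the RVE;
    returns an effective stiffness component for a given matrix Young's modulus."""
    return 12.0 + 3.5 * E_matrix - 0.15 * E_matrix ** 2

# deterministic design points and weighted least-squares polynomial response function
E_points = np.linspace(2.0, 4.0, 9)                      # assumed range of the matrix modulus
C_points = np.array([rve_solve(E) for E in E_points])
weights = np.ones_like(E_points)                         # could emphasize points near the mean
response = np.poly1d(np.polyfit(E_points, C_points, deg=3, w=weights))

# Monte Carlo propagation of the Gaussian matrix Young's modulus
rng = np.random.default_rng(0)
E_samples = rng.normal(loc=3.0, scale=0.15, size=100_000)
C_samples = response(E_samples)
print(C_samples.mean(), C_samples.std())                 # first two moments of the effective property
```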

  16. Gram-negative and -positive bacteria differentiation in blood culture samples by headspace volatile compound analysis.

    PubMed

    Dolch, Michael E; Janitza, Silke; Boulesteix, Anne-Laure; Graßmann-Lichtenauer, Carola; Praun, Siegfried; Denzer, Wolfgang; Schelling, Gustav; Schubert, Sören

    2016-12-01

    Identification of microorganisms in positive blood cultures still relies on standard techniques such as Gram staining followed by culturing with definite microorganism identification. Alternatively, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry or the analysis of headspace volatile compound (VC) composition produced by cultures can help to differentiate between microorganisms under experimental conditions. This study assessed the efficacy of volatile compound based microorganism differentiation into Gram-negatives and -positives in unselected positive blood culture samples from patients. Headspace gas samples of positive blood culture samples were transferred to sterilized, sealed, and evacuated 20 ml glass vials and stored at -30 °C until batch analysis. Headspace gas VC content analysis was carried out via an auto sampler connected to an ion-molecule reaction mass spectrometer (IMR-MS). Measurements covered a mass range from 16 to 135 u including CO2, H2, N2, and O2. Prediction rules for microorganism identification based on VC composition were derived using a training data set and evaluated using a validation data set within a random split validation procedure. One-hundred-fifty-two aerobic samples growing 27 Gram-negatives, 106 Gram-positives, and 19 fungi and 130 anaerobic samples growing 37 Gram-negatives, 91 Gram-positives, and two fungi were analysed. In anaerobic samples, ten discriminators were identified by the random forest method allowing for bacteria differentiation into Gram-negative and -positive (error rate: 16.7 % in validation data set). For aerobic samples the error rate was not better than random. In anaerobic blood culture samples of patients IMR-MS based headspace VC composition analysis facilitates bacteria differentiation into Gram-negative and -positive.

  17. LSRN: A PARALLEL ITERATIVE SOLVER FOR STRONGLY OVER- OR UNDERDETERMINED SYSTEMS*

    PubMed Central

    Meng, Xiangrui; Saunders, Michael A.; Mahoney, Michael W.

    2014-01-01

    We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x∈ℝ^n} ‖Ax − b‖_2, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster. PMID:25419094
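
    The preconditioning idea can be sketched for the overdetermined case (m ≫ n); this is a simplified, sequential illustration rather than the parallel LSRN implementation, with the oversampling factor and test problem chosen arbitrarily.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lsrn_like_solve(A, b, gamma=2.0, seed=0):
    """Sketch A with a random normal projection, build a right preconditioner from the
    SVD of the small sketch, then run LSQR on the well-conditioned system (A N) y = b."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    s = int(np.ceil(gamma * n))
    G = rng.standard_normal((s, m)) / np.sqrt(s)
    _, sigma, Vt = np.linalg.svd(G @ A, full_matrices=False)
    N = Vt.T / sigma                        # right preconditioner
    y = lsqr(A @ N, b)[0]
    return N @ y

# usage on a tall random least-squares problem
rng = np.random.default_rng(1)
A = rng.standard_normal((10_000, 50))
x_true = rng.standard_normal(50)
b = A @ x_true + 0.01 * rng.standard_normal(10_000)
x = lsrn_like_solve(A, b)
print(np.linalg.norm(x - x_true))
```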

  18. The open quantum Brownian motions

    NASA Astrophysics Data System (ADS)

    Bauer, Michel; Bernard, Denis; Tilloy, Antoine

    2014-09-01

    Using quantum parallelism on random walks as the original seed, we introduce new quantum stochastic processes, the open quantum Brownian motions. They describe the behaviors of quantum walkers, with internal degrees of freedom which serve as random gyroscopes, interacting with a series of probes which serve as quantum coins. These processes may also be viewed as the scaling limit of open quantum random walks and we develop this approach along three different lines: the quantum trajectory, the quantum dynamical map and the quantum stochastic differential equation. We also present a study of the simplest case, with a two level system as an internal gyroscope, illustrating the interplay between the ballistic and diffusive behaviors at work in these processes. Notation: H_z, the orbital (walker) Hilbert space, ℂ^ℤ in the discrete case and L²(ℝ) in the continuum; H_c, the internal spin (or gyroscope) Hilbert space; H_sys = H_z ⊗ H_c, the system Hilbert space; H_p = ℂ², the probe (or quantum coin) Hilbert space; ρ^tot_t, the density matrix of the total system (walker + internal spin + quantum coins); ρ̄_t, the reduced density matrix on H_sys; ρ̂_t, the system density matrix along a quantum trajectory (if diagonal and localized in position, ρ̂_t = ρ_t ⊗ |X_t⟩_z⟨X_t|); ρ_t, the internal density matrix in a simple quantum trajectory; X_t, the walker position in a simple quantum trajectory; B_t, a normalized Brownian motion; ξ_t, ξ_t†, the quantum noises.

  19. Synchronizability of random rectangular graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estrada, Ernesto, E-mail: ernesto.estrada@strath.ac.uk; Chen, Guanrong

    2015-08-15

    Random rectangular graphs (RRGs) represent a generalization of random geometric graphs in which the nodes are embedded into hyperrectangles instead of hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds of the eigenratio of the network Laplacian matrix are determined analytically. It is proven that as the rectangular network becomes more elongated, the network becomes harder to synchronize. The synchronization processing behavior of a RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistency with the theoretical results.
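
    A compact way to look at the Laplacian eigenratio used as the synchronizability measure; the rectangle construction below is an illustrative stand-in for the RRG model, with the radius and sizes chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

def laplacian_eigenratio(adjacency):
    """Eigenratio lambda_N / lambda_2 of the graph Laplacian; larger values
    indicate a network that is harder to synchronize."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    eigvals = np.sort(np.linalg.eigvalsh(laplacian))
    return eigvals[-1] / eigvals[1]

def rectangle_graph(n, a, radius, seed=0):
    """Nodes placed uniformly in an a x (1/a) rectangle; edges join pairs closer than radius."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(size=(n, 2)) * np.array([a, 1.0 / a])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return ((d < radius) & (d > 0)).astype(float)

# elongating the rectangle (larger a) tends to increase the eigenratio
for a in (1.0, 2.0, 3.0):
    print(a, laplacian_eigenratio(rectangle_graph(400, a, 0.3)))
```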

  20. Electroacupuncture Exerts Neuroprotection through Caveolin-1 Mediated Molecular Pathway in Intracerebral Hemorrhage of Rats.

    PubMed

    Li, Hui-Qin; Li, Yan; Chen, Zi-Xian; Zhang, Xiao-Guang; Zheng, Xia-Wei; Yang, Wen-Ting; Chen, Shuang; Zheng, Guo-Qing

    2016-01-01

    Spontaneous intracerebral hemorrhage (ICH) is one of the most devastating types of stroke. Here, we aim to demonstrate that electroacupuncture on Baihui (GV20) exerts neuroprotection in acute ICH, possibly via the caveolin-1/matrix metalloproteinase/blood-brain barrier permeability pathway. The ICH model was established using collagenase VII. Rats were randomly divided into three groups: a sham-operation group, a sham electroacupuncture group, and an electroacupuncture group. Each group was further divided into four subgroups according to the time points of 6 h, 1 d, 3 d, and 7 d after ICH. The methods used included examination of neurological deficit scores according to Longa's scale, measurement of blood-brain barrier permeability through Evans Blue content, in situ immunofluorescent detection of caveolin-1 in brains, western blot analysis of caveolin-1 in brains, and in situ zymography for measuring matrix metalloproteinase-2/9 activity in brains. Compared with the sham electroacupuncture group, the electroacupuncture group showed a significant improvement in neurological deficit scores and a reduction in Evans Blue content, expression of caveolin-1, and activity of matrix metalloproteinase-2/9 at 6 h, 1 d, 3 d, and 7 d after ICH (P < 0.05). In conclusion, the present results suggest that electroacupuncture on GV20 can improve neurological deficit scores and reduce blood-brain barrier permeability after ICH, and that the mechanism possibly targets the caveolin-1/matrix metalloproteinase/blood-brain barrier permeability pathway.

  1. A Framework for Performing Multiscale Stochastic Progressive Failure Analysis of Composite Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2006-01-01

    A framework is presented that enables coupled multiscale analysis of composite structures. The recently developed, free, Finite Element Analysis - Micromechanics Analysis Code (FEAMAC) software couples the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with ABAQUS to perform micromechanics based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. As a result, the stochastic nature of fiber breakage in composites can be simulated through incorporation of an appropriate damage and failure model that operates within MAC/GMC on the level of the fiber. Results are presented for the progressive failure analysis of a titanium matrix composite tensile specimen that illustrate the power and utility of the framework and address the techniques needed to model the statistical nature of the problem properly. In particular, it is shown that incorporating fiber strength randomness on multiple scales improves the quality of the simulation by enabling failure at locations other than those associated with structural level stress risers.

  2. A Framework for Performing Multiscale Stochastic Progressive Failure Analysis of Composite Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2007-01-01

    A framework is presented that enables coupled multiscale analysis of composite structures. The recently developed, free, Finite Element Analysis-Micromechanics Analysis Code (FEAMAC) software couples the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with ABAQUS to perform micromechanics based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. As a result, the stochastic nature of fiber breakage in composites can be simulated through incorporation of an appropriate damage and failure model that operates within MAC/GMC on the level of the fiber. Results are presented for the progressive failure analysis of a titanium matrix composite tensile specimen that illustrate the power and utility of the framework and address the techniques needed to model the statistical nature of the problem properly. In particular, it is shown that incorporating fiber strength randomness on multiple scales improves the quality of the simulation by enabling failure at locations other than those associated with structural level stress risers.

  3. Xenogeneic collagen matrix for periodontal plastic surgery procedures: a systematic review and meta-analysis.

    PubMed

    Atieh, M A; Alsabeeha, N; Tawse-Smith, A; Payne, A G T

    2016-08-01

    Several clinical trials describe the effectiveness of xenogeneic collagen matrix (XCM) as an alternative to surgical mucogingival procedures for the treatment of marginal tissue recession and augmentation of insufficient zones of keratinized tissue (KT). The aim of this systematic review and meta-analysis was to evaluate the clinical and patient-centred outcomes of XCM compared to other mucogingival procedures. Applying the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement, randomized controlled trials were searched for in electronic databases, complemented by hand searching. The risk of bias was assessed using the Cochrane Collaboration's Risk of Bias tool, and data were analysed using statistical software. A total of 645 studies were identified, of which six trials were included, with 487 mucogingival defects in 170 participants. Overall meta-analysis showed that connective tissue graft (CTG) in conjunction with the coronally advanced flap (CAF) had a significantly higher percentage of complete/mean root coverage and mean recession reduction than XCM. Insufficient evidence was found to determine any significant differences in width of KT between XCM and CTG. The XCM had a significantly higher mean root coverage, recession reduction and gain in KT compared to CAF alone. No significant differences in patients' aesthetic satisfaction were found between XCM and CTG, except for postoperative morbidity in favour of XCM. Operating time was significantly reduced with the use of XCM compared with CTG but not with CAF alone. There is no evidence to demonstrate the effectiveness of XCM in achieving greater root coverage, recession reduction and gain in KT compared to CTG plus CAF. Superior short-term results in treating root coverage compared with CAF alone are possible. There is limited evidence that XCM may improve aesthetic satisfaction, reduce postoperative morbidity and shorten the operating time. Further long-term randomized controlled trials are required to endorse the supposed advantages of XCM. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Random Matrix Approach to Quantum Adiabatic Evolution Algorithms

    NASA Technical Reports Server (NTRS)

    Boulatov, Alexei; Smelyanskiy, Vadier N.

    2004-01-01

    We analyze the power of quantum adiabatic evolution algorithms (QAEA) for solving random NP-hard optimization problems within a theoretical framework based on random matrix theory (RMT). We present two types of driven RMT models. In the first model, the driving Hamiltonian is represented by Brownian motion in the matrix space. We use the Brownian motion model to obtain a description of multiple avoided crossing phenomena. We show that the failure mechanism of the QAEA is due to the interaction of the ground state with the "cloud" formed by all the excited states, confirming that in the driven RMT models the Landau-Zener mechanism of dissipation is not important. We show that the QAEA has a finite probability of success in a certain range of parameters, implying the polynomial complexity of the algorithm. The second model corresponds to the standard QAEA with the problem Hamiltonian taken from the Gaussian Unitary Ensemble (GUE) of RMT. We show that the level dynamics in this model can be mapped onto the dynamics in the Brownian motion model. However, this driven RMT model always leads to exponential complexity of the algorithm due to the presence of long-range intertemporal correlations of the eigenvalues. Our results indicate that the weakness of effective transitions is the leading effect that can make the Markovian-type QAEA successful.

  5. Random matrix theory and cross-correlations in global financial indices and local stock market indices

    NASA Astrophysics Data System (ADS)

    Nobi, Ashadun; Maeng, Seong Eun; Ha, Gyeong Gyun; Lee, Jae Woo

    2013-02-01

    We analyzed cross-correlations between price fluctuations of global financial indices (20 daily stock indices over the world) and local indices (daily indices of 200 companies in the Korean stock market) by using random matrix theory (RMT). We compared eigenvalues and components of the largest and the second largest eigenvectors of the cross-correlation matrix before, during, and after the global financial crisis of 2008. We find that the majority of the eigenvalues fall within the RMT bounds [λ−, λ+], where λ− and λ+ are the lower and upper bounds of the eigenvalues of random correlation matrices. The components of the eigenvectors for the largest positive eigenvalues indicate the identical financial market mode dominating the global and local indices. On the other hand, the components of the eigenvector corresponding to the second largest eigenvalue take alternating positive and negative values. The components before the crisis change sign during the crisis, and those during the crisis change sign after the crisis. The largest inverse participation ratio (IPR), corresponding to the smallest eigenvector, is higher after the crisis than during any other period in the global and local indices. During the global financial crisis, the correlations among the global indices and among the local stock indices are perturbed significantly. However, the correlations between indices quickly recover their pre-crisis trends.
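
    To reproduce the basic comparison against the RMT bounds described above, one can compute the Marchenko-Pastur limits [λ−, λ+] for the correlation matrix of N uncorrelated series of length T and count the empirical eigenvalues that escape them. The sketch below uses synthetic Gaussian returns as a stand-in; the matrix sizes are hypothetical and not the datasets of this study.

    ```python
    import numpy as np

    def rmt_bounds(n_assets, n_obs):
        # Marchenko-Pastur bounds for eigenvalues of a random correlation matrix
        q = n_assets / n_obs
        return (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2

    rng = np.random.default_rng(0)
    T, N = 1000, 200                        # observations and assets (illustrative sizes)
    returns = rng.standard_normal((T, N))   # replace with real standardized returns
    C = np.corrcoef(returns, rowvar=False)
    lam = np.linalg.eigvalsh(C)
    lo, hi = rmt_bounds(N, T)
    print("eigenvalues outside [lam-, lam+]:", int(np.sum((lam < lo) | (lam > hi))))
    ```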

  6. Robust reliable sampled-data control for switched systems with application to flight control

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Joby, Maya; Shi, P.; Mathiyalagan, K.

    2016-11-01

    This paper addresses the robust reliable stabilisation problem for a class of uncertain switched systems with random delays and norm-bounded uncertainties. The main aim of this paper is to obtain a reliable robust sampled-data control design, involving random time delays and an appropriate control gain matrix, that achieves robust exponential stabilisation of the uncertain switched system against actuator failures. In particular, the involved delays are assumed to be randomly time-varying and to obey certain mutually uncorrelated Bernoulli-distributed white noise sequences. By constructing an appropriate Lyapunov-Krasovskii functional (LKF) and employing an average dwell-time approach, a new set of criteria is derived for ensuring the robust exponential stability of the closed-loop switched system. More precisely, the Schur complement and Jensen's integral inequality are used in the derivation of the stabilisation criteria. By considering the relationship among the random time-varying delay and its lower and upper bounds, a new set of sufficient conditions is established for the existence of the reliable robust sampled-data control in terms of solutions to linear matrix inequalities (LMIs). Finally, an illustrative example based on the F-18 aircraft model is provided to show the effectiveness of the proposed design procedures.

  7. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    PubMed

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen

    In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses are explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
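
    As a toy illustration of consensus under random packet losses (not the paper's sampled-data LMI design), the snippet below runs a discrete-time consensus iteration on a ring of agents where each transmitted packet is dropped independently with a Bernoulli probability; for a small enough step size the spread of the states still shrinks toward consensus. All parameter values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, steps, eps, p_loss = 6, 200, 0.2, 0.3
    A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)  # ring adjacency
    x = rng.standard_normal(n)
    for _ in range(steps):
        drop = rng.random((n, n)) < p_loss       # Bernoulli packet losses on each link
        A_eff = A * ~drop                        # only surviving packets are used
        L = np.diag(A_eff.sum(axis=1)) - A_eff   # Laplacian of the surviving (directed) links
        x = x - eps * L @ x
    print("state spread after", steps, "steps:", np.ptp(x))
    ```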

  8. ``Dressing'' lines and vertices in calculations of matrix elements with the coupled-cluster method and determination of Cs atomic properties

    NASA Astrophysics Data System (ADS)

    Derevianko, Andrei; Porsev, Sergey G.

    2005-03-01

    We consider the evaluation of matrix elements with the coupled-cluster method. Such calculations formally involve an infinite number of terms, and we devise a method of partial summation (dressing) of the resulting series. Our formalism is built upon an expansion of the product C†C of cluster amplitudes C into a sum of n-body insertions. We consider two types of insertions: particle (hole) line insertions and two-particle (two-hole) random-phase-approximation-like insertions. We demonstrate how to “dress” these insertions and formulate iterative equations. We illustrate the dressing equations in the case when the cluster operator is truncated at single and double excitations. Using univalent systems as an example, we upgrade coupled-cluster diagrams for matrix elements with the dressed insertions and highlight a relation to pertinent fourth-order diagrams. We illustrate our formalism with relativistic calculations of the hyperfine constant A(6s) and the 6s_{1/2}–6p_{1/2} electric-dipole transition amplitude for the Cs atom. Finally, we augment the truncated coupled-cluster calculations with otherwise omitted fourth-order diagrams. The resulting analysis for Cs is complete through fourth order of many-body perturbation theory and reveals an important role of triple and disconnected quadruple excitations.

  9. Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*

    PubMed Central

    Katsevich, E.; Katsevich, A.; Singer, A.

    2015-01-01

    In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132

  10. Not all that glitters is RMT in the forecasting of risk of portfolios in the Brazilian stock market

    NASA Astrophysics Data System (ADS)

    Sandoval, Leonidas; Bortoluzzo, Adriana Bruscato; Venezuela, Maria Kelly

    2014-09-01

    Using stocks of the Brazilian stock exchange (BM&F-Bovespa), we build portfolios of stocks based on Markowitz's theory and test the predicted and realized risks. This is done using the correlation matrices between stocks, and also using Random Matrix Theory in order to clean such correlation matrices from noise. We also calculate correlation matrices using a regression model in order to remove the effect of common market movements and their cleaned versions using Random Matrix Theory. This is done for years of both low and high volatility of the Brazilian stock market, from 2004 to 2012. The results show that the use of regression to subtract the market effect on returns greatly increases the accuracy of the prediction of risk, and that, although the cleaning of the correlation matrix often leads to portfolios that better predict risks, in periods of high volatility of the market this procedure may fail to do so. The results may be used in the assessment of the true risks when one builds a portfolio of stocks during periods of crisis.
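
    One common way to implement the cleaning step mentioned above is eigenvalue clipping: eigenvalues of the correlation matrix below the Marchenko-Pastur upper bound are replaced by their average before the matrix is rebuilt and passed to the Markowitz minimum-variance weights. The sketch below shows that recipe only; it omits the regression step used here to remove the market mode, and the function names are illustrative.

    ```python
    import numpy as np

    def clip_correlation(C, T):
        """Flatten the noise bulk of a correlation matrix (eigenvalue clipping)."""
        N = C.shape[0]
        lam_plus = (1 + np.sqrt(N / T)) ** 2
        w, V = np.linalg.eigh(C)
        noise = w < lam_plus
        if noise.any():
            w[noise] = w[noise].mean()            # keep the trace, remove noisy structure
        C_clean = V @ np.diag(w) @ V.T
        d = np.sqrt(np.diag(C_clean))
        return C_clean / np.outer(d, d)           # re-normalize to unit diagonal

    def min_variance_weights(cov):
        ones = np.ones(cov.shape[0])
        x = np.linalg.solve(cov, ones)
        return x / x.sum()
    ```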

  11. Free Fermions and the Classical Compact Groups

    NASA Astrophysics Data System (ADS)

    Cunden, Fabio Deelan; Mezzadri, Francesco; O'Connell, Neil

    2018-06-01

    There is a close connection between the ground state of non-interacting fermions in a box with classical (absorbing, reflecting, and periodic) boundary conditions and the eigenvalue statistics of the classical compact groups. The associated determinantal point processes can be extended in two natural directions: (i) we consider the full family of admissible quantum boundary conditions (i.e., self-adjoint extensions) for the Laplacian on a bounded interval, and the corresponding projection correlation kernels; (ii) we construct the grand canonical extensions at finite temperature of the projection kernels, interpolating from Poisson to random matrix eigenvalue statistics. The scaling limits in the bulk and at the edges are studied in a unified framework, and the question of universality is addressed. Whether the finite temperature determinantal processes correspond to the eigenvalue statistics of some matrix models is, a priori, not obvious. We complete the picture by constructing a finite temperature extension of the Haar measure on the classical compact groups. The eigenvalue statistics of the resulting grand canonical matrix models (of random size) corresponds exactly to the grand canonical measure of free fermions with classical boundary conditions.

  12. Raney Distributions and Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Liu, Dang-Zheng

    2015-03-01

    Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permit a simple parameterized form for their density. We extend this result to the Raney distribution which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the singular values squared of the matrix product formed from inverse standard Gaussian matrices, and standard Gaussian matrices, is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. Supported on , we show that it too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed, and is shown to have some universal features.

  13. Spectrum of the Wilson Dirac operator at finite lattice spacings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akemann, G.; Damgaard, P. H.; Splittorff, K.

    2011-04-15

    We consider the effect of discretization errors on the microscopic spectrum of the Wilson Dirac operator using both chiral perturbation theory and chiral random matrix theory. A graded chiral Lagrangian is used to evaluate the microscopic spectral density of the Hermitian Wilson Dirac operator as well as the distribution of the chirality over the real eigenvalues of the Wilson Dirac operator. It is shown that a chiral random matrix theory for the Wilson Dirac operator reproduces the leading zero-momentum terms of Wilson chiral perturbation theory. All results are obtained for a fixed index of the Wilson Dirac operator. The low-energy constants of Wilson chiral perturbation theory are shown to be constrained by the Hermiticity properties of the Wilson Dirac operator.

  14. Anderson Localization in Quark-Gluon Plasma

    NASA Astrophysics Data System (ADS)

    Kovács, Tamás G.; Pittler, Ferenc

    2010-11-01

    At low temperature the low end of the QCD Dirac spectrum is well described by chiral random matrix theory. In contrast, at high temperature there is no similar statistical description of the spectrum. We show that at high temperature the lowest part of the spectrum consists of a band of statistically uncorrelated eigenvalues obeying essentially Poisson statistics and the corresponding eigenvectors are extremely localized. Going up in the spectrum the spectral density rapidly increases and the eigenvectors become more and more delocalized. At the same time the spectral statistics gradually crosses over to the bulk statistics expected from the corresponding random matrix ensemble. This phenomenon is reminiscent of Anderson localization in disordered conductors. Our findings are based on staggered Dirac spectra in quenched lattice simulations with the SU(2) gauge group.

  15. Horizon in Random Matrix Theory, the Hawking Radiation, and Flow of Cold Atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franchini, Fabio; Kravtsov, Vladimir E.

    2009-10-16

    We propose a Gaussian scalar field theory in a curved 2D metric with an event horizon as the low-energy effective theory for a weakly confined, invariant random matrix ensemble (RME). The presence of an event horizon naturally generates a bath of Hawking radiation, which introduces a finite temperature in the model in a nontrivial way. A similar mapping with a gravitational analogue model has been constructed for a Bose-Einstein condensate (BEC) pushed to flow at a velocity higher than its speed of sound, with Hawking radiation as sound waves propagating over the cold atoms. Our work suggests a threefold connection between a moving BEC system, black-hole physics and unconventional RMEs with possible experimental applications.

  16. Scalable and fault tolerant orthogonalization based on randomized distributed data aggregation

    PubMed Central

    Gansterer, Wilfried N.; Niederbrucker, Gerhard; Straková, Hana; Schulze Grotthoff, Stefan

    2013-01-01

    The construction of distributed algorithms for matrix computations built on top of distributed data aggregation algorithms with randomized communication schedules is investigated. For this purpose, a new aggregation algorithm for summing or averaging distributed values, the push-flow algorithm, is developed, which achieves superior resilience properties with respect to failures compared to existing aggregation methods. It is illustrated that on a hypercube topology it asymptotically requires the same number of iterations as the optimal all-to-all reduction operation and that it scales well with the number of nodes. Orthogonalization is studied as a prototypical matrix computation task. A new fault tolerant distributed orthogonalization method rdmGS, which can produce accurate results even in the presence of node failures, is built on top of distributed data aggregation algorithms. PMID:24748902
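
    The push-flow algorithm is specific to this work, but the kind of randomized distributed aggregation it builds on can be illustrated with classical push-sum gossip averaging, sketched below; the ring topology, round count, and values are hypothetical choices for the example, not the paper's algorithm.

    ```python
    import numpy as np

    def push_sum_average(values, neighbors, n_rounds=50, seed=None):
        """Push-sum gossip: every node converges to the global average of `values`."""
        rng = np.random.default_rng(seed)
        s = np.array(values, dtype=float)   # running sums
        w = np.ones_like(s)                 # running weights
        for _ in range(n_rounds):
            s_new, w_new = s / 2.0, w / 2.0           # keep half locally
            for i, nbrs in enumerate(neighbors):
                j = rng.choice(nbrs)                  # push the other half to a random neighbor
                s_new[j] += s[i] / 2.0
                w_new[j] += w[i] / 2.0
            s, w = s_new, w_new
        return s / w                                  # ratio estimates the average at each node

    ring = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]
    print(push_sum_average([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], ring, seed=1))
    ```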

  17. Preparation and characterization of an advanced collagen aggregate from porcine acellular dermal matrix.

    PubMed

    Liu, Xinhua; Dan, Nianhua; Dan, Weihua

    2016-07-01

    The objective of this study was to extract and characterize an advanced collagen aggregate (Ag-col) from porcine acellular dermal matrix (pADM). Based on histological examination, scanning electron microscopy (SEM) and atomic force microscope (AFM), Ag-col was composed of the D-periodic cross-striated collagen fibrils and thick collagen fiber bundles with uneven diameters and non-orientated arrangement. Fourier transform infrared (FTIR) spectra of pADM, Ag-col and Col were similar and revealed the presence of the triple helix. Circular dichroism (CD) analysis exhibited a slightly higher content of α-helix but inappreciably less amount of random coil structure in Ag-col compared to Col. Moreover, imino acid contents of pADM, Ag-col and Col were 222.43, 218.30 and 190.01 residues/1000 residues, respectively. From zeta potential analysis, a net charge of zero was found at pH 6.45 and 6.11 for Ag-col and Col, respectively. Differential scanning calorimetry (DSC) study suggested that the Td of Ag-col was 20°C higher than that of Col as expected, and dynamic mechanical analysis (DMA) indicated that Ag-col possessed a higher storage modulus but similar loss factor compared to Col. Therefore, the collagen aggregate from pADM could serve as a better alternative source of collagens for further applications in food and biological industries. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Design of a factorial experiment with randomization restrictions to assess medical device performance on vascular tissue

    PubMed Central

    2011-01-01

    Background Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. Methods The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. Results The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. Conclusions The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance. PMID:21599963

  19. Quantifying economic fluctuations by adapting methods of statistical physics

    NASA Astrophysics Data System (ADS)

    Plerou, Vasiliki

    2001-09-01

    The first focus of this thesis is the investigation of cross-correlations between the price fluctuations of different stocks using the conceptual framework of random matrix theory (RMT), developed in physics to describe the statistical properties of energy-level spectra of complex nuclei. RMT makes predictions for the statistical properties of matrices that are universal, i.e., do not depend on the interactions between the elements comprising the system. In physical systems, deviations from the predictions of RMT provide clues regarding the mechanisms controlling the dynamics of a given system, so this framework is of potential value if applied to economic systems. This thesis compares the statistics of the cross-correlation matrix C, whose elements C_ij are the correlation coefficients of price fluctuations of stocks i and j, against the "null hypothesis" of a random matrix having the same symmetry properties. It is shown that comparison of the eigenvalue statistics of C with RMT results can be used to distinguish random and non-random parts of C. The non-random part of C, which deviates from RMT results, provides information regarding genuine cross-correlations between stocks. The interpretations and potential practical utility of these deviations are also investigated. The second focus is the characterization of the dynamics of stock price fluctuations. The statistical properties of the changes G_Δt in price over a time interval Δt are quantified, and the statistical relation between G_Δt and the trading activity, measured by the number of transactions N_Δt in the interval Δt, is investigated. The statistical properties of the volatility, i.e., the time-dependent standard deviation of price fluctuations, are related to two microscopic quantities: N_Δt and the variance W²_Δt of the price changes for all transactions in the interval Δt. In addition, the statistical relationship between G_Δt and the number of shares Q_Δt traded in Δt is investigated.

  20. Generation of physical random numbers by using homodyne detection

    NASA Astrophysics Data System (ADS)

    Hirakawa, Kodai; Oya, Shota; Oguri, Yusuke; Ichikawa, Tsubasa; Eto, Yujiro; Hirano, Takuya; Tsurumaru, Toyohiro

    2016-10-01

    Physical random numbers generated by quantum measurements are, in principle, impossible to predict. We have demonstrated the generation of physical random numbers by using a high-speed balanced photodetector to measure the quadrature amplitudes of vacuum states. Using this method, random numbers were generated at 500 Mbps, which is more than one order of magnitude faster than previously reported [Gabriel et al., Nature Photonics 4, 711-715 (2010)]. The Crush test battery of the TestU01 suite consists of 31 tests in 144 variations, and we used them to statistically analyze these numbers. The generated random numbers passed 14 of the 31 tests. To improve the randomness, we performed a hash operation in which each random number was multiplied by a random Toeplitz matrix; the resulting numbers passed all of the tests in the TestU01 Crush battery.
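
    The hash operation described above amounts to multiplying the raw bit string by a random Toeplitz matrix over GF(2). A generic NumPy sketch of such a Toeplitz-hashing extractor is given below; the block length, output length, and compression ratio are illustrative and not those of the reported experiment.

    ```python
    import numpy as np

    def toeplitz_extract(raw_bits, out_len, seed_bits):
        """Multiply a raw bit block by a seeded random Toeplitz matrix over GF(2)."""
        raw_bits = np.asarray(raw_bits)
        seed_bits = np.asarray(seed_bits)
        n = raw_bits.size
        assert seed_bits.size == out_len + n - 1      # one seed bit per diagonal
        # T[i, j] = seed_bits[i - j + n - 1], constant along diagonals
        idx = np.subtract.outer(np.arange(out_len), np.arange(n)) + n - 1
        T = seed_bits[idx]
        return (T @ raw_bits) % 2

    rng = np.random.default_rng(1)
    raw = rng.integers(0, 2, size=1024)               # stand-in for digitized quadrature bits
    seed = rng.integers(0, 2, size=512 + 1024 - 1)    # public random seed defining the matrix
    out = toeplitz_extract(raw, 512, seed)
    ```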

  1. Comprehensive Thematic T-Matrix Reference Database: A 2015-2017 Update

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Zakharova, Nadezhda; Khlebtsov, Nikolai G.; Videen, Gorden; Wriedt, Thomas

    2017-01-01

    The T-matrix method pioneered by Peter C. Waterman is one of the most versatile and efficient numerically exact computer solvers of the time-harmonic macroscopic Maxwell equations. It is widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, periodic structures (including metamaterials), and particles in the vicinity of plane or rough interfaces separating media with different refractive indices. This paper is the eighth update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated in 2004 and lists relevant publications that have appeared since 2015. It also references a small number of earlier publications overlooked previously.

  2. Comprehensive thematic T-matrix reference database: A 2015-2017 update

    NASA Astrophysics Data System (ADS)

    Mishchenko, Michael I.; Zakharova, Nadezhda T.; Khlebtsov, Nikolai G.; Videen, Gorden; Wriedt, Thomas

    2017-11-01

    The T-matrix method pioneered by Peter C. Waterman is one of the most versatile and efficient numerically exact computer solvers of the time-harmonic macroscopic Maxwell equations. It is widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, periodic structures (including metamaterials), and particles in the vicinity of plane or rough interfaces separating media with different refractive indices. This paper is the eighth update to the comprehensive thematic database of peer-reviewed T-matrix publications initiated in 2004 and lists relevant publications that have appeared since 2015. It also references a small number of earlier publications overlooked previously.

  3. Topological Distances Between Brain Networks

    PubMed Central

    Lee, Hyekyoung; Solo, Victor; Davidson, Richard J.; Pollak, Seth D.

    2018-01-01

    Many existing brain network distances are based on matrix norms. The element-wise differences may fail to capture underlying topological differences. Further, matrix norms are sensitive to outliers. A few extreme edge weights may severely affect the distance. Thus it is necessary to develop network distances that recognize topology. In this paper, we introduce Gromov-Hausdorff (GH) and Kolmogorov-Smirnov (KS) distances. GH-distance is often used in persistent homology based brain network models. The superior performance of KS-distance is contrasted against matrix norms and GH-distance in random network simulations with the ground truths. The KS-distance is then applied in characterizing the multimodal MRI and DTI study of maltreated children.

  4. Protein structure estimation from NMR data by matrix completion.

    PubMed

    Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing

    2017-09-01

    Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
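
    The second stage above is a nuclear-norm matrix completion solved with an accelerated proximal gradient method. The sketch below shows the plain (non-accelerated) variant, iterated singular value thresholding, assuming the observed entries are marked by a binary mask; the threshold tau and step size are hypothetical tuning values, and the triangle-inequality guessing stage is omitted.

    ```python
    import numpy as np

    def complete_low_rank(D_obs, mask, tau=5.0, step=1.0, n_iter=500):
        """Proximal-gradient sketch for low-rank completion of a partially observed matrix.

        Minimizes 0.5 * ||mask * (X - D_obs)||_F^2 + tau * ||X||_* by alternating a
        gradient step on the data-fit term with soft-thresholding of the singular values.
        """
        X = np.zeros_like(D_obs, dtype=float)
        for _ in range(n_iter):
            G = mask * (X - D_obs)                     # gradient of the data-fit term
            U, s, Vt = np.linalg.svd(X - step * G, full_matrices=False)
            s = np.maximum(s - step * tau, 0.0)        # soft-threshold the spectrum
            X = (U * s) @ Vt
        return X
    ```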

  5. MATIN: A Random Network Coding Based Framework for High Quality Peer-to-Peer Live Video Streaming

    PubMed Central

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficient vector as the header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficient vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method such that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one instead of n coefficient entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. In this regard, peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms the RNC that uses the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay. PMID:23940530

  6. SUNPLIN: Simulation with Uncertainty for Phylogenetic Investigations

    PubMed Central

    2013-01-01

    Background Phylogenetic comparative analyses usually rely on a single consensus phylogenetic tree in order to study evolutionary processes. However, most phylogenetic trees are incomplete with regard to species sampling, which may critically compromise analyses. Some approaches have been proposed to integrate non-molecular phylogenetic information into incomplete molecular phylogenies. An expanded tree approach consists of adding missing species to random locations within their clade. The information contained in the topology of the resulting expanded trees can be captured by the pairwise phylogenetic distance between species and stored in a matrix for further statistical analysis. Thus, the random expansion and processing of multiple phylogenetic trees can be used to estimate the phylogenetic uncertainty through a simulation procedure. Because of the computational burden required, unless this procedure is efficiently implemented, the analyses are of limited applicability. Results In this paper, we present efficient algorithms and implementations for randomly expanding and processing phylogenetic trees so that simulations involved in comparative phylogenetic analysis with uncertainty can be conducted in a reasonable time. We propose algorithms for both randomly expanding trees and calculating distance matrices. We made available the source code, which was written in the C++ language. The code may be used as a standalone program or as a shared object in the R system. The software can also be used as a web service through the link: http://purl.oclc.org/NET/sunplin/. Conclusion We compare our implementations to similar solutions and show that significant performance gains can be obtained. Our results open up the possibility of accounting for phylogenetic uncertainty in evolutionary and ecological analyses of large datasets. PMID:24229408

  7. SUNPLIN: simulation with uncertainty for phylogenetic investigations.

    PubMed

    Martins, Wellington S; Carmo, Welton C; Longo, Humberto J; Rosa, Thierson C; Rangel, Thiago F

    2013-11-15

    Phylogenetic comparative analyses usually rely on a single consensus phylogenetic tree in order to study evolutionary processes. However, most phylogenetic trees are incomplete with regard to species sampling, which may critically compromise analyses. Some approaches have been proposed to integrate non-molecular phylogenetic information into incomplete molecular phylogenies. An expanded tree approach consists of adding missing species to random locations within their clade. The information contained in the topology of the resulting expanded trees can be captured by the pairwise phylogenetic distance between species and stored in a matrix for further statistical analysis. Thus, the random expansion and processing of multiple phylogenetic trees can be used to estimate the phylogenetic uncertainty through a simulation procedure. Because of the computational burden required, unless this procedure is efficiently implemented, the analyses are of limited applicability. In this paper, we present efficient algorithms and implementations for randomly expanding and processing phylogenetic trees so that simulations involved in comparative phylogenetic analysis with uncertainty can be conducted in a reasonable time. We propose algorithms for both randomly expanding trees and calculating distance matrices. We made available the source code, which was written in the C++ language. The code may be used as a standalone program or as a shared object in the R system. The software can also be used as a web service through the link: http://purl.oclc.org/NET/sunplin/. We compare our implementations to similar solutions and show that significant performance gains can be obtained. Our results open up the possibility of accounting for phylogenetic uncertainty in evolutionary and ecological analyses of large datasets.

  8. Strategies for efficient resolution analysis in full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Leeuwen, T.; Trampert, J.

    2016-12-01

    Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
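
    In the spirit of the random probing family described above, the sketch below estimates the diagonal of a matrix-free Hessian by auto-correlating its action on uncorrelated ±1 test models (a Hutchinson-type estimator). It is a simplified stand-in for the richer point-spread-function proxies discussed here; apply_hessian, the probe count, and the model size are placeholders.

    ```python
    import numpy as np

    def probe_diagonal(apply_hessian, n_params, n_probes=8, seed=None):
        """Estimate diag(H) from a few applications of a matrix-free Hessian."""
        rng = np.random.default_rng(seed)
        est = np.zeros(n_params)
        for _ in range(n_probes):
            v = rng.choice([-1.0, 1.0], size=n_params)   # uncorrelated random test model
            est += v * apply_hessian(v)                  # each application ~ one adjoint computation
        return est / n_probes
    ```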

  9. Ovine tendon collagen: Extraction, characterisation and fabrication of thin films for tissue engineering applications.

    PubMed

    Fauzi, M B; Lokanathan, Y; Aminuddin, B S; Ruszymah, B H I; Chowdhury, S R

    2016-11-01

    Collagen is the most abundant extracellular matrix (ECM) protein in the human body, thus widely used in tissue engineering and subsequent clinical applications. This study aimed to extract collagen from ovine (Ovis aries) Achilles tendon (OTC), and to evaluate its physicochemical properties and its potential to fabricate thin film with collagen fibrils in a random or aligned orientation. Acid-solubilized protein was extracted from ovine Achilles tendon using 0.35M acetic acid, and 80% of extracted protein was measured as collagen. SDS-PAGE and mass spectrometry analysis revealed the presence of alpha 1 and alpha 2 chain of collagen type I (col I). Further analysis with Fourier transform infrared spectrometry (FTIR), X-ray diffraction (XRD) and energy dispersive X-ray spectroscopy (EDS) confirms the presence of triple helix structure of col I, similar to commercially available rat tail col I. Drying the OTC solution at 37°C resulted in formation of a thin film with randomly orientated collagen fibrils (random collagen film; RCF). Introduction of unidirectional mechanical intervention using a platform rocker prior to drying facilitated the fabrication of a film with aligned orientation of collagen fibril (aligned collagen film; ACF). It was shown that both RCF and ACF significantly enhanced human dermal fibroblast (HDF) attachment and proliferation than that on plastic surface. Moreover, cells were distributed randomly on RCF, but aligned with the direction of mechanical intervention on ACF. In conclusion, ovine tendon could be an alternative source of col I to fabricate scaffold for tissue engineering applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Yang, Gang; Wu, Ke; Li, Weiyue; Zhang, Dianfa

    2017-09-01

    A robust kernel archetypoid analysis (RKADA) method is proposed to extract pure endmembers from hyperspectral imagery (HSI). The RKADA assumes that each pixel is a sparse linear mixture of all endmembers and each endmember corresponds to a real pixel in the image scene. First, it improves the regular archetypal analysis with a new binary sparse constraint, and the adoption of the kernel function constructs the principal convex hull in an infinite Hilbert space and enlarges the divergences between pairwise pixels. Second, the RKADA transfers the pure endmember extraction problem into an optimization problem by minimizing residual errors with the Huber loss function. The Huber loss function reduces the effects from big noises and outliers in the convergence procedure of RKADA and enhances the robustness of the optimization function. Third, the random kernel sinks for fast kernel matrix approximation and the two-stage algorithm for optimizing initial pure endmembers are utilized to improve its computational efficiency in realistic implementations of RKADA, respectively. The optimization equation of RKADA is solved by using the block coordinate descend scheme and the desired pure endmembers are finally obtained. Six state-of-the-art pure endmember extraction methods are employed to make comparisons with the RKADA on both synthetic and real Cuprite HSI datasets, including three geometrical algorithms vertex component analysis (VCA), alternative volume maximization (AVMAX) and orthogonal subspace projection (OSP), and three matrix factorization algorithms the preconditioning for successive projection algorithm (PreSPA), hierarchical clustering based on rank-two nonnegative matrix factorization (H2NMF) and self-dictionary multiple measurement vector (SDMMV). Experimental results show that the RKADA outperforms all the six methods in terms of spectral angle distance (SAD) and root-mean-square-error (RMSE). Moreover, the RKADA has short computational times in offline operations and shows significant improvement in identifying pure endmembers for ground objects with smaller spectrum differences. Therefore, the RKADA could be an alternative for pure endmember extraction from hyperspectral images.

  11. Properties of networks with partially structured and partially random connectivity

    NASA Astrophysics Data System (ADS)

    Ahmadian, Yashar; Fumarola, Francesco; Miller, Kenneth D.

    2015-01-01

    Networks studied in many disciplines, including neuroscience and mathematical biology, have connectivity that may be stochastic about some underlying mean connectivity represented by a non-normal matrix. Furthermore, the stochasticity may not be independent and identically distributed (iid) across elements of the connectivity matrix. More generally, the problem of understanding the behavior of stochastic matrices with nontrivial mean structure and correlations arises in many settings. We address this by characterizing large random N × N matrices of the form A = M + LJR, where M, L, and R are arbitrary deterministic matrices and J is a random matrix of zero-mean iid elements. M can be non-normal, and L and R allow correlations that have separable dependence on row and column indices. We first provide a general formula for the eigenvalue density of A. For A non-normal, the eigenvalues do not suffice to specify the dynamics induced by A, so we also provide general formulas for the transient evolution of the magnitude of activity and the frequency power spectrum in an N-dimensional linear dynamical system with a coupling matrix given by A. These quantities can also be thought of as characterizing the stability and the magnitude of the linear response of a nonlinear network to small perturbations about a fixed point. We derive these formulas and work them out analytically for some examples of M, L, and R motivated by neurobiological models. We also argue that the persistence as N → ∞ of a finite number of randomly distributed outlying eigenvalues outside the support of the eigenvalue density of A, as previously observed, arises in regions of the complex plane Ω where there are nonzero singular values of L⁻¹(z1 − M)R⁻¹ (for z ∈ Ω) that vanish as N → ∞. When such singular values do not exist and L and R are equal to the identity, there is a correspondence in the normalized Frobenius norm (but not in the operator norm) between the support of the spectrum of A for J of norm σ and the σ-pseudospectrum of M.
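
    The analytical density formulas above can be checked numerically by sampling matrices of the stated form A = M + LJR and computing their eigenvalues directly, as in the sketch below; the feed-forward-chain choice of M and all sizes are hypothetical examples, not taken from the paper.

    ```python
    import numpy as np

    def sample_spectrum(M, L, R, sigma=1.0, seed=None):
        """Eigenvalues of A = M + L J R with J an iid zero-mean Gaussian matrix."""
        rng = np.random.default_rng(seed)
        N = M.shape[0]
        J = rng.standard_normal((N, N)) * (sigma / np.sqrt(N))   # element variance sigma^2 / N
        return np.linalg.eigvals(M + L @ J @ R)

    N = 300
    M = np.diag(2.0 * np.ones(N - 1), k=1)        # non-normal mean connectivity (feed-forward chain)
    eigs = sample_spectrum(M, np.eye(N), np.eye(N), sigma=0.5, seed=0)
    ```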

  12. Fermi’s golden rule, the origin and breakdown of Markovian master equations, and the relationship between oscillator baths and the random matrix model

    NASA Astrophysics Data System (ADS)

    Santra, Siddhartha; Cruikshank, Benjamin; Balu, Radhakrishnan; Jacobs, Kurt

    2017-10-01

    Fermi’s golden rule applies to a situation in which a single quantum state |ψ⟩ is coupled to a near-continuum. This ‘quasi-continuum coupling’ structure results in a rate equation for the population of |ψ⟩. Here we show that the coupling of a quantum system to the standard model of a thermal environment, a bath of harmonic oscillators, can be decomposed into a ‘cascade’ made up of the quasi-continuum coupling structures of Fermi’s golden rule. This clarifies the connection between the physics of the golden rule and that of a thermal bath, and provides a non-rigorous but physically intuitive derivation of the Markovian master equation directly from the former. The exact solution to the Hamiltonian of the golden rule, known as the Bixon-Jortner model, generalized for an asymmetric spectrum, provides a window on how the evolution induced by the bath deviates from the master equation as one moves outside the Markovian regime. Our analysis also reveals the relationship between the oscillator bath and the ‘random matrix model’ (RMT) of a thermal bath. We show that the cascade structure is the one essential difference between the two models, and that the lack of it prevents the RMT from generating transition rates that are independent of the initial state of the system. We suggest that the cascade structure is one of the generic elements of thermalizing many-body systems.
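
    A minimal numerical version of the quasi-continuum coupling structure discussed above (a Bixon-Jortner-type model with a symmetric, equally spaced spectrum) is sketched below: one state is coupled uniformly to N discrete levels, and its survival probability can be compared with the exponential decay at the golden-rule rate 2πg²/Δ. All parameter values are illustrative.

    ```python
    import numpy as np

    # One state |psi> coupled uniformly to N equally spaced quasi-continuum levels.
    N, spacing, g = 400, 0.01, 0.02                 # hypothetical level spacing and coupling
    E = spacing * (np.arange(N) - N // 2)
    H = np.zeros((N + 1, N + 1))
    H[0, 1:] = H[1:, 0] = g
    H[1:, 1:] = np.diag(E)

    vals, vecs = np.linalg.eigh(H)
    overlaps_sq = vecs[0, :] ** 2                   # |<psi|eigenstate>|^2
    t = np.linspace(0.0, 200.0, 400)
    survival = np.abs(np.exp(-1j * np.outer(t, vals)) @ overlaps_sq) ** 2

    gamma = 2 * np.pi * g ** 2 / spacing            # Fermi golden rule decay rate
    # survival should track exp(-gamma * t) until recurrences near t ~ 2*pi/spacing
    ```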

  13. Incorporation of mesoporous silica nanoparticles into random electrospun PLGA and PLGA/gelatin nanofibrous scaffolds enhances mechanical and cell proliferation properties.

    PubMed

    Mehrasa, Mohammad; Asadollahi, Mohammad Ali; Nasri-Nasrabadi, Bijan; Ghaedi, Kamran; Salehi, Hossein; Dolatshahi-Pirouz, Alireza; Arpanaei, Ayyoob

    2016-09-01

    Poly(lactic-co-glycolic acid) (PLGA) and PLGA/gelatin random nanofibrous scaffolds embedded with different amounts of mesoporous silica nanoparticles (MSNPs) were fabricated using the electrospinning method. To evaluate the effects of the nanoparticles on the scaffolds, physical, chemical, and mechanical properties as well as in vitro degradation behavior of the scaffolds were investigated. The mean diameters of the nanofibers were 974 ± 68 nm for the pure PLGA scaffolds vs 832 ± 70, 764 ± 80, and 486 ± 64 nm for the PLGA/gelatin, PLGA/10 wt% MSNPs, and PLGA/gelatin/10 wt% MSNPs scaffolds, respectively. The results suggested that the incorporation of gelatin and MSNPs into PLGA-based scaffolds enhances the hydrophilicity of the scaffolds due to an increase of hydrophilic functional groups on the surface of the nanofibers. Porosity examination showed that the incorporation of MSNPs and gelatin decreases the porosity of the scaffolds. The nanoparticles also improved the tensile mechanical properties of the scaffolds. In vitro degradation analysis showed that the addition of nanoparticles to the nanofiber matrix increases the weight loss percentage of the PLGA-based samples, whereas it decreases the weight loss percentage in the PLGA/gelatin composites. Cultivation of the rat pheochromocytoma cell line (PC12), as precursor cells of dopaminergic neural cells, on the scaffolds demonstrated that the introduction of MSNPs into the PLGA and PLGA/gelatin matrix leads to improved cell attachment and proliferation and enhances cellular processes. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Gene expression based mouse brain parcellation using Markov random field regularized non-negative matrix factorization

    NASA Astrophysics Data System (ADS)

    Pathak, Sayan D.; Haynor, David R.; Thompson, Carol L.; Lein, Ed; Hawrylycz, Michael

    2009-02-01

    Understanding the geography of genetic expression in the mouse brain has opened previously unexplored avenues in neuroinformatics. The Allen Brain Atlas (www.brain-map.org) (ABA) provides genome-wide colorimetric in situ hybridization (ISH) gene expression images at high spatial resolution, all mapped to a common three-dimensional 200 μm³ spatial framework defined by the Allen Reference Atlas (ARA), and is a unique data set for studying expression-based structural and functional organization of the brain. The goal of this study was to facilitate an unbiased data-driven structural partitioning of the major structures in the mouse brain. We have developed an algorithm that uses nonnegative matrix factorization (NMF) to perform parts-based analysis of ISH gene expression images. The standard NMF approach and its variants are limited in their ability to flexibly integrate prior knowledge in the context of spatial data. In this paper, we introduce spatial connectivity as an additional regularization in the NMF decomposition via the use of Markov Random Fields (mNMF). The mNMF algorithm alternates neighborhood updates with iterations of the standard NMF algorithm to exploit spatial correlations in the data. We present the algorithm and show the subdivisions of the hippocampus and somatosensory cortex obtained via this approach. The results are compared with established neuroanatomic knowledge. We also highlight novel gene expression based subdivisions of the hippocampus identified by using the mNMF algorithm.
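
    The mNMF alternates Markov-random-field neighborhood updates with iterations of standard NMF; the sketch below shows only the standard multiplicative-update NMF step on a nonnegative voxel-by-gene matrix, without the spatial regularizer. The rank k and iteration count are placeholders.

    ```python
    import numpy as np

    def nmf(V, k, n_iter=200, eps=1e-9, seed=None):
        """Plain NMF via Lee-Seung multiplicative updates: V (voxels x genes) ~ W @ H."""
        rng = np.random.default_rng(seed)
        n, m = V.shape
        W = rng.random((n, k))
        H = rng.random((k, m))
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # update gene loadings
            W *= (V @ H.T) / (W @ H @ H.T + eps)   # update spatial components
        return W, H
    ```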

  15. Euclidean commute time distance embedding and its application to spectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Albano, James A.; Messinger, David W.

    2012-06-01

    Spectral image analysis problems often begin by performing a preprocessing step composed of applying a transformation that generates an alternative representation of the spectral data. In this paper, a transformation based on a Markov-chain model of a random walk on a graph is introduced. More precisely, we quantify the random walk using a quantity known as the average commute time distance and find a nonlinear transformation that embeds the nodes of a graph in a Euclidean space where the separation between them is equal to the square root of this quantity. This has been referred to as the Commute Time Distance (CTD) transformation and it has the important characteristic of increasing when the number of paths between two nodes decreases and/or the lengths of those paths increase. Remarkably, a closed form solution exists for computing the average commute time distance that avoids running an iterative process and is found by simply performing an eigendecomposition on the graph Laplacian matrix. Contained in this paper is a discussion of the particular graph constructed on the spectral data from which the commute time distance is then calculated, an introduction to some important properties of the graph Laplacian matrix, and a subspace projection that approximately preserves the maximal variance of the square root commute time distance. Finally, RX anomaly detection and Topological Anomaly Detection (TAD) algorithms will be applied to the CTD subspace followed by a discussion of their results.
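
    A minimal sketch of the commute-time embedding in Python with NumPy and scikit-learn, assuming the spectral pixels are stacked as rows of X; the k-nearest-neighbour graph and Gaussian edge weighting are illustrative choices, not necessarily the graph construction used in the paper:

      import numpy as np
      from sklearn.neighbors import kneighbors_graph

      def commute_time_embedding(X, k=10):
          """Embed rows of X so Euclidean distances equal sqrt(commute time)."""
          D = kneighbors_graph(X, n_neighbors=k, mode='distance').toarray()
          mask = D > 0
          sigma = np.median(D[mask])
          W = np.where(mask, np.exp(-D**2 / (2 * sigma**2)), 0.0)
          W = np.maximum(W, W.T)                      # symmetrize the graph
          L = np.diag(W.sum(axis=1)) - W              # graph Laplacian
          L_pinv = np.linalg.pinv(L)                  # closed form, no iteration needed
          vals, vecs = np.linalg.eigh(L_pinv)
          # Rows z_i satisfy ||z_i - z_j||^2 = vol(G) * (L+_ii + L+_jj - 2 L+_ij),
          # i.e. the average commute time between nodes i and j.
          return np.sqrt(W.sum()) * vecs * np.sqrt(np.maximum(vals, 0.0))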

  16. Spatially patterned matrix elasticity directs stem cell fate

    NASA Astrophysics Data System (ADS)

    Yang, Chun; DelRio, Frank W.; Ma, Hao; Killaars, Anouk R.; Basta, Lena P.; Kyburz, Kyle A.; Anseth, Kristi S.

    2016-08-01

    There is a growing appreciation for the functional role of matrix mechanics in regulating stem cell self-renewal and differentiation processes. However, it is largely unknown how subcellular, spatial mechanical variations in the local extracellular environment mediate intracellular signal transduction and direct cell fate. Here, the effect of spatial distribution, magnitude, and organization of subcellular matrix mechanical properties on human mesenchymal stem cell (hMSCs) function was investigated. Exploiting a photodegradation reaction, a hydrogel cell culture substrate was fabricated with regions of spatially varied and distinct mechanical properties, which were subsequently mapped and quantified by atomic force microscopy (AFM). The variations in the underlying matrix mechanics were found to regulate cellular adhesion and transcriptional events. Highly spread, elongated morphologies and higher Yes-associated protein (YAP) activation were observed in hMSCs seeded on hydrogels with higher concentrations of stiff regions in a dose-dependent manner. However, when the spatial organization of the mechanically stiff regions was altered from a regular to randomized pattern, lower levels of YAP activation with smaller and more rounded cell morphologies were induced in hMSCs. We infer from these results that irregular, disorganized variations in matrix mechanics, compared with regular patterns, appear to disrupt actin organization, and lead to different cell fates; this was verified by observations of lower alkaline phosphatase (ALP) activity and higher expression of CD105, a stem cell marker, in hMSCs in random versus regular patterns of mechanical properties. Collectively, this material platform has allowed innovative experiments to elucidate a novel spatial mechanical dosing mechanism that correlates to both the magnitude and organization of spatial stiffness.

  17. Veterinary Medicine and Multi-Omics Research for Future Nutrition Targets: Metabolomics and Transcriptomics of the Common Degenerative Mitral Valve Disease in Dogs.

    PubMed

    Li, Qinghong; Freeman, Lisa M; Rush, John E; Huggins, Gordon S; Kennedy, Adam D; Labuda, Jeffrey A; Laflamme, Dorothy P; Hannah, Steven S

    2015-08-01

    Canine degenerative mitral valve disease (DMVD) is the most common form of heart disease in dogs. The objective of this study was to identify cellular and metabolic pathways that play a role in DMVD by performing metabolomics and transcriptomics analyses on serum and tissue (mitral valve and left ventricle) samples previously collected from dogs with DMVD or healthy hearts. Gas or liquid chromatography followed by mass spectrometry was used to identify metabolites in serum. Transcriptomics analysis of tissue samples was completed using RNA-seq, and selected targets were confirmed by RT-qPCR. Random Forest analysis was used to classify the metabolites that best predicted the presence of DMVD. Results identified 41 known and 13 unknown serum metabolites that were significantly different between healthy and DMVD dogs, representing alterations in fat and glucose energy metabolism, oxidative stress, and other pathways. The three metabolites with the greatest single effect in the Random Forest analysis were γ-glutamylmethionine, oxidized glutathione, and asymmetric dimethylarginine. Transcriptomics analysis identified 812 differentially expressed transcripts in left ventricle samples and 263 in mitral valve samples, representing changes in energy metabolism, antioxidant function, nitric oxide signaling, and extracellular matrix homeostasis pathways. Many of the identified alterations may benefit from nutritional or medical management. Our study provides evidence of the growing importance of integrative approaches in multi-omics research in veterinary and nutritional sciences.

  18. Predictive value of initial FDG-PET features for treatment response and survival in esophageal cancer patients treated with chemo-radiation therapy using a random forest classifier.

    PubMed

    Desbordes, Paul; Ruan, Su; Modzelewski, Romain; Pineau, Pascal; Vauclin, Sébastien; Gouel, Pierrick; Michel, Pierre; Di Fiore, Frédéric; Vera, Pierre; Gardin, Isabelle

    2017-01-01

    In oncology, texture features extracted from positron emission tomography with 18-fluorodeoxyglucose images (FDG-PET) are of increasing interest for predictive and prognostic studies, leading to several tens of features per tumor. To select the best features, the use of a random forest (RF) classifier was investigated. Sixty-five patients with an esophageal cancer treated with a combined chemo-radiation therapy were retrospectively included. All patients underwent a pretreatment whole-body FDG-PET. The patients were followed for 3 years after the end of the treatment. The response assessment was performed 1 month after the end of the therapy. Patients were classified as complete responders and non-complete responders. Sixty-one features were extracted from medical records and PET images. First, Spearman's analysis was performed to eliminate correlated features. Then, the best predictive and prognostic subsets of features were selected using a RF algorithm. These results were compared to those obtained by a Mann-Whitney U test (predictive study) and a univariate Kaplan-Meier analysis (prognostic study). Among the 61 initial features, 28 were not correlated. From these 28 features, the best subset of complementary features found using the RF classifier to predict response was composed of 2 features: metabolic tumor volume (MTV) and homogeneity from the co-occurrence matrix. The corresponding predictive value (AUC = 0.836 ± 0.105, Se = 82 ± 9%, Sp = 91 ± 12%) was higher than the best predictive results found using the Mann-Whitney test: busyness from the gray level difference matrix (P < 0.0001, AUC = 0.810, Se = 66%, Sp = 88%). The best prognostic subset found using RF was composed of 3 features: MTV and 2 clinical features (WHO status and nutritional risk index) (AUC = 0.822 ± 0.059, Se = 79 ± 9%, Sp = 95 ± 6%), while no feature was significantly prognostic according to the Kaplan-Meier analysis. The RF classifier can improve predictive and prognostic values compared to the Mann-Whitney U test and the univariate Kaplan-Meier survival analysis when applied to several tens of features in a limited patient database.
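
    A hedged sketch of the two-stage selection described above (Spearman filtering, then random-forest ranking), in Python with SciPy and scikit-learn; the 0.8 correlation threshold, the number of trees, and the size of the returned subset are assumptions for illustration only:

      import numpy as np
      from scipy.stats import spearmanr
      from sklearn.ensemble import RandomForestClassifier

      def select_features(X, y, corr_threshold=0.8, n_keep=2):
          """Drop strongly Spearman-correlated columns, then rank the rest with RF."""
          rho, _ = spearmanr(X)                       # features as columns of X
          rho = np.abs(rho)
          keep = []
          for j in range(X.shape[1]):
              if all(rho[j, i] < corr_threshold for i in keep):
                  keep.append(j)                      # retain only weakly correlated features
          rf = RandomForestClassifier(n_estimators=500, random_state=0)
          rf.fit(X[:, keep], y)
          order = np.argsort(rf.feature_importances_)[::-1]
          return [keep[i] for i in order[:n_keep]]    # best complementary subset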

  19. Deformation mechanisms of idealised cermets under multi-axial loading

    NASA Astrophysics Data System (ADS)

    Bele, E.; Goel, A.; Pickering, E. G.; Borstnar, G.; Katsamenis, O. L.; Pierron, F.; Danas, K.; Deshpande, V. S.

    2017-05-01

    The response of idealised cermets comprising approximately 60% by volume steel spheres in a Sn/Pb solder matrix is investigated under a range of axisymmetric compressive stress states. Digital volume correlation (DVC) analysis of X-ray micro-computed tomography scans (μ-CT), and the measured macroscopic stress-strain curves of the specimens revealed two deformation mechanisms. At low triaxialities the deformation is granular in nature, with dilation occurring within shear bands. Under higher imposed hydrostatic pressures, the deformation mechanism transitions to a more homogeneous incompressible mode. However, DVC analyses revealed that under all triaxialities there are regions with local dilatory and compaction responses, with the magnitude of dilation and the number of zones wherein dilation occurs decreasing with increasing triaxiality. Two numerical models are presented in order to clarify these mechanisms: (i) a periodic unit cell model comprising nearly rigid spherical particles in a porous metal matrix and (ii) a discrete element model comprising a large random aggregate of spheres connected by non-linear normal and tangential "springs". The periodic unit cell model captured the measured stress-strain response with reasonable accuracy but under-predicted the observed dilation at the lower triaxialities, because the kinematic constraints imposed by the skeleton of rigid particles were not accurately accounted for in this model. By contrast, the discrete element model captured the kinematics and predicted both the overall levels of dilation and the simultaneous presence of both local compaction and dilatory regions within the specimens. However, the levels of dilation in this model are dependent on the assumed contact law between the spheres. Moreover, since the matrix is not explicitly included in the analysis, this model cannot be used to predict the stress-strain responses. These analyses have revealed that the complete constitutive response of cermets depends both on the kinematic constraints imposed by the particle aggregate skeleton, and the constraints imposed by the metal matrix filling the interstitial spaces in that skeleton.

  20. Machine-learned cluster identification in high-dimensional data.

    PubMed

    Ultsch, Alfred; Lötsch, Jörn

    2017-02-01

    High-dimensional biomedical data are frequently clustered to identify subgroup structures pointing at distinct disease subtypes. It is crucial that the used cluster algorithm works correctly. However, by imposing a predefined shape on the clusters, classical algorithms occasionally suggest a cluster structure in homogeneously distributed data or assign data points to incorrect clusters. We analyzed whether this can be avoided by using emergent self-organizing feature maps (ESOM). Data sets with different degrees of complexity were submitted to ESOM analysis with large numbers of neurons, using an interactive R-based bioinformatics tool. On top of the trained ESOM the distance structure in the high dimensional feature space was visualized in the form of a so-called U-matrix. Clustering results were compared with those provided by classical common cluster algorithms including single linkage, Ward and k-means. Ward clustering imposed cluster structures on cluster-less "golf ball", "cuboid" and "S-shaped" data sets that contained no structure at all (random data). Ward clustering also imposed structures on permuted real world data sets. By contrast, the ESOM/U-matrix approach correctly found that these data contain no cluster structure. However, ESOM/U-matrix was correct in identifying clusters in biomedical data truly containing subgroups. It was always correct in cluster structure identification in further canonical artificial data. Using intentionally simple data sets, it is shown that popular clustering algorithms typically used for biomedical data sets may fail to cluster data correctly, suggesting that they are also likely to perform erroneously on high dimensional biomedical data. The present analyses emphasized that generally established classical hierarchical clustering algorithms carry a considerable tendency to produce erroneous results. By contrast, unsupervised machine-learned analysis of cluster structures, applied using the ESOM/U-matrix method, is a viable, unbiased method to identify true clusters in the high-dimensional space of complex data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
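
    The core observation (a hierarchical algorithm will happily partition structureless data) is easy to reproduce; the following Python sketch uses SciPy and scikit-learn on uniform "cuboid"-like random data, with the cluster count and silhouette check chosen for illustration rather than taken from the paper:

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from sklearn.metrics import silhouette_score

      rng = np.random.default_rng(0)
      X = rng.uniform(size=(500, 10))                 # homogeneously distributed data

      Z = linkage(X, method='ward')                   # Ward always returns a dendrogram...
      labels = fcluster(Z, t=3, criterion='maxclust') # ...and will cut it into 3 "clusters"

      # A near-zero silhouette hints that the partition is an artifact of the
      # algorithm rather than genuine subgroup structure.
      print(silhouette_score(X, labels))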

  1. Statistical uncertainty analysis applied to the DRAGONv4 code lattice calculations and based on JENDL-4 covariance data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.

    2012-07-01

    In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k∞ and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest amount of isotopic covariance matrices among the different major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonant self-shielding calculations such as DRAGONv4. (authors)
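
    A minimal sketch of the sampling step in Python with SciPy, assuming independent, normally distributed multi-group cross-sections; the nominal values, relative uncertainties, and the omission of cross-correlations (the study uses full JENDL-4 covariance matrices) are simplifying assumptions, and the DRAGONv4 runs are only indicated by a comment:

      import numpy as np
      from scipy.stats import qmc, norm

      n_runs, n_xs = 500, 172                  # 500 code runs, 172-group library (illustrative)
      mean = np.ones(n_xs)                     # placeholder nominal cross-sections
      std = 0.05 * mean                        # placeholder relative uncertainties

      sampler = qmc.LatinHypercube(d=n_xs, seed=1)
      u = sampler.random(n=n_runs)             # stratified uniforms in [0, 1)
      samples = norm.ppf(u, loc=mean, scale=std)   # map to normal cross-section samples

      # Each row would feed one DRAGONv4 run; with 500 runs, 95%/95% tolerance
      # limits can then be read from the ordered k-infinity outputs.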

  2. Bayesian network meta-analysis of root coverage procedures: ranking efficacy and identification of best treatment.

    PubMed

    Buti, Jacopo; Baccini, Michela; Nieri, Michele; La Marca, Michele; Pini-Prato, Giovan P

    2013-04-01

    The aim of this work was to conduct a Bayesian network meta-analysis (NM) of randomized controlled trials (RCTs) to establish a ranking in efficacy and the best technique for coronally advanced flap (CAF)-based root coverage procedures. A literature search on PubMed, Cochrane libraries, EMBASE, and hand-searched journals until June 2012 was conducted to identify RCTs on treatments of Miller Class I and II gingival recessions with at least 6 months of follow-up. The treatment outcomes were recession reduction (RecRed), clinical attachment gain (CALgain), keratinized tissue gain (KTgain), and complete root coverage (CRC). Twenty-nine studies met the inclusion criteria, 20 of which were classified as at high risk of bias. The CAF+connective tissue graft (CTG) combination ranked highest in effectiveness for RecRed (Probability of being the best = 40%) and CALgain (Pr = 33%); CAF+enamel matrix derivative (EMD) was slightly better for CRC; CAF+Collagen Matrix (CM) appeared effective for KTgain (Pr = 69%). Network inconsistency was low for all outcomes excluding CALgain. CAF+CTG might be considered the gold standard in root coverage procedures. The low amount of inconsistency gives support to the reliability of the present findings. © 2012 John Wiley & Sons A/S.

  3. Biological Assessment of a Calcium Silicate Incorporated Hydroxyapatite-Gelatin Nanocomposite: A Comparison to Decellularized Bone Matrix

    PubMed Central

    Lee, Dong Joon; Padilla, Ricardo; Zhang, He; Hu, Wei-Shou; Ko, Ching-Chang

    2014-01-01

    Our laboratory utilized biomimicry to develop a synthetic bone scaffold based on hydroxyapatite-gelatin-calcium silicate (HGCS). Here, we evaluated the potential of the HGCS scaffold in bone formation in vivo using the rat calvarial critical-sized defect (CSD). Twelve Sprague-Dawley rats were randomized to four groups: control (defect only), decellularized bone matrix (DECBM), and HGCS with and without multipotent adult progenitor cells (MAPCs). DECBM was prepared by removing all the cells using SDS and NH4OH. After 12 weeks, the CSD specimens were harvested to evaluate radiographical, histological, and histomorphometrical outcomes. The in vitro osteogenic effects of the materials were studied by focal adhesion, MTS, and alizarin red. Micro-CT analysis indicated that the DECBM and the HGCS scaffold groups developed greater radiopaque areas than the other groups. Bone regeneration, assessed using histological analysis and fluorochrome labeling, was the highest in the HGCS scaffold seeded with MAPCs. The DECBM group showed limited osteoinductivity, causing a gap between the implant and host tissue. The group grafted with HGCS+MAPCs showed twice as much new bone formation, indicating a role for MAPCs in effective bone regeneration. In conclusion, the novel HGCS scaffold could improve bone regeneration and is a promising carrier for stem cell-mediated bone regeneration. PMID:25054149

  4. Natural learning in NLDA networks.

    PubMed

    González, Ana; Dorronsoro, José R

    2007-07-01

    Non Linear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we will define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require the definition of an appropriate Riemannian structure on the NLDA weight space, we will follow a simpler procedure, based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X,W)] of a certain random vector Z, and then defining I = E[Z(X,W)Z(X,W)^T] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other square error functions; the NLDA J criterion, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than that of standard gradient descent, even when its higher computational cost per iteration is taken into account. While the faster convergence of natural MLP batch training can be also explained in terms of its relationship with the Gauss-Newton minimization method, this is not the case for NLDA training, as we will see analytically and numerically that the Hessian and information matrices are different.
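
    A small sketch of the update rule implied by these definitions, in Python with NumPy; the per-sample gradient contributions Z(x, W) are assumed to be supplied by the network's backward pass, and the learning rate and ridge term are illustrative:

      import numpy as np

      def natural_gradient_step(Z, w, lr=0.1, ridge=1e-6):
          """One natural-like step for a flattened weight vector w.

          Z : (n_samples, n_weights) array of per-sample contributions Z(x, W), so that
          grad J = E[Z] and I = E[Z Z^T] plays the role of the information matrix."""
          grad = Z.mean(axis=0)
          I = (Z.T @ Z) / Z.shape[0] + ridge * np.eye(Z.shape[1])
          return w - lr * np.linalg.solve(I, grad)    # W <- W - eta * I^{-1} grad J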

  5. Random discrete linear canonical transform.

    PubMed

    Wei, Deyun; Wang, Ruikui; Li, Yuan-Min

    2016-12-01

    Linear canonical transforms (LCTs) are a family of integral transforms with wide applications in optical, acoustical, electromagnetic, and other wave propagation problems. In this paper, we propose the random discrete linear canonical transform (RDLCT) by randomizing the kernel transform matrix of the discrete linear canonical transform (DLCT). The RDLCT inherits excellent mathematical properties from the DLCT along with some fantastic features of its own. It has a greater degree of randomness because of the randomization in terms of both eigenvectors and eigenvalues. Numerical simulations demonstrate that the RDLCT has an important feature that the magnitude and phase of its output are both random. As an important application of the RDLCT, it can be used for image encryption. The simulation results demonstrate that the proposed encryption method is a security-enhanced image encryption scheme.

  6. Acellular dermal matrix for mucogingival surgery: a meta-analysis.

    PubMed

    Gapski, Ricardo; Parks, Christopher Allen; Wang, Hom-Lay

    2005-11-01

    Many clinical studies revealed the effectiveness of acellular dermal matrix (ADM) in the treatment of mucogingival defects. The purpose of this meta-analysis was to compare the efficacy of ADM-based root coverage (RC) and ADM-based increase in keratinized tissues to other commonly used mucogingival surgeries. Meta-analysis was limited to randomized clinical trials (RCT). Articles from January 1, 1990 to October 2004 related to ADM were searched utilizing the MEDLINE database from the National Library of Medicine, the Cochrane Oral Health Group Specialized Trials Registry, and through hand searches of reviews and recent journals. Relevant studies were identified, ranked independently, and mean data from each were weighted accordingly. Selected outcomes were analyzed using a meta-analysis software program. The significant estimates of the treatment effects from different trials were assessed by means of Cochrane's test of heterogeneity. 1) Few RCT studies were found to compile the data. In summary, selection identified eight RCT that met the inclusion criteria. There were four studies comparing ADM versus a connective tissue graft for root coverage procedures, two studies comparing ADM versus coronally advanced flap (CAF) for root coverage procedures, and two studies comparing ADM to free gingival graft in augmentation of keratinized tissue. 2) There were no statistically significant differences between groups for any of the outcomes measured (recession coverage, keratinized tissue formation, probing depths, and clinical attachment levels). 3) The majority of the analyses demonstrated moderate to high levels of heterogeneity. 4) Considering the heterogeneity values found among the studies, certain trends could be found: a) three out of four studies favored the ADM-RC group for recession coverage; b) a connective tissue graft tended to increase keratinized tissue compared to ADM (0.52-mm difference; P = 0.11); c) there were trends of increased clinical attachment gains comparing ADM to CAF procedures (0.56-mm difference; P = 0.16). Differences in study design and lack of data precluded an adequate and complete pooling of data for a more comprehensive analysis. Therefore, considering the trends presented in this study, there is a need for further randomized clinical studies of ADM procedures in comparison to common mucogingival surgical procedures to confirm our findings. It is difficult to draw anything other than tentative conclusions from this meta-analysis of ADM for mucogingival surgery, primarily because of the weakness in the design and reporting of existing trials.

  7. Buses of Cuernavaca—an agent-based model for universal random matrix behavior minimizing mutual information

    NASA Astrophysics Data System (ADS)

    Warchoł, Piotr

    2018-06-01

    The public transportation system of Cuernavaca, Mexico, exhibits random matrix theory statistics. In particular, the fluctuation of times between the arrivals of buses at a given bus stop follows the Wigner surmise for the Gaussian unitary ensemble. To model this, we propose an agent-based approach in which each bus driver tries to optimize his arrival time to the next stop with respect to an estimated arrival time of his predecessor. We choose a particular form of the associated utility function and recover the appropriate distribution in numerical experiments for a certain value of the only parameter of the model. We then investigate whether this value of the parameter is otherwise distinguished within an information theoretic approach and give numerical evidence that indeed it is associated with a minimum of averaged pairwise mutual information.
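
    For reference, the GUE Wigner surmise against which such spacing histograms are compared can be evaluated with a few lines of Python (NumPy); the normalization to unit mean spacing is the usual convention and the variable names are illustrative:

      import numpy as np

      def wigner_surmise_gue(s):
          """GUE Wigner surmise for nearest-neighbour spacings at unit mean spacing."""
          return (32.0 / np.pi**2) * s**2 * np.exp(-4.0 * s**2 / np.pi)

      # Given simulated arrival times at one stop, compare the normalized gaps
      # with the surmise, e.g.:
      #   gaps = np.diff(np.sort(arrival_times)); gaps /= gaps.mean()
      #   then histogram `gaps` against wigner_surmise_gue(s).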

  8. Open Quantum Random Walks on the Half-Line: The Karlin-McGregor Formula, Path Counting and Foster's Theorem

    NASA Astrophysics Data System (ADS)

    Jacq, Thomas S.; Lardizabal, Carlos F.

    2017-11-01

    In this work we consider open quantum random walks on the non-negative integers. By considering orthogonal matrix polynomials we are able to describe transition probability expressions for classes of walks via a matrix version of the Karlin-McGregor formula. We focus on absorbing boundary conditions and, for simpler classes of examples, we consider path counting and the corresponding combinatorial tools. A non-commutative version of the gambler's ruin is studied by obtaining the probability of reaching a certain fortune and the mean time to reach a fortune or ruin in terms of generating functions. In the case of the Hadamard coin, a counting technique for boundary restricted paths in a lattice is also presented. We discuss an open quantum version of Foster's Theorem for the expected return time together with applications.

  9. Spectra of random networks in the weak clustering regime

    NASA Astrophysics Data System (ADS)

    Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen; Rodrigues, Francisco A.

    2018-03-01

    The asymptotic behavior of dynamical processes in networks can be expressed as a function of spectral properties of the corresponding adjacency and Laplacian matrices. Although many theoretical results are known for the spectra of traditional configuration models, networks generated through these models fail to describe many topological features of real-world networks, in particular non-null values of the clustering coefficient. Here we study effects of cycles of order three (triangles) in network spectra. By using recent advances in random matrix theory, we determine the spectral distribution of the network adjacency matrix as a function of the average number of triangles attached to each node for networks without modular structure and degree-degree correlations. Implications for network dynamics are discussed. Our findings can shed light on how particular kinds of subgraphs influence network dynamics.

  10. Towards random matrix model of breaking the time-reversal invariance of elastic waves in chaotic cavities by feedback

    NASA Astrophysics Data System (ADS)

    Antoniuk, Oleg; Sprik, Rudolf

    2010-03-01

    We developed a random matrix model to describe the statistics of resonances in an acoustic cavity with broken time-reversal invariance. Time-reversal invariance breaking is achieved by connecting an amplified feedback loop between two transducers on the surface of the cavity. The model is based on the approach of [1], which describes time-reversal properties of the cavity without a feedback loop. Statistics of eigenvalues (nearest neighbor resonance spacing distributions and spectral rigidity) have been calculated and compared to the statistics obtained from our experimental data. Experiments have been performed on an aluminum block of chaotic shape confining ultrasound waves. [1] Carsten Draeger and Mathias Fink, One-channel time-reversal in chaotic cavities: Theoretical limits, Journal of the Acoustical Society of America, vol. 105, Nr. 2, pp. 611-617 (1999)

  11. Gray level co-occurrence and random forest algorithm-based gender determination with maxillary tooth plaster images.

    PubMed

    Akkoç, Betül; Arslan, Ahmet; Kök, Hatice

    2016-06-01

    Gender is one of the intrinsic properties of identity; knowing it narrows the candidate cluster and enhances performance when an identification search is performed. Teeth have a durable and resistant structure and as such are important sources of identification in disasters (accident, fire, etc.). In this study, gender determination is accomplished using maxillary tooth plaster models of 40 people (20 males and 20 females). Images of the tooth plaster models are taken with a lighting set-up. A gray level co-occurrence matrix of the segmented image is formed, pertinent features are extracted from the matrix, and classification is performed via a Random Forest (RF) algorithm. Automatic gender determination achieved a 90% success rate, demonstrating an applicable system for determining gender from maxillary tooth plaster images. Copyright © 2016 Elsevier Ltd. All rights reserved.
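
    A hedged sketch of the pipeline (GLCM features followed by a random forest) in Python with scikit-image (>= 0.19) and scikit-learn; the chosen distances, angles, and texture properties are illustrative and not necessarily those used in the study:

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.ensemble import RandomForestClassifier

      def glcm_features(img_u8):
          """Texture features from a segmented 8-bit grayscale tooth-model image."""
          glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          props = ['contrast', 'homogeneity', 'energy', 'correlation']
          return np.array([graycoprops(glcm, p).mean() for p in props])

      # images: list of segmented grayscale arrays; labels: 0 = female, 1 = male
      # X = np.vstack([glcm_features(im) for im in images])
      # clf = RandomForestClassifier(n_estimators=200).fit(X, labels)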

  12. Effect of Oral Lipid Matrix Supplement on Fat Absorption in Cystic Fibrosis: A Randomized Placebo-Controlled Trial.

    PubMed

    Stallings, Virginia A; Schall, Joan I; Maqbool, Asim; Mascarenhas, Maria R; Alshaikh, Belal N; Dougherty, Kelly A; Hommel, Kevin; Ryan, Jamie; Elci, Okan U; Shaw, Walter A

    2016-12-01

    Pancreatic enzyme therapy does not normalize dietary fat absorption in patients with cystic fibrosis and pancreatic insufficiency. Efficacy of LYM-X-SORB (LXS), an easily absorbable lipid matrix that enhances fat absorption, was evaluated in a 12-month randomized, double-blinded, placebo-controlled trial with plasma fatty acids (FA) and coefficient of fat absorption (CFA) outcomes. A total of 110 subjects (age 10.4 ± 3.0 years) were randomized. Total FA increased with LXS at 3 and 12 months (+1.58, +1.14 mmol/L) and not with placebo (P = 0.046). With LXS, linoleic acid (LA) increased at 3 and 12 months (+298, +175 nmol/mL, P ≤ 0.046), with a 6% increase in CFA (P < 0.01). LA increase was significant in LXS versus placebo (445 vs 42 nmol/mL, P = 0.038). Increased FA and LA predicted increased body mass index Z scores. In summary, the LXS treatment improved dietary fat absorption compared with placebo as indicated by plasma FA and LA and was associated with better growth status.

  13. Comparison of cell behavior on pva/pva-gelatin electrospun nanofibers with random and aligned configuration

    NASA Astrophysics Data System (ADS)

    Huang, Chen-Yu; Hu, Keng-Hsiang; Wei, Zung-Hang

    2016-12-01

    The electrospinning technique is able to create nanofibers with a specific orientation. Poly(vinyl alcohol) (PVA) has good mechanical stability but poor cell adhesion due to its low affinity for proteins. In this paper, gelatin, an extracellular matrix-derived protein, is incorporated into the PVA solution to form an electrospun PVA-gelatin nanofiber membrane. Both randomly oriented and aligned nanofibers are used to investigate the topography-induced behavior of fibroblasts. Surface morphology of the fibers is studied by optical microscopy and scanning electron microscopy (SEM) coupled with image analysis. Functional group composition in PVA or PVA-gelatin is investigated by Fourier transform infrared (FTIR) spectroscopy. The morphological changes, surface coverage, viability and proliferation of fibroblasts influenced by PVA and PVA-gelatin nanofibers with randomly oriented or aligned configurations are systematically compared. Fibroblasts growing on PVA-gelatin fibers show significantly larger projected areas as compared with those cultivated on PVA fibers (p < 0.005). Cells on aligned PVA-gelatin fibers stretch out extensively, and their intracellular stress fibers pull on and deform the nucleus. Results suggest that, rather than the anisotropic topology of the scaffold triggering the preferential orientation of cells, the adhesion of the cell membrane to gelatin has a substantial influence on cellular behavior.

  14. Fractal planetary rings: Energy inequalities and random field model

    NASA Astrophysics Data System (ADS)

    Malyarenko, Anatoliy; Ostoja-Starzewski, Martin

    2017-12-01

    This study is motivated by a recent observation, based on photographs from the Cassini mission, that Saturn’s rings have a fractal structure in radial direction. Accordingly, two questions are considered: (1) What Newtonian mechanics argument in support of such a fractal structure of planetary rings is possible? (2) What kinematics model of such fractal rings can be formulated? Both challenges are based on taking planetary rings’ spatial structure as being statistically stationary in time and statistically isotropic in space, but statistically nonstationary in space. An answer to the first challenge is given through an energy analysis of circular rings having a self-generated, noninteger-dimensional mass distribution [V. E. Tarasov, Int. J. Mod Phys. B 19, 4103 (2005)]. The second issue is approached by taking the random field of angular velocity vector of a rotating particle of the ring as a random section of a special vector bundle. Using the theory of group representations, we prove that such a field is completely determined by a sequence of continuous positive-definite matrix-valued functions defined on the Cartesian square F2 of the radial cross-section F of the rings, where F is a fat fractal.

  15. Simplified equation for Young's modulus of CNT reinforced concrete

    NASA Astrophysics Data System (ADS)

    Chandran, RameshBabu; Gifty Honeyta A, Maria

    2017-12-01

    This research investigation focuses on finite element modeling of a carbon nanotube (CNT) reinforced concrete matrix for three grades of concrete, namely M40, M60 and M120. A representative volume element (RVE) approach was adopted, and a one-eighth model depicting the CNT reinforced concrete matrix was simulated using the FEA software ANSYS 17.2. Adopting a random orientation of CNTs, with nine fibre volume fractions from 0.1% to 0.9%, the finite element simulations replicated the CNT reinforced concrete matrix. Upon evaluation of the model, the longitudinal and transverse Young's moduli of elasticity of the CNT reinforced concrete were obtained. The graphical plots of the various fibre volume fractions against concrete grade revealed a simplified equation for estimating the Young's modulus. They also showed that the concrete grade does not have a significant impact on the CNT reinforced concrete matrix.

  16. Efficient two-dimensional compressive sensing in MIMO radar

    NASA Astrophysics Data System (ADS)

    Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad

    2017-12-01

    Compressive sensing (CS) has been a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing a two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals and then we propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using the gradient descent algorithm (2D-MMDGD) has much lower computational complexity compared to one-dimensional (1D) methods, while having better performance in comparison with conventional methods such as the Gaussian random measurement matrix.
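
    The general idea of coherence minimization by gradient descent can be sketched in a one-dimensional setting with NumPy; the Frobenius objective ||Phi^T Phi - I||_F^2, the learning rate, and the column renormalization are illustrative stand-ins for the paper's 2D design, not its actual algorithm:

      import numpy as np

      def design_measurement_matrix(m, n, n_iter=500, lr=1e-2, seed=0):
          """Reduce the mutual coherence of an m x n sensing matrix by gradient descent."""
          rng = np.random.default_rng(seed)
          Phi = rng.standard_normal((m, n))
          Phi /= np.linalg.norm(Phi, axis=0)
          for _ in range(n_iter):
              G = Phi.T @ Phi - np.eye(n)
              Phi -= lr * 4.0 * Phi @ G               # gradient of ||Phi^T Phi - I||_F^2
              Phi /= np.linalg.norm(Phi, axis=0)      # keep unit-norm columns
          off_diag = np.abs(Phi.T @ Phi) - np.eye(n)
          return Phi, off_diag.max()                  # matrix and its mutual coherence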

  17. Game of Life on the Equal Degree Random Lattice

    NASA Astrophysics Data System (ADS)

    Shao, Zhi-Gang; Chen, Tao

    2010-12-01

    An effective matrix method is used to build the equal degree random (EDR) lattice, and then the cellular automaton game of life on the EDR lattice is studied by Monte Carlo (MC) simulation. The standard mean field approximation (MFA) is applied and gives a density of live cells of ρ=0.37017, which is consistent with the result ρ=0.37±0.003 obtained by MC simulation.

  18. Consumption of high-fat meal containing cheese compared with vegan alternative lowers postprandial C-reactive protein in overweight and obese individuals with metabolic abnormalities: a randomized controlled cross-over study

    USDA-ARS?s Scientific Manuscript database

    Dietary recommendations suggest decreased consumption of SFA to minimize CVD risk; however, not all foods rich in SFA are equivalent. To evaluate the effects of SFA in a dairy food matrix, as Cheddar cheese, v. SFA from a vegan-alternative test meal on postprandial inflammatory markers, a randomized...

  19. Efficient, massively parallel eigenvalue computation

    NASA Technical Reports Server (NTRS)

    Huo, Yan; Schreiber, Robert

    1993-01-01

    In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.
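
    A serial, single-node sketch of the underlying computation in Python/NumPy (the report's focus is the massively parallel SIMD implementation, which is not reproduced here); the GOE-like normalization and matrix size are illustrative:

      import numpy as np

      n = 2000
      rng = np.random.default_rng(0)
      A = rng.standard_normal((n, n))
      H = (A + A.T) / np.sqrt(2 * n)        # real symmetric random "Hamiltonian"

      evals, evecs = np.linalg.eigh(H)      # dense symmetric eigensolver (LAPACK)
      # evals approximately follow the Wigner semicircle on [-2, 2]; evecs are the
      # single-electron eigenstates whose statistics one would then analyse.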

  20. Classification of acoustic emission signals using wavelets and Random Forests : Application to localized corrosion

    NASA Astrophysics Data System (ADS)

    Morizet, N.; Godin, N.; Tang, J.; Maillet, E.; Fregonese, M.; Normand, B.

    2016-03-01

    This paper aims to propose a novel approach to classify acoustic emission (AE) signals deriving from corrosion experiments, even if embedded into a noisy environment. To validate this new methodology, synthetic data are first used throughout an in-depth analysis, comparing Random Forests (RF) to the k-Nearest Neighbor (k-NN) algorithm. Moreover, a new evaluation tool called the alter-class matrix (ACM) is introduced to simulate different degrees of uncertainty on labeled data for supervised classification. Then, tests on real cases involving noise and crevice corrosion are conducted, by preprocessing the waveforms including wavelet denoising and extracting a rich set of features as input of the RF algorithm. To this end, a software called RF-CAM has been developed. Results show that this approach is very efficient on ground truth data and is also very promising on real data, especially for its reliability, performance and speed, which are serious criteria for the chemical industry.

  1. Hierarchical Bayesian spatial models for predicting multiple forest variables using waveform LiDAR, hyperspectral imagery, and large inventory datasets

    USGS Publications Warehouse

    Finley, Andrew O.; Banerjee, Sudipto; Cook, Bruce D.; Bradford, John B.

    2013-01-01

    In this paper we detail a multivariate spatial regression model that couples LiDAR, hyperspectral and forest inventory data to predict forest outcome variables at a high spatial resolution. The proposed model is used to analyze forest inventory data collected on the US Forest Service Penobscot Experimental Forest (PEF), ME, USA. In addition to helping meet the regression model's assumptions, results from the PEF analysis suggest that the addition of multivariate spatial random effects improves model fit and predictive ability, compared with two commonly applied modeling approaches. This improvement results from explicitly modeling the covariation among forest outcome variables and spatial dependence among observations through the random effects. Direct application of such multivariate models to even moderately large datasets is often computationally infeasible because of cubic order matrix algorithms involved in estimation. We apply a spatial dimension reduction technique to help overcome this computational hurdle without sacrificing richness in modeling.

  2. Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.

    2018-04-01

    A novel ship detection method which aims to make full use of both the spatial and spectral information from hyperspectral images is proposed. First, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared range is used to segment land and sea with the Otsu threshold segmentation method. Second, multiple features, including spectral and texture features, are extracted from the hyperspectral images. Principal components analysis (PCA) is used to extract spectral features, and the Grey Level Co-occurrence Matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing a single feature with different combinations of multiple features. Compared with the traditional single-feature method and a Support Vector Machine (SVM) model, the proposed method stably detects ships against complex backgrounds and effectively improves the detection accuracy.

  3. Random Initialisation of the Spectral Variables: an Alternate Approach for Initiating Multivariate Curve Resolution Alternating Least Square (MCR-ALS) Analysis.

    PubMed

    Kumar, Keshav

    2017-11-01

    Multivariate curve resolution alternating least square (MCR-ALS) analysis is the most commonly used curve resolution technique. The MCR-ALS model is fitted using the alternating least square (ALS) algorithm, which needs initialisation of either the contribution profiles or the spectral profiles of each factor. The contribution profiles can be initialised using evolving factor analysis; however, in principle, this approach requires that the data belong to a sequential process. The initialisation of the spectral profiles is usually carried out using a pure-variable approach such as the SIMPLISMA algorithm; this approach demands that each factor have pure variables in the data sets. Despite these limitations, the existing approaches have been quite successful for initiating the MCR-ALS analysis. The present work proposes an alternative approach for the initialisation of the spectral variables by generating random values within the limits spanned by the maximum and minimum of each spectral variable of the data set. The proposed approach does not require that there be pure variables for each component of the multicomponent system or that the concentration direction follow a sequential process. The proposed approach is successfully validated using excitation-emission matrix fluorescence data sets acquired for certain fluorophores with significant spectral overlap. The calculated contribution and spectral profiles of these fluorophores are found to correlate well with the experimental results. In summary, the present work proposes an alternative way to initiate the MCR-ALS analysis.
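
    A minimal Python/NumPy sketch of the proposed initialisation, with a crude ALS refinement pass included only to show where the initial spectra enter; the clipping used to enforce non-negativity is a simplification of the constrained least squares normally used in MCR-ALS:

      import numpy as np

      def random_spectral_init(D, n_components, seed=0):
          """Spectral profiles S drawn uniformly between each variable's min and max in D."""
          rng = np.random.default_rng(seed)
          lo, hi = D.min(axis=0), D.max(axis=0)
          return rng.uniform(lo, hi, size=(n_components, D.shape[1]))

      def als_step(D, S):
          """One alternating least squares pass (non-negativity by clipping, illustrative)."""
          C = np.clip(D @ np.linalg.pinv(S), 0, None)     # contributions given spectra
          S = np.clip(np.linalg.pinv(C) @ D, 0, None)     # spectra given contributions
          return C, S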

  4. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  5. Micromechanics and effective elastoplastic behavior of two-phase metal matrix composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ju, J.W.; Chen, T.M.

    A micromechanical framework is presented to predict effective (overall) elasto-(visco-)plastic behavior of two-phase particle-reinforced metal matrix composites (PRMMC). In particular, the inclusion phase (particle) is assumed to be elastic and the matrix material is elasto-(visco-)plastic. Emanating from Ju and Chen's (1994a,b) work on effective elastic properties of composites containing many randomly dispersed inhomogeneities, effective elastoplastic deformations and responses of PRMMC are estimated by means of the "effective yield criterion" derived micromechanically by considering effects due to elastic particles embedded in the elastoplastic matrix. The matrix material is elastic or plastic, depending on local stress and deformation, and obeys a general plastic flow rule and hardening law. Arbitrary (general) loadings and unloadings are permitted in the framework through the elastic predictor-plastic corrector two-step operator splitting methodology. The proposed combined micromechanical and computational approach allows one to estimate overall elastoplastic responses of PRMMCs by accounting for the microstructural information (such as the spatial distribution and micro-geometry of particles), elastic properties of constituent phases, and the plastic behavior of the matrix-only materials.

  6. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.

    PubMed

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-09-01

    We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use a one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.

  7. Fast noise level estimation algorithm based on principal component analysis transform and nonlinear rectification

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling

    2018-01-01

    We proposed a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extracted a number of raw patches from a given noisy image and took the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimate of the noise level. Next, the final estimate was directly obtained with a nonlinear mapping (rectification) function that was trained on some representative noisy images corrupted with different known noise levels. Compared with state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
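
    A sketch of the first (PCA) stage in Python with NumPy; the patch size and number of sampled patches are assumptions, and the trained nonlinear rectification mapping of the second stage is deliberately omitted:

      import numpy as np

      def estimate_noise_level(img, patch=7, n_patches=5000, seed=0):
          """Preliminary estimate: smallest eigenvalue of the covariance of random patches."""
          rng = np.random.default_rng(seed)
          H, W = img.shape
          ys = rng.integers(0, H - patch, n_patches)
          xs = rng.integers(0, W - patch, n_patches)
          P = np.stack([img[y:y + patch, x:x + patch].ravel() for y, x in zip(ys, xs)])
          cov = np.cov(P, rowvar=False)
          sigma2 = np.linalg.eigvalsh(cov)[0]       # smallest eigenvalue ~ noise variance
          return np.sqrt(max(sigma2, 0.0))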

  8. Guiding the orientation of smooth muscle cells on random and aligned polyurethane/collagen nanofibers.

    PubMed

    Jia, Lin; Prabhakaran, Molamma P; Qin, Xiaohong; Ramakrishna, Seeram

    2014-09-01

    Fabricating scaffolds that can simulate the architecture and functionality of native extracellular matrix is a huge challenge in vascular tissue engineering. Various kinds of materials are engineered via nano-technological approaches to meet the current challenges in vascular tissue regeneration. During this study, nanofibers from pure polyurethane and hybrid polyurethane/collagen in two different morphologies (random and aligned) and in three different ratios of polyurethane:collagen (75:25; 50:50; 25:75) are fabricated by electrospinning. The fiber diameters of the nanofibrous scaffolds are in the range of 174-453 nm and 145-419 nm for random and aligned fibers, respectively, closely mimicking the nanoscale dimensions of the native extracellular matrix. The aligned polyurethane/collagen nanofibers exhibited anisotropic wettability, with mechanical properties suitable for regeneration of the artery. After 12 days of culturing human aortic smooth muscle cells on the different scaffolds, the proliferation of smooth muscle cells on hybrid polyurethane/collagen (3:1) nanofibers was 173% and 212% higher than on pure polyurethane scaffolds for random and aligned scaffolds, respectively. The results of cell morphology and protein staining showed that the aligned polyurethane/collagen (3:1) scaffold promotes smooth muscle cell alignment through contact guidance, while the random polyurethane/collagen (3:1) scaffold also guided cell orientation, most probably due to its inherent biochemical composition. Our studies demonstrate the potential of aligned and random polyurethane/collagen (3:1) as promising substrates for vascular tissue regeneration. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  9. The Development and Application of Random Matrix Theory in Adaptive Signal Processing in the Sample Deficient Regime

    DTIC Science & Technology

    2014-09-01

    optimal diagonal loading which minimizes the MSE. The behavior of optimal diagonal loading when the arrival process is composed of plane waves embedded ... observation vectors. The examples of the ensemble correlation matrix corresponding to the input process consisting of a single or multiple plane waves ... Y*_ij is the complex conjugate of Y_ij. This result is used in order to evaluate the expectations of different quadratic forms. The Poincaré-Nash

  10. Wavefront reconstruction algorithm based on Legendre polynomials for radial shearing interferometry over a square area and error analysis.

    PubMed

    Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai

    2015-08-10

    Based on Legendre polynomial expressions and their properties, this article proposes a new approach to reconstruct the distorted wavefront under test of a laser beam over a square area from the phase difference data obtained by an RSI system. Simulation and experimental results verify the reliability of the method proposed in this paper. The formula for the error propagation coefficients is deduced for the case where the phase difference data of the overlapping area contain random noise. The matrix T, which can be used to evaluate the impact of higher-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing, is proposed, and the magnitude of the impact can be estimated by calculating the Frobenius norm of T. In addition, the relationship between ratio shear, sampling points, number of polynomial terms and noise propagation coefficients, and the relationship between ratio shear, sampling points and norms of the T matrix are both analyzed. These results can provide theoretical reference and guidance for the optimized design of radial shearing interferometry systems.

  11. Dynamic heterogeneities and non-Gaussian behavior in two-dimensional randomly confined colloidal fluids

    NASA Astrophysics Data System (ADS)

    Schnyder, Simon K.; Skinner, Thomas O. E.; Thorneywork, Alice L.; Aarts, Dirk G. A. L.; Horbach, Jürgen; Dullens, Roel P. A.

    2017-03-01

    A binary mixture of superparamagnetic colloidal particles is confined between glass plates such that the large particles become fixed and provide a two-dimensional disordered matrix for the still mobile small particles, which form a fluid. By varying fluid and matrix area fractions and tuning the interactions between the superparamagnetic particles via an external magnetic field, different regions of the state diagram are explored. The mobile particles exhibit delocalized dynamics at small matrix area fractions and localized motion at high matrix area fractions, and the localization transition is rounded by the soft interactions [T. O. E. Skinner et al., Phys. Rev. Lett. 111, 128301 (2013), 10.1103/PhysRevLett.111.128301]. Expanding on previous work, we find the dynamics of the tracers to be strongly heterogeneous and show that molecular dynamics simulations of an ideal gas confined in a fixed matrix exhibit similar behavior. The simulations show how these soft interactions make the dynamics more heterogeneous compared to the disordered Lorentz gas and lead to strong non-Gaussian fluctuations.

  12. Power Spectrum of Long Eigenlevel Sequences in Quantum Chaotic Systems.

    PubMed

    Riser, Roman; Osipov, Vladimir Al; Kanzieper, Eugene

    2017-05-19

    We present a nonperturbative analysis of the power spectrum of energy level fluctuations in fully chaotic quantum structures. Focusing on systems with broken time-reversal symmetry, we employ a finite-N random matrix theory to derive an exact multidimensional integral representation of the power spectrum. The N→∞ limit of the exact solution furnishes the main result of this study-a universal, parameter-free prediction for the power spectrum expressed in terms of a fifth Painlevé transcendent. Extensive numerics lends further support to our theory which, as discussed at length, invalidates a traditional assumption that the power spectrum is merely determined by the spectral form factor of a quantum system.

  13. Asymptotic Expansion of β Matrix Models in the One-cut Regime

    NASA Astrophysics Data System (ADS)

    Borot, G.; Guionnet, A.

    2013-01-01

    We prove the existence of a 1/N expansion to all orders in β matrix models with a confining, off-critical potential corresponding to an equilibrium measure with a connected support. Thus, the coefficients of the expansion can be obtained recursively by the "topological recursion" derived in Chekhov and Eynard (JHEP 0612:026, 2006). Our method relies on the combination of a priori bounds on the correlators and the study of Schwinger-Dyson equations, thanks to the use of classical complex analysis techniques. These a priori bounds can be derived following (Boutet de Monvel et al. in J Stat Phys 79(3-4):585-611, 1995; Johansson in Duke Math J 91(1):151-204, 1998; Kriecherbauer and Shcherbina in Fluctuations of eigenvalues of matrix models and their applications, 2010) or for strictly convex potentials by using concentration of measure (Anderson et al. in An introduction to random matrices, Sect. 2.3, Cambridge University Press, Cambridge, 2010). Doing so, we extend the strategy of Guionnet and Maurel-Segala (Ann Probab 35:2160-2212, 2007), from the Hermitian models (β = 2) and perturbative potentials, to general β models. The existence of the first correction in 1/N was considered in Johansson (1998) and more recently in Kriecherbauer and Shcherbina (2010). Here, by taking similar hypotheses, we extend the result to all orders in 1/N.

  14. Implementation of a quantum random number generator based on the optimal clustering of photocounts

    NASA Astrophysics Data System (ADS)

    Balygin, K. A.; Zaitsev, V. I.; Klimov, A. N.; Kulik, S. P.; Molotkov, S. N.

    2017-10-01

    To implement quantum random number generators, it is fundamentally important to have a mathematically provable and experimentally testable process of measurements of a system from which an initial random sequence is generated. This ensures that the randomness indeed has a quantum nature. A quantum random number generator has been implemented with the use of the detection of quasi-single-photon radiation by a silicon photomultiplier (SiPM) matrix, which makes it possible to reliably reach the Poisson statistics of photocounts. The choice and use of the optimal clustering of photocounts for the initial sequence of photodetection events, together with a method of extraction of a random sequence of 0's and 1's that is polynomial in the length of the sequence, have made it possible to reach an output rate of 64 Mbit/s for the provably random output sequence.
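
    The paper's optimal clustering and extraction procedure is not reproduced here, but the general idea of turning a biased physical count sequence into unbiased bits can be illustrated with a simple von Neumann extractor in Python/NumPy, applied to a simulated Poisson photocount stream (all parameters are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)                 # stand-in for the SiPM photocount source
      counts = rng.poisson(lam=3.0, size=100_000)    # Poisson photocounts per time bin

      raw_bits = counts & 1                          # biased raw sequence (parity of counts)

      # Von Neumann extractor: take non-overlapping pairs, map 01 -> 0 and 10 -> 1,
      # discard 00 and 11; the output is unbiased if the pairs are i.i.d.
      pairs = raw_bits[: len(raw_bits) // 2 * 2].reshape(-1, 2)
      keep = pairs[:, 0] != pairs[:, 1]
      bits = pairs[keep, 0]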

  15. Joint Procrustes Analysis for Simultaneous Nonsingular Transformation of Component Score and Loading Matrices

    ERIC Educational Resources Information Center

    Adachi, Kohei

    2009-01-01

    In component analysis solutions, post-multiplying a component score matrix by a nonsingular matrix can be compensated by applying its inverse to the corresponding loading matrix. To eliminate this indeterminacy on nonsingular transformation, we propose Joint Procrustes Analysis (JPA) in which component score and loading matrices are simultaneously…

  16. An improved label propagation algorithm based on node importance and random walk for community detection

    NASA Astrophysics Data System (ADS)

    Ma, Tianren; Xia, Zhengyou

    2017-05-01

    Currently, with the rapid development of information technology, electronic media for social communication are becoming more and more popular. Discovery of communities is a very effective way to understand the properties of complex networks. However, traditional community detection algorithms consider only the structural characteristics of a social organization, wasting additional information about nodes and edges. Moreover, these algorithms do not consider each node on its own merits. The label propagation algorithm (LPA) is a near-linear-time algorithm which aims to find communities in a network. It attracts many scholars owing to its high efficiency. In recent years, many improved algorithms based on LPA have been put forward. In this paper, an improved LPA based on random walk and node importance (NILPA) is proposed. First, a list of node importance values is computed. The nodes in the network are sorted in descending order of importance. On the basis of a random walk, a matrix is constructed to measure the similarity of nodes, which avoids the random choice in the LPA. Second, a new metric IAS (importance and similarity) is calculated from node importance and the similarity matrix, which is used to avoid the random selection in the original LPA and improve the algorithm's stability. Finally, tests on real-world and synthetic networks are presented. The results show that this algorithm performs better than existing methods in finding community structure.
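
    A rough Python sketch (using NetworkX) of label propagation in which node importance replaces the random choices of plain LPA; degree is used as the importance proxy and the voting rule is a simplification of the paper's IAS metric, so this is an illustrative stand-in rather than NILPA itself:

      import networkx as nx

      def importance_guided_lpa(G, n_iter=20):
          """Label propagation with importance-weighted voting instead of random choices."""
          importance = dict(G.degree())               # simple importance proxy
          labels = {v: v for v in G.nodes()}          # each node starts in its own community
          order = sorted(G.nodes(), key=importance.get, reverse=True)
          for _ in range(n_iter):
              for v in order:                         # visit important nodes first
                  scores = {}
                  for u in G.neighbors(v):
                      scores[labels[u]] = scores.get(labels[u], 0) + importance[u]
                  if scores:
                      labels[v] = max(scores, key=scores.get)
          return labels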

  17. Blood cells transcriptomics as source of potential biomarkers of articular health improvement: effects of oral intake of a rooster combs extract rich in hyaluronic acid.

    PubMed

    Sánchez, Juana; Bonet, M Luisa; Keijer, Jaap; van Schothorst, Evert M; Möller, Ingrid; Chetrit, Carles; Martinez-Puig, Daniel; Palou, Andreu

    2014-09-01

    The aim of the study was to explore peripheral blood gene expression as a source of biomarkers of joint health improvement related to glycosaminoglycan (GAG) intake in humans. Healthy individuals with joint discomfort were enrolled in a randomized, double-blind, placebo-controlled intervention study in humans. Subjects ate control yoghurt or yoghurt supplemented with a recently authorized novel food in Europe containing hyaluronic acid (65 %) from rooster comb (Mobilee™ as commercial name) for 90 days. Effects on functional quality-of-life parameters related to joint health were assessed. Whole-genome microarray analysis of peripheral blood samples from a subset of 20 subjects (10 placebo and 10 supplemented) collected pre- and post-intervention was performed. Mobilee™ supplementation reduced articular pain intensity and synovial effusion and improved knee muscular strength indicators as compared to placebo. About 157 coding genes were differentially expressed in blood cells between supplemented and placebo groups post-intervention, but not pre-intervention (p < 0.05; fold change ≥1.2). Among them, a reduced gene expression of glucuronidase-beta (GUSB), matrix metallopeptidase 23B (MMP23B), xylosyltransferase II (XYLT2), and heparan sulfate 6-O-sulfotransferase 1 (HS6ST1) was found in the supplemented group. Correlation analysis indicated a direct relationship between blood cell gene expression of MMP23B, involved in the breakdown of the extracellular matrix, and pain intensity, and an inverse relationship between blood cell gene expression of HS6ST1, responsible for 6-O-sulfation of heparan sulfate, and indicators of knee muscular strength. Expression levels of specific genes in blood cells, in particular genes related to GAG metabolism and extracellular matrix dynamics, are potential biomarkers of beneficial effects on articular health.

  18. Comparison of three controllers applied to helicopter vibration

    NASA Technical Reports Server (NTRS)

    Leyland, Jane A.

    1992-01-01

    A comparison was made of the applicability and suitability of the deterministic controller, the cautious controller, and the dual controller for the reduction of helicopter vibration by using higher harmonic blade pitch control. A randomly generated linear plant model was assumed and the performance index was defined to be a quadratic output metric of this linear plant. A computer code, designed to check out and evaluate these controllers, was implemented and used to accomplish this comparison. The effects of random measurement noise, the initial estimate of the plant matrix, and the plant matrix propagation rate were determined for each of the controllers. With few exceptions, the deterministic controller yielded the greatest vibration reduction (as characterized by the quadratic output metric) and operated with the greatest reliability. Theoretical limitations of these controllers were defined and appropriate candidate alternative methods, including one method particularly suitable to the cockpit, were identified.

  19. Almost sure convergence in quantum spin glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buzinski, David, E-mail: dab197@case.edu; Meckes, Elizabeth, E-mail: elizabeth.meckes@case.edu

    2015-12-15

    Recently, Keating, Linden, and Wells [Markov Processes Relat. Fields 21(3), 537-555 (2015)] showed that the density of states measure of a nearest-neighbor quantum spin glass model is approximately Gaussian when the number of particles is large. The density of states measure is the ensemble average of the empirical spectral measure of a random matrix; in this paper, we use concentration of measure and entropy techniques together with the result of Keating, Linden, and Wells to show that in fact the empirical spectral measure of such a random matrix is almost surely approximately Gaussian itself, with no ensemble averaging. We also extend this result to a spherical quantum spin glass model and to the more general coupling geometries investigated by Erdős and Schröder [Math. Phys., Anal. Geom. 17(3-4), 441–464 (2014)].
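
    A small-scale numerical illustration of the phenomenon described: the single-realization spectral density of a random nearest-neighbour spin chain already looks approximately Gaussian. The chain length, coupling pattern, and moments checked below are illustrative choices, not the precise models treated in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])

    def bond_op(op, i, n):
        # op (4x4) acting on sites i, i+1 of an n-site chain, identity elsewhere.
        return np.kron(np.kron(np.eye(2 ** i), op), np.eye(2 ** (n - i - 2)))

    n = 10                       # 2**10 = 1024-dimensional Hilbert space
    dim = 2 ** n
    H = np.zeros((dim, dim))
    for i in range(n - 1):
        # Independent Gaussian couplings on each bond: a generic toy quantum
        # spin glass, not the exact nearest-neighbour model of the paper.
        H += rng.normal() * bond_op(np.kron(sx, sx), i, n)
        H += rng.normal() * bond_op(np.kron(sz, sz), i, n)

    eigs = np.linalg.eigvalsh(H)
    eigs = (eigs - eigs.mean()) / eigs.std()
    # For a Gaussian density of states the standardized skewness and kurtosis
    # of this single realization should be close to 0 and 3, respectively.
    print("skewness:", round(float(np.mean(eigs ** 3)), 3),
          " kurtosis:", round(float(np.mean(eigs ** 4)), 3))
    ```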

  20. Lossy chaotic electromagnetic reverberation chambers: Universal statistical behavior of the vectorial field

    NASA Astrophysics Data System (ADS)

    Gros, J.-B.; Kuhl, U.; Legrand, O.; Mortessagne, F.

    2016-03-01

    The effective Hamiltonian formalism is extended to vectorial electromagnetic waves in order to describe statistical properties of the field in reverberation chambers. The latter are commonly used in electromagnetic compatibility tests. As a first step, the distribution of wave intensities in chaotic systems with varying opening in the weak coupling limit for scalar quantum waves is derived by means of random matrix theory. In this limit the only parameters are the modal overlap and the number of open channels. Using the extended effective Hamiltonian, we describe the intensity statistics of the vectorial electromagnetic eigenmodes of lossy reverberation chambers. Finally, the typical quantity of interest in such chambers, namely, the distribution of the electromagnetic response, is discussed. By determining the distribution of the phase rigidity, describing the coupling to the environment, using random matrix numerical data, we find good agreement between the theoretical prediction and numerical calculations of the response.

  1. LC-MS/MS signal suppression effects in the analysis of pesticides in complex environmental matrices.

    PubMed

    Choi, B K; Hercules, D M; Gusev, A I

    2001-02-01

    The application of LC separation and mobile phase additives in addressing LC-MS/MS matrix signal suppression effects for the analysis of pesticides in a complex environmental matrix was investigated. It was shown that signal suppression is most significant for analytes eluting early in the LC-MS analysis. Introduction of different buffers (e.g. ammonium formate, ammonium hydroxide, formic acid) into the LC mobile phase was effective in improving signal correlation between the matrix and standard samples. The signal improvement is dependent on buffer concentration as well as LC separation of the matrix components. The application of LC separation alone was not effective in addressing suppression effects when characterizing complex matrix samples. Overloading of the LC column by matrix components was found to significantly contribute to analyte-matrix co-elution and suppression of signal. This signal suppression effect can be efficiently compensated by 2D LC (LC-LC) separation techniques. The effectiveness of buffers and LC separation in improving signal correlation between standard and matrix samples is discussed.

  2. Entanglement dynamics in critical random quantum Ising chain with perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yichen, E-mail: ychuang@caltech.edu

    We simulate the entanglement dynamics in a critical random quantum Ising chain with generic perturbations using the time-evolving block decimation algorithm. Starting from a product state, we observe super-logarithmic growth of entanglement entropy with time. The numerical result is consistent with the analytical prediction of Vosk and Altman using a real-space renormalization group technique. - Highlights: • We study the dynamical quantum phase transition between many-body localized phases. • We simulate the dynamics of a very long random spin chain with matrix product states. • We observe numerically super-logarithmic growth of entanglement entropy with time.

  3. Genetic diversity of popcorn genotypes using molecular analysis.

    PubMed

    Resh, F S; Scapim, C A; Mangolin, C A; Machado, M F P S; do Amaral, A T; Ramos, H C C; Vivas, M

    2015-08-19

    In this study, we analyzed dominant molecular markers to estimate the genetic divergence of 26 popcorn genotypes and evaluate whether using various dissimilarity coefficients with these dominant markers influences the results of cluster analysis. Fifteen random amplification of polymorphic DNA primers produced 157 amplified fragments, of which 65 were monomorphic and 92 were polymorphic. To calculate the genetic distances among the 26 genotypes, the complements of the Jaccard, Dice, and Rogers and Tanimoto similarity coefficients were used. A matrix of Dij values (dissimilarity matrix) was constructed, from which the genetic distances among genotypes were represented in a more simplified manner as a dendrogram generated using the unweighted pair-group method with arithmetic average. Clusters determined by molecular analysis generally did not group material from the same parental origin together. The largest genetic distance was between varieties 17 (UNB-2) and 18 (PA-091). In the identification of genotypes with the smallest genetic distance, the 3 coefficients showed no agreement. The 3 dissimilarity coefficients showed no major differences among their grouping patterns because agreement in determining the genotypes with large, medium, and small genetic distances was high. The largest genetic distances were observed for the Rogers and Tanimoto dissimilarity coefficient (0.74), followed by the Jaccard coefficient (0.65) and the Dice coefficient (0.48). The 3 coefficients showed similar estimations for the cophenetic correlation coefficient. Correlations among the matrices generated using the 3 coefficients were positive and had high magnitudes, reflecting strong agreement among the results obtained using the 3 evaluated dissimilarity coefficients.
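
    A compact sketch of the computational side of such an analysis: a binary marker matrix, the three dissimilarity coefficients, UPGMA linkage, and the cophenetic correlation coefficient. The marker matrix below is random stand-in data, not the actual RAPD scores.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, cophenet

    rng = np.random.default_rng(3)

    # Illustrative 0/1 marker matrix: 26 genotypes x 157 RAPD fragments.
    markers = rng.integers(0, 2, size=(26, 157)).astype(bool)

    for metric in ("jaccard", "dice", "rogerstanimoto"):
        d = pdist(markers, metric=metric)          # dissimilarity (1 - similarity)
        Z = linkage(d, method="average")           # UPGMA dendrogram
        ccc, _ = cophenet(Z, d)                    # cophenetic correlation coefficient
        print(f"{metric:15s} max distance {d.max():.2f}  cophenetic r {ccc:.3f}")
    ```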

  4. Examining significant factors in micro and small enterprises performance: case study in Amhara region, Ethiopia

    NASA Astrophysics Data System (ADS)

    Cherkos, Tomas; Zegeye, Muluken; Tilahun, Shimelis; Avvari, Muralidhar

    2017-07-01

    Furniture manufacturing micro and small enterprises (MSEs) are confronted with several factors that affect their performance. Some enterprises fail to sustain themselves, others remain for long periods without transforming, and most produce similar, non-standard products. The main aim of this manuscript is to improve the performance and contribution of MSEs by analyzing the impact of significant internal and external factors. Data were collected via a questionnaire, group discussions with experts, and interviews. Eight randomly selected representative main cities of the Amhara region, with 120 furniture manufacturing enterprises, are considered. Data analysis and presentation were made using SPSS tools (correlation, proximity, and t test) and an impact-effort analysis matrix tool. The correlation analysis shows that politico-legal with infrastructure, leadership with entrepreneurship skills, and finance and credit with marketing are the factor pairs with the highest correlations, with Pearson correlation values of r = 0.988, 0.983, and 0.939, respectively. The study finds that the most critical factors faced by MSEs are work premises, access to finance, infrastructure, entrepreneurship, and business managerial problems. The impact of these factors is found to be high and is confirmed by the 50% drop-out rate in 2014/2015. Furthermore, daily work-time losses of more than 25% due to power interruptions and work premises problems of around 65% challenged the MSEs. Further, an impact-effort matrix was developed to help the MSEs prioritize the affecting factors.

  6. Quantitative image analysis for investigating cell-matrix interactions

    NASA Astrophysics Data System (ADS)

    Burkel, Brian; Notbohm, Jacob

    2017-07-01

    The extracellular matrix provides both chemical and physical cues that control cellular processes such as migration, division, differentiation, and cancer progression. Cells can mechanically alter the matrix by applying forces that result in matrix displacements, which in turn may localize to form dense bands along which cells may migrate. To quantify the displacements, we use confocal microscopy and fluorescent labeling to acquire high-contrast images of the fibrous material. Using a technique for quantitative image analysis called digital volume correlation, we then compute the matrix displacements. Our experimental technology offers a means to quantify matrix mechanics and cell-matrix interactions. We are now using these experimental tools to modulate mechanical properties of the matrix to study cell contraction and migration.
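
    A drastically simplified 2-D analogue of the displacement-measurement step (real digital volume correlation operates on 3-D confocal stacks, subvolume by subvolume, with subpixel refinement); the image, shift, and correlation approach below are illustrative only.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(9)

    # Synthetic speckle image and a copy displaced by a known amount.
    ref = rng.normal(size=(128, 128))
    true_shift = (3, -5)
    moved = np.roll(ref, true_shift, axis=(0, 1))

    # Cross-correlation computed as convolution with the flipped template;
    # the location of the peak gives the integer-pixel displacement.
    xc = fftconvolve(moved, ref[::-1, ::-1], mode="full")
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    recovered = (peak[0] - (ref.shape[0] - 1), peak[1] - (ref.shape[1] - 1))
    print("true shift:", true_shift, "recovered:", recovered)
    ```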

  7. An information hidden model holding cover distributions

    NASA Astrophysics Data System (ADS)

    Fu, Min; Cai, Chao; Dai, Zuxu

    2018-03-01

    The goal of steganography is to embed secret data into a cover so that no one apart from the sender and the intended recipients can find the secret data. Usually, the way the cover is changed is decided by a hiding function, and no existing model could be used to find an optimal function that greatly reduces the distortion suffered by the cover. This paper considers the cover carrying the secret message as a random Markov chain, takes advantage of the deterministic relation between the initial distribution and the transition matrix of the Markov chain, and uses the transition matrix as a constraint to decrease the statistical distortion suffered by the cover in the process of information hiding. Furthermore, a hiding function is designed, and the transition matrix from the original cover to the stego cover is also presented. Experimental results show that the new model preserves consistent statistical characteristics between the original and stego covers.

  8. Partial restoration of isospin symmetry for neutrinoless double β decay in the deformed nuclear system of 150Nd

    NASA Astrophysics Data System (ADS)

    Fang, Dong-Liang; Faessler, Amand; Simkovic, Fedor

    2015-10-01

    In this work, we calculate the matrix elements for the 0νββ decay of 150Nd using the deformed quasiparticle random-phase approximation (pn-QRPA) method. We adopted the approach introduced by Rodin and Faessler [Phys. Rev. C 84, 014322 (2011), 10.1103/PhysRevC.84.014322] and Simkovic et al. [Phys. Rev. C 87, 045501 (2013), 10.1103/PhysRevC.87.045501] to restore the isospin symmetry by enforcing M_F^{2ν} = 0. We found that with this restoration, the Fermi matrix elements are reduced in the strongly deformed 150Nd by about 15 to 20%, while the more important Gamow-Teller matrix elements remain the same. The results of an enlarged model space are also presented. This enlargement increases the total (Fermi plus Gamow-Teller) matrix elements by less than 10%.

  9. Unsupervised Bayesian linear unmixing of gene expression microarrays.

    PubMed

    Bazot, Cécile; Dobigeon, Nicolas; Tourneret, Jean-Yves; Zaas, Aimee K; Ginsburg, Geoffrey S; Hero, Alfred O

    2013-03-19

    This paper introduces a new constrained model and the corresponding algorithm, called unsupervised Bayesian linear unmixing (uBLU), to identify biological signatures from high dimensional assays like gene expression microarrays. The basis for uBLU is a Bayesian model for the data samples which are represented as an additive mixture of random positive gene signatures, called factors, with random positive mixing coefficients, called factor scores, that specify the relative contribution of each signature to a specific sample. The particularity of the proposed method is that uBLU constrains the factor loadings to be non-negative and the factor scores to be probability distributions over the factors. Furthermore, it also provides estimates of the number of factors. A Gibbs sampling strategy is adopted here to generate random samples according to the posterior distribution of the factors, factor scores, and number of factors. These samples are then used to estimate all the unknown parameters. Firstly, the proposed uBLU method is applied to several simulated datasets with known ground truth and compared with previous factor decomposition methods, such as principal component analysis (PCA), non negative matrix factorization (NMF), Bayesian factor regression modeling (BFRM), and the gradient-based algorithm for general matrix factorization (GB-GMF). Secondly, we illustrate the application of uBLU on a real time-evolving gene expression dataset from a recent viral challenge study in which individuals have been inoculated with influenza A/H3N2/Wisconsin. We show that the uBLU method significantly outperforms the other methods on the simulated and real data sets considered here. The results obtained on synthetic and real data illustrate the accuracy of the proposed uBLU method when compared to other factor decomposition methods from the literature (PCA, NMF, BFRM, and GB-GMF). The uBLU method identifies an inflammatory component closely associated with clinical symptom scores collected during the study. Using a constrained model allows recovery of all the inflammatory genes in a single factor.
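
    For orientation, a sketch of the kind of baseline uBLU is compared against (plain NMF on a synthetic additive mixture). Unlike uBLU, this neither constrains the scores to be probability vectors during the fit nor estimates the number of factors; all data below are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(4)

    # Synthetic stand-in for an expression matrix: samples are additive mixtures
    # of a few non-negative "signatures" with mixing weights on the simplex.
    n_samples, n_genes, n_factors = 60, 500, 3
    signatures = rng.gamma(2.0, 1.0, size=(n_factors, n_genes))
    scores = rng.dirichlet(np.ones(n_factors), size=n_samples)
    X = scores @ signatures + rng.gamma(0.5, 0.05, size=(n_samples, n_genes))

    # NMF baseline; the rows of W are renormalized afterwards just so that they
    # can be compared with simplex-constrained factor scores.
    model = NMF(n_components=n_factors, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(X)                      # estimated factor scores
    W_simplex = W / W.sum(axis=1, keepdims=True)    # project rows onto the simplex
    print("reconstruction error:", round(model.reconstruction_err_, 2))
    ```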

  10. Prospective randomized comparison of scar appearances between cograft of acellular dermal matrix with autologous split-thickness skin and autologous split-thickness skin graft alone for full-thickness skin defects of the extremities.

    PubMed

    Yi, Ju Won; Kim, Jae Kwang

    2015-03-01

    The purpose of this study was to evaluate the clinical outcomes of cografting of acellular dermal matrix with autologous split-thickness skin and autologous split-thickness skin graft alone for full-thickness skin defects on the extremities. In this prospective randomized study, 19 consecutive patients with full-thickness skin defects on the extremities following trauma underwent grafting using either cograft of acellular dermal matrix with autologous split-thickness skin graft (nine patients, group A) or autologous split-thickness skin graft alone (10 patients, group B) from June of 2011 to December of 2012. The postoperative evaluations included observation of complications (including graft necrosis, graft detachment, or seroma formation) and Vancouver Scar Scale score. No statistically significant difference was found regarding complications, including graft necrosis, graft detachment, or seroma formation. At week 8, significantly lower Vancouver Scar Scale scores for vascularity, pliability, height, and total score were found in group A compared with group B. At week 12, lower scores for pliability and height and total scores were identified in group A compared with group B. For cases with traumatic full-thickness skin defects on the extremities, a statistically significant better result was achieved with cograft of acellular dermal matrix with autologous split-thickness skin graft than with autologous split-thickness skin graft alone in terms of Vancouver Scar Scale score. Therapeutic, II.

  11. The brittle-viscous-plastic evolution of shear bands in the South Armorican Shear Zone

    NASA Astrophysics Data System (ADS)

    Bukovská, Zita; Jeřábek, Petr; Morales, Luiz F. G.; Lexa, Ondrej; Milke, Ralf

    2014-05-01

    Shear bands are microscale shear zones that obliquely crosscut an existing anisotropy such as a foliation. The resulting S-C fabrics are characterized by angles lower than 45° and the C plane parallel to shear zone boundaries. The S-C fabrics typically occur in granitoids deformed at greenschist facies conditions in the vicinity of major shear zones. Despite their long recognition, the mechanical reasons for localization of deformation into shear bands and their evolution are still poorly understood. In this work we focus on microscale characterization of the shear bands in the South Armorican Shear Zone, where the S-C fabrics were first recognized by Berthé et al. (1979). The initiation of shear bands in the right-lateral South Armorican Shear Zone is associated with the occurrence of microcracks crosscutting the recrystallized quartz aggregates that define the S fabric. In more advanced stages of shear band evolution, newly formed dominant K-feldspar, together with plagioclase, muscovite, and chlorite, occurs in the microcracks, and the shear bands start to widen. K-feldspar replaces quartz by progressively bulging into the grain boundaries of recrystallized quartz grains, leading to disintegration of quartz aggregates and formation of a fine-grained multiphase matrix mixture. The late stages of shear band development are marked by interconnection of fine-grained white mica into a band that crosscuts the original shear band matrix. In the extreme, shear band widening may lead to the formation of ultramylonites. With the increasing proportion of shear band matrix from ~1% to ~12%, the angular relationship between S and C fabrics increases from ~30° to ~40°. The matrix phases within shear bands show differences in chemical composition related to distinct evolutionary stages of shear band formation. The chemical evolution is well documented in K-feldspar, where the albite component is highest in porphyroclasts within S fabric, lower in the newly formed grains within microcracks and nearly absent in matrix grains in the well developed C bands. The chemical variation between primary and secondary new-formed micas was clearly identified by the Mg-Ti-Na content. The microstructural analysis documents a progressive decrease in quartz grain size and increasing interconnectivity of K-feldspar and white mica towards more mature shear bands. The contact-frequency analysis demonstrates that the phase distribution in shear bands tends to evolve from quartz aggregate distribution via randomization to K-feldspar aggregate distribution. The boundary preferred orientation is absent in quartz-quartz contacts either inside or outside the C bands, while it changes from random to parallel to the C band for the K-feldspar and K-feldspar-quartz boundaries. The lack of crystallographic preferred orientation of the individual phases in the mixed matrix of the C planes suggests a dominant diffusion-assisted grain boundary sliding deformation mechanism. In the later stages of shear band development, the deformation is accommodated by crystal plasticity of white mica in micaceous bands. The crystallographic and microstructural data thus indicate two important switches in deformation mechanisms, from (i) brittle to Newtonian viscous behavior in the initial stages of shear band evolution and from (ii) Newtonian viscous to power law in the later evolutionary stages. The evolution of shear bands in the South Armorican Shear Zone thus documents the interplay between deformation mechanisms and chemical reactions in deformed granitoids.

  12. Bayesian Factor Analysis When Only a Sample Covariance Matrix Is Available

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Arav, Marina

    2006-01-01

    In traditional factor analysis, the variance-covariance matrix or the correlation matrix has often been a form of inputting data. In contrast, in Bayesian factor analysis, the entire data set is typically required to compute the posterior estimates, such as Bayes factor loadings and Bayes unique variances. We propose a simple method for computing…

  13. MMX Multi Matrix System mesalazine for the induction of remission in patients with mild-to-moderate ulcerative colitis: a combined analysis of two randomized, double-blind, placebo-controlled trials.

    PubMed

    Sandborn, W J; Kamm, M A; Lichtenstein, G R; Lyne, A; Butler, T; Joseph, R E

    2007-07-15

    MMX mesalazine [LIALDA (US), MEZAVANT XL (UK and Ireland), MEZAVANT (elsewhere)] utilizes MMX Multi Matrix System (MMX) technology, which delivers mesalazine throughout the colon. Two phase III studies have already evaluated MMX mesalazine in patients with active, mild-to-moderate ulcerative colitis. Aim To provide more precise estimates of the efficacy of MMX mesalazine over placebo by combining the patient populations from the two phase III studies. Methods Combined data from two 8-week, double-blind, placebo-controlled trials were analysed. Patients randomized to MMX mesalazine 2.4 g/day (once daily or 1.2 g twice daily), 4.8 g/day (once daily) or placebo were reviewed. The primary end point was clinical and endoscopic remission (modified Ulcerative Colitis-Disease Activity Index of ≤1, with a ≥1-point reduction in sigmoidoscopy score from week 0). Results Data from 517 patients were analysed. 8-week remission rates were 37.2% and 35.1% in the MMX mesalazine 2.4 g/day and 4.8 g/day groups, vs. 17.5% on placebo (P < 0.001, both comparisons). 8-week complete mucosal healing rates were 32% in both MMX mesalazine groups compared with 16% on placebo. Adverse event frequency was similar in all groups. Conclusion MMX mesalazine is effective and generally well tolerated for inducing clinical and endoscopic remission of active, mild-to-moderate ulcerative colitis.

  14. Synergistic action of protease-modulating matrix and autologous growth factors in healing of diabetic foot ulcers. A prospective randomized trial.

    PubMed

    Kakagia, Despoina D; Kazakos, Konstantinos J; Xarchas, Konstantinos C; Karanikas, Michael; Georgiadis, George S; Tripsiannis, Gregory; Manolas, Constantinos

    2007-01-01

    This study tests the hypothesis that addition of a protease-modulating matrix enhances the efficacy of autologous growth factors in diabetic ulcers. Fifty-one patients with chronic diabetic foot ulcers were managed as outpatients at the Democritus University Hospital of Alexandroupolis and followed up for 8 weeks. All target ulcers were > or = 2.5 cm in any one dimension and had been previously treated only with moist gauze. Patients were randomly allocated in three groups of 17 patients each: Group A was treated only with the oxidized regenerated cellulose/collagen biomaterial (Promogran, Johnson & Johnson, New Brunswick, NJ), Group B was treated only with autologous growth factors delivered by Gravitational Platelet Separation System (GPS, Biomet), and Group C was managed by a combination of both. All ulcers were digitally photographed at initiation of the study and then at change of dressings once weekly. Computerized planimetry (Texas Health Science Center ImageTool, Version 3.0) was used to assess ulcer dimensions that were analyzed for homogeneity and significance using the Statistical Package for Social Sciences, Version 13.0. Post hoc analysis revealed that there was significantly greater reduction of all three dimensions of the ulcers in Group C compared to Groups A and B (all P<.001). Although reduction of ulcer dimensions was greater in Group A than in Group B, these differences did not reach statistical significance. It is concluded that protease-modulating dressings act synergistically with autologous growth factors and enhance their efficacy in diabetic foot ulcers.

  15. How random is a random vector?

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2015-12-01

    Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation" - the square root of the generalized variance - is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index" - a derivative of the Wilks standard deviation - is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams" - tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vectors empirical data.
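
    The basic objects are straightforward to compute; a short sketch follows, where the particular normalization used for the "uncorrelation index" is one natural choice and not necessarily the exact definition used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Correlated 3-dimensional random vector (illustrative covariance).
    cov = np.array([[1.0, 0.6, 0.2],
                    [0.6, 2.0, 0.5],
                    [0.2, 0.5, 1.5]])
    X = rng.multivariate_normal(np.zeros(3), cov, size=20_000)

    S = np.cov(X, rowvar=False)
    generalized_variance = np.linalg.det(S)          # Wilks' generalized variance
    wilks_std = np.sqrt(generalized_variance)        # "Wilks standard deviation"

    # One way to express an overall-correlation index: compare det(S) with the
    # product of the marginal variances (an uncorrelated vector would give 1).
    uncorrelation_index = generalized_variance / np.prod(np.diag(S))

    print(f"Wilks standard deviation : {wilks_std:.3f}")
    print(f"uncorrelation index      : {uncorrelation_index:.3f}")
    ```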

  16. Statistics of time delay and scattering correlation functions in chaotic systems. II. Semiclassical approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novaes, Marcel

    2015-06-15

    We consider S-matrix correlation functions for a chaotic cavity having M open channels, in the absence of time-reversal invariance. Relying on a semiclassical approximation, we compute the average over E of the quantities Tr[S†(E − ϵ)S(E + ϵ)]^n, for general positive integer n. Our result is an infinite series in ϵ, whose coefficients are rational functions of M. From this, we extract moments of the time delay matrix Q = −iħ S† dS/dE and check that the first 8 of them agree with the random matrix theory prediction from our previous paper [M. Novaes, J. Math. Phys. 56, 062110 (2015)].

  17. Propagation of Circularly Polarized Light Through a Two-Dimensional Random Medium

    NASA Astrophysics Data System (ADS)

    Gorodnichev, E. E.

    2017-12-01

    The problem of small-angle multiple scattering of circularly polarized light in a two-dimensional medium with large fiberlike inhomogeneities is studied. The attenuation lengths for the elements of the density matrix are calculated. It is found that, with increasing sample thickness, the intensity of waves polarized along the fibers decays faster than the other density matrix elements. With further increase in the thickness, the off-diagonal element, which is responsible for correlation between the cross-polarized waves, disappears. In the case of very thick samples the scattered field proves to be polarized perpendicular to the fibers. It is shown that the difference in the attenuation lengths of the density matrix elements results in a non-monotonic depth dependence of the degree of polarization.

  18. Finite element investigation of temperature dependence of elastic properties of carbon nanotube reinforced polypropylene

    NASA Astrophysics Data System (ADS)

    Ahmadi, Masoud; Ansari, Reza; Rouhi, Saeed

    2017-11-01

    This paper aims to investigate the elastic modulus of the polypropylene matrix reinforced by carbon nanotubes at different temperatures. To this end, the finite element approach is employed. Nanotubes with different volume fractions and aspect ratios (the ratio of length to diameter) are embedded in the polymer matrix. Besides, random and regular algorithms are utilized to disperse the carbon nanotubes in the matrix. It is seen that, as with pure polypropylene, the elastic modulus of carbon nanotube reinforced polypropylene decreases with increasing temperature. It is also observed that when the carbon nanotubes are dispersed in parallel and the load is applied along the nanotube direction, the largest improvement in the elastic modulus of the nanotube/polypropylene nanocomposites is obtained.

  19. Platelet-Derived Growth Factor Promotes Periodontal Regeneration in Localized Osseous Defects: 36-Month Extension Results From a Randomized, Controlled, Double-Masked Clinical Trial

    PubMed Central

    Nevins, Myron; Kao, Richard T.; McGuire, Michael K.; McClain, Pamela K.; Hinrichs, James E.; McAllister, Bradley S.; Reddy, Michael S.; Nevins, Marc L.; Genco, Robert J.; Lynch, Samuel E.; Giannobile, William V.

    2017-01-01

    Background Recombinant human platelet-derived growth factor (rhPDGF) is safe and effective for the treatment of periodontal defects in short-term studies up to 6 months in duration. We now provide results from a 36-month extension study of a multicenter, randomized, controlled clinical trial evaluating the effect and long-term stability of PDGF-BB treatment in patients with localized severe periodontal osseous defects. Methods A total of 135 participants were enrolled from six clinical centers for an extension trial. Eighty-three individuals completed the study at 36 months and were included in the analysis. The study investigated the local application of β-tricalcium phosphate scaffold matrix with or without two different dose levels of PDGF (0.3 or 1.0 mg/mL PDGF-BB) in patients possessing one localized periodontal osseous defect. Composite analysis for clinical and radiographic evidence of treatment success was defined as the percentage of cases with clinical attachment level (CAL) ≥2.7 mm and linear bone growth (LBG) ≥1.1 mm. Results The proportion of participants exceeding this composite outcome benchmark in the 0.3 mg/mL rhPDGF-BB group increased from 62.2% at 12 months to 75.9% at 24 months and 87.0% at 36 months, compared with 39.5%, 48.3%, and 53.8%, respectively, in the scaffold control group at these same time points (P < 0.05). Although there were no significant increases in CAL and LBG at 36 months among all groups, there were continued increases in CAL gain, LBG, and percentage bone fill over time, suggesting overall stability of the regenerative response. Conclusion PDGF-BB in a synthetic scaffold matrix promotes long-term stable clinical and radiographic improvements as measured by composite outcomes for CAL gain and LBG in patients possessing localized periodontal defects (ClinicalTrials.gov no. CT01530126). PMID:22612364

  20. Mécano-Stimulation™ of the skin improves sagging score and induces beneficial functional modification of the fibroblasts: clinical, biological, and histological evaluations

    PubMed Central

    Humbert, Philippe; Fanian, Ferial; Lihoreau, Thomas; Jeudy, Adeline; Elkhyat, Ahmed; Robin, Sophie; Courderot-Masuyer, Carol; Tauzin, Hélène; Lafforgue, Christine; Haftek, Marek

    2015-01-01

    Background Loss of mechanical tension appears to be the major factor underlying decreased collagen synthesis in aged skin. Numerous in vitro studies have shown the impact of mechanical forces on fibroblasts through mechanotransduction, which consists of the conversion of mechanical signals to biochemical responses. Such responses are characterized by the modulation of gene expression coding not only for extracellular matrix components (collagens, elastin, etc.) but also for degradation enzymes (matrix metalloproteinases [MMPs]) and their inhibitors (tissue inhibitors of metalloproteinases [TIMPs]). A new device providing a mechanical stimulation of the cutaneous and subcutaneous tissue has been used in a simple, blinded, controlled, and randomized study. Materials and methods Thirty subjects (aged between 35 years and 50 years), with clinical signs of skin sagging, were randomly assigned to have a treatment on hemiface. After a total of 24 sessions with Mécano-Stimulation™, biopsies were performed on the treated side and control area for in vitro analysis (dosage of hyaluronic acid, elastin, type I collagen, MMP9; equivalent dermis retraction; GlaSbox®; n=10) and electron microscopy (n=10). Furthermore, before and after the treatment, clinical evaluations and self-assessment questionnaire were done. Results In vitro analysis showed increases in hyaluronic acid, elastin, type I collagen, and MMP9 content along with an improvement of the migratory capacity of the fibroblasts on the treated side. Electron microscopy evaluations showed a clear dermal remodeling in relation with the activation of fibroblast activity. A significant improvement of different clinical signs associated with skin aging and the satisfaction of the subjects were observed, correlated with an improvement of the sagging cheek. Conclusion Mécano-Stimulation is a noninvasive and safe technique delivered by flaps microbeats at various frequencies, which can significantly improve the skin trophicity. Results observed with objective measurements, ie, in vitro assessments and electron microscopy, confirm the firming and restructuring effect clinically observed. PMID:25673979

  1. Clinical Outcomes of Comparing Soft Tissue Alternatives to Free Gingival Graft: A Systematic Review and Meta-Analysis.

    PubMed

    Dragan, Irina F; Hotlzman, Lucrezia Paterno; Karimbux, Nadeem Y; Morin, Rebecca A; Bassir, Seyed Hossein

    2017-12-01

    This systematic review and meta-analysis aimed to compare clinical outcomes and width of keratinized tissue (KT) around teeth following soft tissue alternative and free gingival graft (FGG) procedures. The specific graft materials that were explored were extracellular matrix membrane, bilayer collagen membrane, living cellular construct, and acellular dermal matrix. Four different databases were queried to identify human controlled clinical trials and randomized controlled clinical trials that fulfilled the eligibility criteria. Relevant studies were identified by 3 independent reviewers, compiling the results of the electronic and handsearches. Studies identified through electronic and handsearches were reviewed by title, abstract, and full text using Covidence Software. The primary outcome in the present study was change in the width of KT. Results of the included studies were pooled to estimate the effect size, expressed as weighted mean differences and 95% confidence intervals. A random-effects model was used to perform the meta-analyses. Six hundred thirty-eight articles were screened by title, 55 articles were screened by abstract, and 34 full-text articles were reviewed. Data on quantitative changes in width of KT were provided in 7 studies. Quantitative analyses revealed a significant difference in changes in width of KT between patients treated with soft tissue alternatives and patients treated with FGGs (P < .001). The weighted mean difference of changes in the width of KT was -1.39 mm (95% confidence interval: -1.82 to -0.96; heterogeneity I² = 70.89%), indicating that patients treated with soft tissue alternatives gained 1.39 mm less KT width compared with patients who received a free gingival graft. Based on the clinical outcomes, the results of this systematic review and meta-analysis showed that soft tissue alternatives result in an increased width of KT. Patients in the soft tissue alternatives group obtained 1.39 mm less KT compared with those in the FGG group. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Randomized controlled clinical study evaluating effectiveness and safety of a volume-stable collagen matrix compared to autogenous connective tissue grafts for soft tissue augmentation at implant sites.

    PubMed

    Thoma, Daniel S; Zeltner, Marco; Hilbe, Monika; Hämmerle, Christoph H F; Hüsler, Jürg; Jung, Ronald E

    2016-10-01

    To test whether or not the use of a collagen matrix (VCMX) results in short-term soft tissue volume increase at implant sites non-inferior to an autogenous subepithelial connective tissue graft (SCTG), and to evaluate safety and tissue integration of VCMX and SCTG. In 20 patients with a volume deficiency at single-tooth implant sites, soft tissue volume augmentation was performed randomly allocating VCMX or SCTG. Soft tissue thickness, patient-reported outcome measures (PROMs), and safety were assessed up to 90 days (FU-90). At FU-90 (abutment connection), tissue samples were obtained for histological analysis. Descriptive analysis was computed for both groups. Non-parametric tests were applied to test non-inferiority for the gain in soft tissue thickness at the occlusal site. Median soft tissue thickness increased between BL and FU-90 by 1.8 mm (Q1:0.5; Q3:2.0) (VCMX) (p = 0.018) and 0.5 mm (-1.0; 2.0) (SCTG) (p = 0.395) (occlusal) and by 1.0 mm (0.5; 2.0) (VCMX) (p = 0.074) and 1.5 mm (-2.0; 2.0) (SCTG) (p = 0.563) (buccal). Non-inferiority with a non-inferiority margin of 1 mm could be demonstrated (p = 0.020); the difference between the two group medians (1.3 mm) for occlusal sites indicated no relevant, but not significant superiority of VCMX versus SCTG (primary endpoint). Pain medication consumption and pain perceived were non-significantly higher in group SCTG up to day 3. Median physical pain (OHIP-14) at day 7 was 100% higher for SCTG than for VCMX. The histological analysis revealed well-integrated grafts. Soft tissue augmentation at implant sites resulted in a similar or higher soft tissue volume increase after 90 days for VCMX versus SCTG. PROMs did not reveal relevant differences between the two groups. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  3. Random matrix theory for analyzing the brain functional network in attention deficit hyperactivity disorder

    NASA Astrophysics Data System (ADS)

    Wang, Rong; Wang, Li; Yang, Yong; Li, Jiajia; Wu, Ying; Lin, Pan

    2016-11-01

    Attention deficit hyperactivity disorder (ADHD) is the most common childhood neuropsychiatric disorder and affects approximately 6-7% of children worldwide. Here, we investigate the statistical properties of undirected and directed brain functional networks in ADHD patients based on random matrix theory (RMT), in which the undirected functional connectivity is constructed based on the correlation coefficient and the directed functional connectivity is measured based on the cross-correlation coefficient and mutual information. We first analyze the functional connectivity and the eigenvalues of the brain functional network. We find that ADHD patients have increased undirected functional connectivity, reflecting a higher degree of linear dependence between regions, and increased directed functional connectivity, indicating stronger causality and more transmission of information among brain regions. More importantly, we explore the randomness of the undirected and directed functional networks using RMT. We find that for ADHD patients, the undirected functional network is more orderly than that for normal subjects, which indicates an abnormal increase in undirected functional connectivity. In addition, we find that the directed functional networks are more random, which reveals greater disorder in causality and more chaotic information flow among brain regions in ADHD patients. Our results not only further confirm the efficacy of RMT in characterizing the intrinsic properties of brain functional networks but also provide insights into the possibilities RMT offers for improving clinical diagnoses and treatment evaluations for ADHD patients.
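
    A minimal sketch of an RMT-style randomness diagnostic applied to an undirected, correlation-based connectivity matrix. Surrogate signals replace real fMRI data, and the spacing-ratio statistic is used here instead of an unfolded spacing distribution purely to keep the example short.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def mean_spacing_ratio(eigs):
        # Ratio of consecutive level spacings; its mean is ~0.39 for Poisson
        # (uncorrelated) levels and ~0.53 for GOE-like (correlated) levels.
        s = np.diff(np.sort(eigs))
        r = np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])
        return r.mean()

    # Surrogate "brain signals": n_regions channels over n_time samples, with a
    # shared common drive so that the correlation matrix is not trivial.
    n_regions, n_time = 90, 2000
    common = rng.normal(size=n_time)
    signals = 0.3 * common + rng.normal(size=(n_regions, n_time))

    C = np.corrcoef(signals)                # undirected functional connectivity
    eigs = np.linalg.eigvalsh(C)
    print(f"mean spacing ratio: {mean_spacing_ratio(eigs):.3f}")
    ```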

  4. Characterization of cancer and normal tissue fluorescence through wavelet transform and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Gharekhan, Anita H.; Biswal, Nrusingh C.; Gupta, Sharad; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.

    2008-02-01

    The statistical and characteristic features of the polarized fluorescence spectra from cancer, normal and benign human breast tissues are studied through wavelet transform and singular value decomposition. The discrete wavelets enabled one to isolate high and low frequency spectral fluctuations, which revealed substantial randomization in the cancerous tissues, not present in the normal cases. In particular, the fluctuations fitted well with a Gaussian distribution for the cancerous tissues in the perpendicular component. One finds non-Gaussian behavior for normal and benign tissues' spectral variations. The study of the difference of intensities in parallel and perpendicular channels, which is free from the diffusive component, revealed weak fluorescence activity in the 630nm domain, for the cancerous tissues. This may be ascribable to porphyrin emission. The role of both scatterers and fluorophores in the observed minor intensity peak for the cancer case is experimentally confirmed through tissue-phantom experiments. Continuous Morlet wavelet also highlighted this domain for the cancerous tissue fluorescence spectra. Correlation in the spectral fluctuation is further studied in different tissue types through singular value decomposition. Apart from identifying different domains of spectral activity for diseased and non-diseased tissues, we found random matrix support for the spectral fluctuations. The small eigenvalues of the perpendicular polarized fluorescence spectra of cancerous tissues fitted remarkably well with random matrix prediction for Gaussian random variables, confirming our observations about spectral fluctuations in the wavelet domain.

  5. A multi-assets artificial stock market with zero-intelligence traders

    NASA Astrophysics Data System (ADS)

    Ponta, L.; Raberto, M.; Cincotti, S.

    2011-01-01

    In this paper, a multi-assets artificial financial market populated by zero-intelligence traders with finite financial resources is presented. The market is characterized by different types of stocks representing firms operating in different sectors of the economy. Zero-intelligence traders follow a random allocation strategy which is constrained by finite resources, past market volatility and the allocation universe. Within this framework, stock price processes exhibit volatility clustering, fat-tailed distribution of returns and reversion to the mean. Moreover, the cross-correlations between returns of different stocks are studied using methods of random matrix theory. The probability distribution of eigenvalues of the cross-correlation matrix shows the presence of outliers, similar to those recently observed on real data for business sectors. It is worth noting that business sectors have been recovered in our framework without dividends, as the only consequence of random restrictions on the allocation universe of zero-intelligence traders. Furthermore, in the presence of dividend-paying stocks and in the case of cash inflow added to the market, the artificial stock market exhibits the same structural results obtained in the simulation without dividends. These results suggest a significant structural influence on the statistical properties of multi-assets stock markets.
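
    A compact sketch of the eigenvalue-outlier analysis mentioned in the abstract: correlate surrogate multi-asset returns that contain a few sector factors and compare the spectrum of the correlation matrix with the Marchenko-Pastur bulk edge. All numbers below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Surrogate returns for N assets over T time steps; a few "sector" factors
    # are mixed in so that the correlation matrix has eigenvalue outliers.
    N, T, n_sectors = 100, 1000, 3
    sector_of = rng.integers(0, n_sectors, size=N)
    factors = rng.normal(size=(n_sectors, T))
    returns = 0.4 * factors[sector_of] + rng.normal(size=(N, T))

    C = np.corrcoef(returns)
    eigs = np.linalg.eigvalsh(C)

    # Marchenko-Pastur bounds for a purely random correlation matrix, q = N/T.
    q = N / T
    lam_max = (1 + np.sqrt(q)) ** 2
    outliers = eigs[eigs > lam_max]
    print(f"MP bulk edge: {lam_max:.2f}, outliers above it: {np.round(outliers, 2)}")
    ```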

  6. The matrix exponential in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Minnetyan, Levon

    1987-01-01

    The primary usefulness of the presented theory is in the ability to represent the effects of high-frequency linear response with accuracy, without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high-frequency response, the approximation in the exponential matrix solution is in the time domain. By truncating the series solution for the matrix exponential, the solution is made inaccurate after a certain time; yet, up to that time the solution is extremely accurate, including all high-frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free vibration response of multi-degree-of-freedom models of cantilever beams.
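
    The mechanics of the approach are easy to illustrate on a small linear system (a two-degree-of-freedom spring-mass model here rather than the cantilever beams used in the report): write the equations of motion in first-order form and advance the state with the matrix exponential over each finite time increment.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Two-DOF spring-mass model (illustrative), written in first-order form
    # x' = A x so that the exact step over dt is x(t+dt) = expm(A*dt) @ x(t).
    M = np.diag([1.0, 1.0])
    K = np.array([[ 2.0, -1.0],
                  [-1.0,  2.0]])
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-Minv @ K,        np.zeros((2, 2))]])

    dt = 0.05
    Phi = expm(A * dt)            # state-transition matrix for one time increment

    x = np.array([1.0, 0.0, 0.0, 0.0])   # initial displacement of the first mass
    history = [x]
    for _ in range(400):
        # Phi is exact for the linear model, so high-frequency content is
        # retained even with a coarse dt (the point made in the abstract).
        x = Phi @ x
        history.append(x)
    history = np.array(history)
    print("max displacement of mass 1:", history[:, 0].max().round(3))
    ```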

  7. MIXOR: a computer program for mixed-effects ordinal regression analysis.

    PubMed

    Hedeker, D; Gibbons, R D

    1996-03-01

    MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models. These models can be used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design. For clustered data, the mixed-effects model assumes that data within clusters are dependent. The degree of dependency is jointly estimated with the usual model parameters, thus adjusting for dependence resulting from clustering of the data. Similarly, for longitudinal data, the mixed-effects approach can allow for individual-varying intercepts and slopes across time, and can estimate the degree to which these time-related effects vary in the population of individuals. MIXOR uses marginal maximum likelihood estimation, utilizing a Fisher-scoring solution. For the scoring solution, the Cholesky factor of the random-effects variance-covariance matrix is estimated, along with the effects of model covariates. Examples illustrating usage and features of MIXOR are provided.

  8. Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2001-01-01

    This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.

  9. A comparative clinical study of the efficacy of subepithelial connective tissue graft and acellular dermal matrix graft in root coverage: 6-month follow-up observation

    PubMed Central

    Thomas, Libby John; Emmadi, Pamela; Thyagarajan, Ramakrishnan; Namasivayam, Ambalavanan

    2013-01-01

    Aims: The purpose of this study was to compare the clinical efficacy of subepithelial connective tissue graft and acellular dermal matrix graft associated with coronally repositioned flap in the treatment of Miller's class I and II gingival recession, 6 months postoperatively. Settings and Design: Ten patients with bilateral Miller's class I or class II gingival recession were randomly divided into two groups using a split-mouth study design. Materials and Methods: Group I (10 sites) was treated with subepithelial connective tissue graft along with coronally repositioned flap and Group II (10 sites) treated with acellular dermal matrix graft along with coronally repositioned flap. Clinical parameters like recession height and width, probing pocket depth, clinical attachment level, and width of keratinized gingiva were evaluated at baseline, 90th day, and 180th day for both groups. The percentage of root coverage was calculated based on the comparison of the recession height from 0 to 180th day in both Groups I and II. Statistical Analysis Used: Intragroup parameters at different time points were measured using the Wilcoxon signed rank test and Mann–Whitney U test was employed to analyze the differences between test and control groups. Results: There was no statistically significant difference in recession height and width, gain in CAL, and increase in the width of keratinized gingiva between the two groups on the 180th day. Both procedures showed clinically and statistically significant root coverage (Group I 96%, Group II 89.1%) on the 180th day. Conclusions: The results indicate that coverage of denuded root with both subepithelial connective tissue autograft and acellular dermal matrix allograft are very predictable procedures, which were stable for 6 months postoperatively. PMID:24174728

  10. Comprehensive comparison of liquid chromatography selectivity as provided by two types of liquid chromatography detectors (high resolution mass spectrometry and tandem mass spectrometry): "where is the crossover point?".

    PubMed

    Kaufmann, A; Butcher, P; Maden, K; Walker, S; Widmer, M

    2010-07-12

    The selectivity of mass traces obtained by monitoring liquid chromatography coupled to high resolution mass spectrometry (LC-HRMS) and liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) was compared. A number of blank extracts (fish, pork kidney, pork liver and honey) were separated by ultra performance liquid chromatography (UPLC). Some 100 dummy transitions and dummy exact masses (traces), respectively, were detected. These dummy masses were the product of a random generator. The range of the permitted masses corresponded to those which are typical for analytes (e.g. veterinary drugs). The large number of monitored dummy traces ensured that endogenous compounds present in the matrix extract produced a significant number of detectable chromatographic peaks. All obtained chromatographic peaks were integrated and standardized. Standardization was done by dividing these absolute peak areas by the average response of a set of 7 different veterinary drugs. This permitted a direct comparison between the LC-HRMS and LC-MS/MS data. The data indicated that the selectivity of LC-HRMS exceeds that of LC-MS/MS if high resolution mass spectrometry (HRMS) data are recorded with a resolution of 50,000 full width at half maximum (FWHM) and a corresponding mass window. This conclusion was further supported by experimental data (MS/MS-based trace analysis), where a false positive finding was observed. An endogenous matrix compound present in honey matrix behaved like a banned nitroimidazole drug. This included identical retention time and two MRM traces, producing an MRM ratio between them which perfectly matched the ratio observed in the external standard. HRMS measurement clearly resolved the interfering matrix compound and unmasked the false positive MS/MS finding. Copyright 2010 Elsevier B.V. All rights reserved.

  11. Differentiation of Streptococcus pneumoniae Conjunctivitis Outbreak Isolates by Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry▿

    PubMed Central

    Williamson, Yulanda M.; Moura, Hercules; Woolfitt, Adrian R.; Pirkle, James L.; Barr, John R.; Carvalho, Maria Da Gloria; Ades, Edwin P.; Carlone, George M.; Sampson, Jacquelyn S.

    2008-01-01

    Streptococcus pneumoniae (pneumococcus [Pnc]) is a causative agent of many infectious diseases, including pneumonia, septicemia, otitis media, and conjunctivitis. There have been documented conjunctivitis outbreaks in which nontypeable (NT), nonencapsulated Pnc has been identified as the etiological agent. The use of mass spectrometry to comparatively and differentially analyze protein and peptide profiles of whole-cell microorganisms remains somewhat uncharted. In this report, we discuss a comparative proteomic analysis between NT S. pneumoniae conjunctivitis outbreak strains (cPnc) and other known typeable or NT pneumococcal and streptococcal isolates (including Pnc TIGR4 and R6, Streptococcus oralis, Streptococcus mitis, Streptococcus pseudopneumoniae, and Streptococcus pyogenes) and nonstreptococcal isolates (including Escherichia coli, Enterococcus faecalis, and Staphylococcus aureus) as controls. cPnc cells and controls were grown to mid-log phase, harvested, and subsequently treated with a 10% trifluoroacetic acid-sinapinic acid matrix mixture. Protein and peptide fragments of the whole-cell bacterial isolate-matrix combinations ranging in size from 2 to 14 kDa were evaluated by matrix-assisted laser desorption ionization-time of flight mass spectrometry. Additionally Random Forest analytical tools and dendrogramic representations (Genesis) suggested similarities and clustered the isolates into distinct clonal groups, respectively. Also, a peak list of protein and peptide masses was obtained and compared to a known Pnc protein mass library, in which a peptide common and unique to cPnc isolates was tentatively identified. Information gained from this study will lead to the identification and validation of proteins that are commonly and exclusively expressed in cPnc strains which could potentially be used as a biomarker in the rapid diagnosis of pneumococcal conjunctivitis. PMID:18708515

  12. Local hemostatic matrix for endoscope-assisted removal of intracerebral hemorrhage is safe and effective.

    PubMed

    Luh, Hui-Tzung; Huang, Abel Po-Hao; Yang, Shih-Hung; Chen, Chien-Ming; Cho, Der-Yang; Chen, Chun-Chung; Kuo, Lu-Ting; Li, Chieh-Hsun; Wang, Kuo-Chuan; Tseng, Wei-Lung; Hsing, Ming-Tai; Yang, Bing-Shiang; Lai, Dar-Ming; Tsai, Jui-Chang

    2018-01-01

    Minimally invasive endoscope-assisted (MIE) evacuation of spontaneous intracerebral hemorrhage (ICH) is simple and effective, but the limited working space may hinder meticulous hemostasis and might lead to rebleeding. Management of intraoperative hemorrhage is therefore a critical issue addressed by this study. This study presents experience in the treatment of patients with various types of ICH by MIE evacuation followed by direct local injection of FloSeal Hemostatic Matrix (Baxter Healthcare Corp, Fremont, CA, USA) for hemostasis. The retrospective, nonrandomized, clinical and radiology-based analysis enrolled 42 patients treated with MIE evacuation of ICH followed by direct local injection of FloSeal Hemostatic Matrix. Rebleeding, morbidity, and mortality were the primary endpoints. The percentage of hematoma evacuated was calculated from the pre- and postoperative brain computed tomography (CT) scans. The Extended Glasgow Outcome Scale (GOSE) was evaluated at 6 months postoperatively. Forty-two ICH patients were included in this study; of these, 23 had putaminal hemorrhage, 16 had thalamic ICH, and the remaining three had subcortical hemorrhage. Surgery-related mortality was 2.4%. The average percentage of hematoma evacuated was 80.8%, and the rebleeding rate was 4.8%. The mean operative time was 102.7 minutes and the average blood loss was 84.9 mL. The mean postoperative GOSE score was 4.55 at 6 months' follow-up. This study shows that local application of FloSeal Hemostatic Matrix is safe and effective for hemostasis during MIE evacuation of ICH. In our experience, this shortens the operation time, especially in cases with intraoperative bleeding. A large, prospective, randomized trial is needed to confirm the findings. Copyright © 2017. Published by Elsevier B.V.

  13. Xenogenous Collagen Matrix and/or Enamel Matrix Derivative for Treatment of Localized Gingival Recessions: A Randomized Clinical Trial. Part II: Patient-Reported Outcomes.

    PubMed

    Rocha Dos Santos, Manuela; Sangiorgio, João Paulo Menck; Neves, Felipe Lucas da Silva; França-Grohmann, Isabela Lima; Nociti, Francisco Humberto; Silverio Ruiz, Karina Gonzales; Santamaria, Mauro Pedrine; Sallum, Enilson Antonio

    2017-12-01

    Gingival recession (GR) might be associated with patient discomfort due to cervical dentin hypersensitivity (CDH) and esthetic dissatisfaction. The aim was to evaluate the effect of a root coverage procedure with a xenogenous collagen matrix (CM) and/or enamel matrix derivative (EMD) in combination with a coronally advanced flap (CAF) on CDH, esthetics, and oral health-related quality of life (OHRQoL) of patients with GR. Sixty-eight participants with single Miller Class I/II GRs were treated with CAF (n = 17), CAF + CM (n = 17), CAF + EMD (n = 17), or CAF + CM + EMD (n = 17). CDH was assessed by evaporative stimuli using a visual analog scale (VAS) and a Schiff scale. Esthetic outcomes were assessed with VAS and the Questionnaire of Oral Esthetic Satisfaction. The Oral Health Impact Profile-14 (OHIP-14) questionnaire was used to assess OHRQoL. All parameters were evaluated at baseline and after 6 months. Intragroup analysis showed a statistically significant reduction in CDH and esthetic dissatisfaction, with no significant intergroup differences (P >0.05). The impact of oral health on QoL after 6 months was significant for CAF + CM, CAF + EMD, and CAF + CM + EMD (P <0.05). The total OHIP-14 score and the psychologic discomfort, psychologic disability, social disability, and handicap dimensions showed a negative correlation with esthetics. The OHIP-14 physical pain dimension had a positive correlation with CDH (P <0.05). OHIP-14 showed no correlation with percentage of root coverage, keratinized tissue width, or keratinized tissue thickness (P >0.05). Root coverage procedures improve patient OHRQoL across a wide range of dimensions, perceived after the reduction of CDH and esthetic dissatisfaction in patients with GRs treated with CAF + CM, CAF + EMD, and CAF + CM + EMD.

  14. Targeting functional motifs of a protein family

    NASA Astrophysics Data System (ADS)

    Bhadola, Pradeep; Deo, Nivedita

    2016-10-01

    The structural organization of a protein family is investigated by devising a method based on random matrix theory (RMT) that uses the physicochemical properties of the amino acids together with multiple sequence alignment. A graphical method to represent protein sequences using physicochemical properties is devised that gives a fast, easy, and informative way of comparing the evolutionary distances between protein sequences. A correlation matrix associated with each property is calculated, and noise reduction and information filtering are done using RMT involving an ensemble of Wishart matrices. The analysis of the eigenvalue statistics of the correlation matrix for the β-lactamase family shows the universal features observed in the Gaussian orthogonal ensemble (GOE). The property-based approach captures the short- as well as the long-range correlations (approximately following GOE) between the eigenvalues, whereas the previous approach (treating amino acids as characters) gives the usual short-range correlations while the long-range correlations match those of an uncorrelated series. The distribution of the eigenvector components for the eigenvalues outside the bulk (RMT bound) deviates significantly from RMT predictions and contains important information about the system. The information content of each eigenvector of the correlation matrix is quantified by introducing an entropic estimate, which shows that for the β-lactamase family the smallest eigenvectors (low eigenmodes) are highly localized as well as informative. When processed, these small eigenvectors give clusters of positions with well-defined biological and structural importance that match experiments. The approach is crucial for the recognition of structural motifs, as shown for β-lactamase (and other families), and selectively identifies important positions as targets for deactivating (activating) the enzymatic action.
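
    The workflow the abstract describes can be sketched as follows, under simplifying assumptions: the alignment is encoded with one physicochemical property per residue (a hydrophobicity-like scale is assumed here), the position-position Pearson correlation matrix is diagonalized, eigenvalues are compared against the Marchenko-Pastur bounds commonly used for RMT filtering, and the localization of each eigenvector is scored with a Shannon-entropy estimate. This is illustrative only; the paper's Wishart-matrix ensemble and its exact entropic estimate may differ.

        import numpy as np

        # Hypothetical per-residue property scale (hydrophobicity-like); real analyses use curated scales
        PROPERTY = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
                    "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
                    "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2, "-": 0.0}

        def eigen_analysis(msa):
            # Encode an aligned family (list of equal-length sequences) as a sequences x positions matrix
            X = np.array([[PROPERTY[c] for c in seq] for seq in msa])
            n, p = X.shape
            C = np.nan_to_num(np.corrcoef(X, rowvar=False))     # position-position correlation matrix
            evals, evecs = np.linalg.eigh(C)
            q = p / n                                           # Marchenko-Pastur bounds for a Wishart null
            mp_bounds = ((1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2)
            weights = evecs ** 2                                # eigenvector components (columns of evecs)
            entropy = -np.sum(weights * np.log(weights + 1e-12), axis=0)  # low entropy = localized mode
            return evals, evecs, mp_bounds, entropy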

  15. Background recovery via motion-based robust principal component analysis with matrix factorization

    NASA Astrophysics Data System (ADS)

    Pan, Peng; Wang, Yongli; Zhou, Mingyuan; Sun, Zhipeng; He, Guoping

    2018-03-01

    Background recovery is a key technique in video analysis, but it still suffers from many challenges, such as camouflage, lighting changes, and diverse types of image noise. Robust principal component analysis (RPCA), which aims to recover a low-rank matrix and a sparse matrix, is a general framework for background recovery. The nuclear norm is widely used as a convex surrogate for the rank function in RPCA, which requires computing the singular value decomposition (SVD), a task that is increasingly costly as matrix sizes and ranks increase. However, matrix factorization greatly reduces the dimension of the matrix for which the SVD must be computed. Motion information has been shown to improve low-rank matrix recovery in RPCA, but this method still finds it difficult to handle original video data sets because of its batch-mode formulation and implementation. Hence, in this paper, we propose a motion-assisted RPCA model with matrix factorization (FM-RPCA) for background recovery. Moreover, an efficient linear alternating direction method of multipliers with a matrix factorization (FL-ADM) algorithm is designed for solving the proposed FM-RPCA model. Experimental results illustrate that the method provides stable results and is more efficient than the current state-of-the-art algorithms.
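
    For orientation, the snippet below implements a plain inexact-ALM robust PCA that splits a data matrix into low-rank and sparse parts via singular value thresholding and soft shrinkage; it deliberately omits the matrix factorization and motion assistance of FM-RPCA/FL-ADM, so it is a generic baseline rather than the method proposed in the paper.

        import numpy as np

        def shrink(X, tau):
            # Elementwise soft-thresholding
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def svt(X, tau):
            # Singular value thresholding
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(shrink(s, tau)) @ Vt

        def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
            # Minimize ||L||_* + lam * ||S||_1 subject to D = L + S (inexact augmented Lagrange multipliers)
            m, n = D.shape
            lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
            mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
            S, Y = np.zeros_like(D), np.zeros_like(D)
            norm_D = np.linalg.norm(D, "fro")
            for _ in range(max_iter):
                L = svt(D - S + Y / mu, 1.0 / mu)
                S = shrink(D - L + Y / mu, lam / mu)
                residual = D - L - S
                Y += mu * residual
                if np.linalg.norm(residual, "fro") / norm_D < tol:
                    break
            return L, S

        # Usage: stack video frames as columns of D; L approximates the background, S the moving foreground.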

  16. A micromechanics-based strength prediction methodology for notched metal matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.

    1992-01-01

    An analytical micromechanics-based strength prediction methodology was developed to predict failure of notched metal matrix composites. The stress-strain behavior and notched strength of two metal matrix composites, boron/aluminum (B/Al) and silicon-carbide/titanium (SCS-6/Ti-15-3), were predicted. The prediction methodology combines analytical techniques ranging from a three-dimensional finite element analysis of a notched specimen to a micromechanical model of a single fiber. In the B/Al laminates, a fiber failure criterion based on the axial and shear stress in the fiber accurately predicted laminate failure for a variety of layups and notch-length to specimen-width ratios, with both circular holes and sharp notches, when matrix plasticity was included in the analysis. For the SCS-6/Ti-15-3 laminates, a fiber failure criterion based on the axial stress in the fiber correlated well with experimental results for static and post-fatigue residual strengths when fiber-matrix debonding and matrix cracking were included in the analysis. The micromechanics-based strength prediction methodology offers a direct approach to strength prediction by modeling behavior and damage on the constituent level, thus explicitly including matrix nonlinearity, fiber-matrix debonding, and matrix cracking.

  17. A micromechanics-based strength prediction methodology for notched metal-matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.

    1993-01-01

    An analytical micromechanics-based strength prediction methodology was developed to predict failure of notched metal matrix composites. The stress-strain behavior and notched strength of two metal matrix composites, boron/aluminum (B/Al) and silicon-carbide/titanium (SCS-6/Ti-15-3), were predicted. The prediction methodology combines analytical techniques ranging from a three-dimensional finite element analysis of a notched specimen to a micromechanical model of a single fiber. In the B/Al laminates, a fiber failure criterion based on the axial and shear stress in the fiber accurately predicted laminate failure for a variety of layups and notch-length to specimen-width ratios, with both circular holes and sharp notches, when matrix plasticity was included in the analysis. For the SCS-6/Ti-15-3 laminates, a fiber failure criterion based on the axial stress in the fiber correlated well with experimental results for static and post-fatigue residual strengths when fiber-matrix debonding and matrix cracking were included in the analysis. The micromechanics-based strength prediction methodology offers a direct approach to strength prediction by modeling behavior and damage on the constituent level, thus explicitly including matrix nonlinearity, fiber-matrix debonding, and matrix cracking.
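
    The abstracts do not spell out the failure criteria themselves, so the sketch below only illustrates the general shape of a combined axial/shear fiber failure index of the kind such constituent-level analyses evaluate; the quadratic interaction form, stress inputs, and allowables are assumptions for illustration, not the criteria of the cited reports.

        def fiber_failure_index(sigma_axial, tau_shear, axial_strength, shear_strength):
            # Quadratic-interaction failure index; fiber failure is predicted when the index reaches 1
            return (sigma_axial / axial_strength) ** 2 + (tau_shear / shear_strength) ** 2

        # Hypothetical fiber stresses (MPa) extracted from a micromechanical model at a notch tip
        index = fiber_failure_index(sigma_axial=3200.0, tau_shear=150.0,
                                    axial_strength=3600.0, shear_strength=300.0)
        fiber_fails = index >= 1.0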

  18. Graphene as a Novel Matrix for the Analysis of Small Molecules by MALDI-TOF MS

    PubMed Central

    Dong, Xiaoli; Cheng, Jinsheng; Li, Jinghong; Wang, Yinsheng

    2010-01-01

    Graphene was utilized for the first time as a matrix for the analysis of low-molecular-weight compounds using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Polar compounds including amino acids, polyamines, anticancer drugs, and nucleosides could be successfully analyzed. Additionally, nonpolar compounds including steroids could be detected with high resolution and sensitivity. Compared with conventional matrices, graphene exhibited high desorption/ionization efficiency for nonpolar compounds. The graphene matrix functions as a substrate to trap analytes, and it transfers energy to the analytes upon laser irradiation, which allowed the analytes to be readily desorbed/ionized and interference from intrinsic matrix ions to be eliminated. The use of graphene as a matrix avoided fragmentation of the analytes and provided good reproducibility and high salt tolerance, underscoring the potential application of graphene as a matrix for MALDI-MS analysis of practical samples in complex sample matrices. We also demonstrated that the use of graphene as an adsorbent for the solid-phase extraction of squalene could greatly improve the detection limit. This work not only opens a new field for applications of graphene, but also offers a new technique for high-speed analysis of low-molecular-weight compounds in areas such as metabolism research and natural products characterization. PMID:20565059

  19. Microstructural evolution and magnetic properties of ultrafine solute-atom particles formed in a Cu75-Ni20-Fe5 alloy on isothermal annealing

    NASA Astrophysics Data System (ADS)

    Kim, Jun-Seop; Takeda, Mahoto; Bae, Dong-Sik

    2016-12-01

    Microstructural features strongly affect magnetism in nano-granular magnetic materials. In the present work we have investigated the relationship between the magnetic properties and the self-organized microstructure formed in a Cu75-Ni20-Fe5 alloy comprising ferromagnetic elements and copper atoms. High resolution transmission electron microscopy (HRTEM) observations showed that, on isothermal annealing at 873 K, nano-scale solute (Fe,Ni)-rich clusters initially formed with a random distribution in the Cu-rich matrix. Superconducting quantum interference device (SQUID) measurements revealed that these ultrafine solute clusters exhibited super-spin-glass and superparamagnetic states. On further isothermal annealing the precipitates evolved into cubic or rectangular ferromagnetic particles aligned along the <100> directions of the copper-rich matrix. Electron energy-band calculations based on the first-principles Korringa-Kohn-Rostoker (KKR) method were also performed to investigate both the electronic structure and the magnetic properties of the alloy. Using compositions obtained experimentally by scanning transmission electron microscopy-energy-dispersive X-ray spectroscopy (STEM-EDS) analysis as input, the KKR calculations confirmed that ferromagnetic precipitates (with a moment of 1.07 μB per atom) formed after annealing for 2 × 10^4 min. Magneto-thermogravimetric (MTG) analysis determined, with high sensitivity, the Curie temperatures and the above-room-temperature magnetic susceptibility of samples containing nano-scale ferromagnetic particles.

  20. Scattering Properties of Heterogeneous Mineral Particles with Absorbing Inclusions

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.

    2015-01-01

    We analyze the results of numerically exact computer modeling of the scattering and absorption properties of randomly oriented, polydisperse, heterogeneous particles obtained by placing microscopic absorbing grains randomly on the surfaces of much larger spherical mineral hosts or by embedding them randomly inside the hosts. These computations are paralleled by those for heterogeneous particles obtained by fully encapsulating fractal-like absorbing clusters in the mineral hosts. All computations are performed using the superposition T-matrix method. In the case of randomly distributed inclusions, the results are compared with the outcome of Lorenz-Mie computations for an external mixture of the mineral hosts and absorbing grains. We conclude that internal aggregation can strongly affect both the integral radiometric and the differential scattering characteristics of the heterogeneous particle mixtures.
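
    As a point of reference for the external-mixture comparison mentioned above, the sketch below combines pre-computed single-particle scattering and absorption cross sections and phase functions for mineral hosts and absorbing grains into number-concentration-weighted mixture averages; the input values are placeholders, and the superposition T-matrix and Lorenz-Mie computations themselves are not reproduced here.

        import numpy as np

        def external_mixture(number_concs, c_sca, c_abs, phase_functions):
            # Ensemble-average cross sections per particle and the scattering-weighted phase function
            n = np.asarray(number_concs, float)
            c_sca, c_abs = np.asarray(c_sca, float), np.asarray(c_abs, float)
            P = np.asarray(phase_functions, float)                # shape: (n_species, n_angles)
            csca_mix = np.sum(n * c_sca) / n.sum()
            cabs_mix = np.sum(n * c_abs) / n.sum()
            p_mix = (n * c_sca) @ P / np.sum(n * c_sca)           # mixture phase function
            ssa_mix = csca_mix / (csca_mix + cabs_mix)            # single-scattering albedo
            return csca_mix, cabs_mix, ssa_mix, p_mix

        # Placeholder inputs: large mineral hosts plus many small absorbing grains (cross sections in cm^2)
        csca, cabs, ssa, p = external_mixture(number_concs=[1.0, 50.0],
                                              c_sca=[3.0e-8, 1.0e-11],
                                              c_abs=[1.0e-9, 6.0e-12],
                                              phase_functions=[[0.9, 0.5, 0.2], [0.4, 0.3, 0.3]])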
